Merge branch 'master' of github.com:harijayms/terraform

Scott Nowicki 2017-04-24 18:33:27 -05:00
commit 5d4e0490ae
161 changed files with 16072 additions and 282 deletions

@@ -1,10 +1,21 @@
## 0.9.4 (Unreleased)
BACKWARDS INCOMPATIBILITIES / NOTES:
* provider/template: Fix invalid MIME formatting in `template_cloudinit_config`.
While the change itself is not breaking, the data source may be referenced
in immutable resources (e.g. `aws_launch_configuration`), in which case the
formatting change will trigger recreation [GH-13752]
FEATURES:
* **New Provider:** `opc` - Oracle Public Cloud [GH-13468]
* **New Provider:** `oneandone` [GH-13633]
* **New Data Source:** `aws_ami_ids` [GH-13844]
* **New Data Source:** `aws_ebs_snapshot_ids` [GH-13844]
* **New Data Source:** `aws_kms_alias` [GH-13669]
* **New Data Source:** `aws_kinesis_stream` [GH-13562]
* **New Data Source:** `digitalocean_image` [GH-13787]
* **New Data Source:** `google_compute_network` [GH-12442]
* **New Data Source:** `google_compute_subnetwork` [GH-12442]
* **New Resource:** `local_file` for creating local files (please see the docs for caveats) [GH-12757]
@@ -14,20 +25,35 @@ FEATURES:
* **New Resource:** `alicloud_ess_schedule` [GH-13731]
* **New Resource:** `alicloud_snat_entry` [GH-13731]
* **New Resource:** `alicloud_forward_entry` [GH-13731]
* **New Resource:** `aws_cognito_identity_pool` [GH-13783]
* **New Resource:** `aws_network_interface_attachment` [GH-13861]
* **New Resource:** `github_branch_protection` [GH-10476]
* **New Resource:** `google_bigquery_dataset` [GH-13436]
* **New Interpolation Function:** `coalescelist()` [GH-12537]
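The new `coalescelist()` interpolation returns the first of its arguments that is a non-empty list. A minimal Go sketch of those semantics (the helper name below is illustrative, not Terraform's internal implementation):

```go
package main

import "fmt"

// coalesceList returns the first non-empty list among its arguments,
// mirroring the semantics of the coalescelist() interpolation function.
func coalesceList(lists ...[]string) []string {
	for _, l := range lists {
		if len(l) > 0 {
			return l
		}
	}
	return nil
}

func main() {
	fmt.Println(coalesceList([]string{}, []string{"a", "b"})) // [a b]
}
```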
IMPROVEMENTS:
* helper/schema: Disallow validation+diff suppression on computed fields [GH-13878]
* config: The interpolation function `cidrhost` now accepts a negative host number to count backwards from the end of the range [GH-13765]
* config: New interpolation function `matchkeys` for using values from one list to filter corresponding values from another list using a matching set. [GH-13847]
* state/remote/swift: Support Openstack request logging [GH-13583]
* provider/aws: Add an option to skip getting the supported EC2 platforms [GH-13672]
* provider/aws: Add `name_prefix` support to `aws_cloudwatch_log_group` [GH-13273]
* provider/aws: Add `bucket_prefix` to `aws_s3_bucket` [GH-13274]
* provider/aws: Add replica_source_db to the aws_db_instance datasource [GH-13842]
* provider/aws: Add IPv6 outputs to aws_subnet datasource [GH-13841]
* provider/aws: Exercise SecondaryPrivateIpAddressCount for network interface [GH-10590]
* provider/aws: Expose execution ARN + invoke URL for APIG deployment [GH-13889]
* provider/aws: Expose invoke ARN from Lambda function (for API Gateway) [GH-13890]
* provider/aws: Add tagging support to the 'aws_lambda_function' resource [GH-13873]
* provider/aws: Validate WAF metric names [GH-13885]
* provider/aws: Allow AWS Subnet to change IPv6 CIDR Block without ForceNew [GH-13909]
* provider/azurerm: VM Scale Sets - import support [GH-13464]
* provider/azurerm: Allow Azure China region support [GH-13767]
* provider/digitalocean: Export droplet prices [GH-13720]
* provider/fastly: Add support for GCS logging [GH-13553]
* provider/google: `google_compute_address` and `google_compute_global_address` are now importable [GH-13270]
* provider/google: `google_compute_network` is now importable [GH-13834]
* provider/vault: `vault_generic_secret` resource can now optionally detect drift if it has appropriate access [GH-11776]
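The negative-host behaviour added to `cidrhost` can be sketched in a few lines. `cidrHost` below is an illustrative re-implementation, not Terraform's actual helper, and performs no range validation:

```go
package main

import (
	"fmt"
	"math/big"
	"net"
)

// cidrHost computes host address hostNum within prefix. A negative
// hostNum counts backwards from the end of the range, so -1 is the
// last address in the block.
func cidrHost(prefix string, hostNum int) (net.IP, error) {
	_, network, err := net.ParseCIDR(prefix)
	if err != nil {
		return nil, err
	}
	ones, bits := network.Mask.Size()
	n := big.NewInt(int64(hostNum))
	if hostNum < 0 {
		// Add the range size so -1 lands on the last address.
		n.Add(n, new(big.Int).Lsh(big.NewInt(1), uint(bits-ones)))
	}
	sum := new(big.Int).Add(new(big.Int).SetBytes(network.IP), n)
	ip := make(net.IP, len(network.IP))
	b := sum.Bytes()
	copy(ip[len(ip)-len(b):], b)
	return ip, nil
}

func main() {
	ip, _ := cidrHost("10.0.0.0/24", -1)
	fmt.Println(ip) // 10.0.0.255
}
```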
BUG FIXES:
@@ -38,21 +64,31 @@ BUG FIXES:
* provider/alicloud: Fix allocate public ip error [GH-13268]
* provider/alicloud: alicloud_security_group_rule: check pointer before using it [GH-13731]
* provider/alicloud: alicloud_instance: fix bug where ecs internet_max_bandwidth_out could not be set to zero [GH-13731]
* provider/aws: Allow force-destroying `aws_route53_zone` which has trailing dot [GH-12421]
* provider/aws: Allow GovCloud KMS ARNs to pass validation in `kms_key_id` attributes [GH-13699]
* provider/aws: Changing aws_opsworks_instance should ForceNew [GH-13839]
* provider/aws: Fix DB Parameter Group Name [GH-13279]
* provider/aws: Fix issue importing some Security Groups and Rules based on rule structure [GH-13630]
* provider/aws: Fix issue for cross account IAM role with `aws_lambda_permission` [GH-13865]
* provider/aws: Fix WAF IPSet descriptors removal on update [GH-13766]
* provider/aws: Increase default number of retries from 11 to 25 [GH-13673]
* provider/aws: Use mutex & retry for WAF change operations [GH-13656]
* provider/aws: Remove aws_vpc_dhcp_options if not found [GH-13610]
* provider/aws: Remove aws_network_acl_rule if not found [GH-13608]
* provider/aws: Adding support for ipv6 to aws_subnets needs migration [GH-13876]
* provider/azurerm: azurerm_redis_cache resource missing hostname [GH-13650]
* provider/azurerm: Locking around Network Security Group / Subnets [GH-13637]
* provider/azurerm: Locking route table on subnet create/delete [GH-13791]
* provider/azurerm: VM's - fixes a bug where ssh_keys could contain a null entry [GH-13755]
* provider/azurerm: fixing a bug refreshing the `azurerm_redis_cache` [GH-13899]
* provider/fastly: Fix issue with using 0 for `default_ttl` [GH-13648]
* provider/fastly: Add ability to associate a healthcheck to a backend [GH-13539]
* provider/google: Stop setting the id when project creation fails [GH-13644]
* provider/google: Make ports in resource_compute_forwarding_rule ForceNew [GH-13833]
* provider/logentries: Refresh from state when resources not found [GH-13810]
* provider/newrelic: newrelic_alert_condition - `condition_scope` must be `application` or `instance` [GH-12972]
* provider/opc: fixed issue with unqualifying nats [GH-13826]
* provider/opc: Fix instance label if unset [GH-13846]
* provider/openstack: Fix updating Ports [GH-13604]
* provider/rabbitmq: Allow users without tags [GH-13798]

@@ -28,6 +28,7 @@ import (
"github.com/aws/aws-sdk-go/service/codecommit"
"github.com/aws/aws-sdk-go/service/codedeploy"
"github.com/aws/aws-sdk-go/service/codepipeline"
"github.com/aws/aws-sdk-go/service/cognitoidentity"
"github.com/aws/aws-sdk-go/service/configservice"
"github.com/aws/aws-sdk-go/service/databasemigrationservice"
"github.com/aws/aws-sdk-go/service/directoryservice"
@@ -111,6 +112,7 @@ type AWSClient struct {
cloudwatchconn *cloudwatch.CloudWatch
cloudwatchlogsconn *cloudwatchlogs.CloudWatchLogs
cloudwatcheventsconn *cloudwatchevents.CloudWatchEvents
cognitoconn *cognitoidentity.CognitoIdentity
configconn *configservice.ConfigService
dmsconn *databasemigrationservice.DatabaseMigrationService
dsconn *directoryservice.DirectoryService
@@ -306,6 +308,7 @@ func (c *Config) Client() (interface{}, error) {
client.codebuildconn = codebuild.New(sess)
client.codedeployconn = codedeploy.New(sess)
client.configconn = configservice.New(sess)
client.cognitoconn = cognitoidentity.New(sess)
client.dmsconn = databasemigrationservice.New(sess)
client.codepipelineconn = codepipeline.New(sess)
client.dsconn = directoryservice.New(sess)

@@ -0,0 +1,112 @@
package aws
import (
"fmt"
"log"
"regexp"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/hashicorp/terraform/helper/hashcode"
"github.com/hashicorp/terraform/helper/schema"
)
func dataSourceAwsAmiIds() *schema.Resource {
return &schema.Resource{
Read: dataSourceAwsAmiIdsRead,
Schema: map[string]*schema.Schema{
"filter": dataSourceFiltersSchema(),
"executable_users": {
Type: schema.TypeList,
Optional: true,
ForceNew: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
"name_regex": {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
ValidateFunc: validateNameRegex,
},
"owners": {
Type: schema.TypeList,
Optional: true,
ForceNew: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
"tags": dataSourceTagsSchema(),
"ids": &schema.Schema{
Type: schema.TypeSet,
Computed: true,
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
},
},
}
}
func dataSourceAwsAmiIdsRead(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).ec2conn
executableUsers, executableUsersOk := d.GetOk("executable_users")
filters, filtersOk := d.GetOk("filter")
nameRegex, nameRegexOk := d.GetOk("name_regex")
owners, ownersOk := d.GetOk("owners")
if !executableUsersOk && !filtersOk && !nameRegexOk && !ownersOk {
return fmt.Errorf("One of executable_users, filter, name_regex, or owners must be assigned")
}
params := &ec2.DescribeImagesInput{}
if executableUsersOk {
params.ExecutableUsers = expandStringList(executableUsers.([]interface{}))
}
if filtersOk {
params.Filters = buildAwsDataSourceFilters(filters.(*schema.Set))
}
if ownersOk {
o := expandStringList(owners.([]interface{}))
if len(o) > 0 {
params.Owners = o
}
}
resp, err := conn.DescribeImages(params)
if err != nil {
return err
}
var filteredImages []*ec2.Image
imageIds := make([]string, 0)
if nameRegexOk {
r := regexp.MustCompile(nameRegex.(string))
for _, image := range resp.Images {
// Check for a very rare case where the response would include no
// image name. No name means nothing to attempt a match against,
// therefore we are skipping such image.
if image.Name == nil || *image.Name == "" {
log.Printf("[WARN] Unable to find AMI name to match against "+
"for image ID %q owned by %q, nothing to do.",
*image.ImageId, *image.OwnerId)
continue
}
if r.MatchString(*image.Name) {
filteredImages = append(filteredImages, image)
}
}
} else {
filteredImages = resp.Images[:]
}
for _, image := range filteredImages {
imageIds = append(imageIds, *image.ImageId)
}
d.SetId(fmt.Sprintf("%d", hashcode.String(params.String())))
d.Set("ids", imageIds)
return nil
}

@@ -0,0 +1,58 @@
package aws
import (
"testing"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccDataSourceAwsAmiIds_basic(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsAmiIdsConfig_basic,
Check: resource.ComposeTestCheckFunc(
testAccCheckAwsAmiDataSourceID("data.aws_ami_ids.ubuntu"),
),
},
},
})
}
func TestAccDataSourceAwsAmiIds_empty(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsAmiIdsConfig_empty,
Check: resource.ComposeTestCheckFunc(
testAccCheckAwsAmiDataSourceID("data.aws_ami_ids.empty"),
resource.TestCheckResourceAttr("data.aws_ami_ids.empty", "ids.#", "0"),
),
},
},
})
}
const testAccDataSourceAwsAmiIdsConfig_basic = `
data "aws_ami_ids" "ubuntu" {
owners = ["099720109477"]
filter {
name = "name"
values = ["ubuntu/images/ubuntu-*-*-amd64-server-*"]
}
}
`
const testAccDataSourceAwsAmiIdsConfig_empty = `
data "aws_ami_ids" "empty" {
filter {
name = "name"
values = []
}
}
`

@@ -188,6 +188,11 @@ func dataSourceAwsDbInstance() *schema.Resource {
Computed: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
"replicate_source_db": {
Type: schema.TypeString,
Computed: true,
},
},
}
}
@@ -271,6 +276,7 @@ func dataSourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error
d.Set("storage_encrypted", dbInstance.StorageEncrypted)
d.Set("storage_type", dbInstance.StorageType)
d.Set("timezone", dbInstance.Timezone)
d.Set("replicate_source_db", dbInstance.ReadReplicaSourceDBInstanceIdentifier)
var vpcSecurityGroups []string
for _, v := range dbInstance.VpcSecurityGroups {

@@ -0,0 +1,78 @@
package aws
import (
"fmt"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/hashicorp/terraform/helper/hashcode"
"github.com/hashicorp/terraform/helper/schema"
)
func dataSourceAwsEbsSnapshotIds() *schema.Resource {
return &schema.Resource{
Read: dataSourceAwsEbsSnapshotIdsRead,
Schema: map[string]*schema.Schema{
"filter": dataSourceFiltersSchema(),
"owners": {
Type: schema.TypeList,
Optional: true,
ForceNew: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
"restorable_by_user_ids": {
Type: schema.TypeList,
Optional: true,
ForceNew: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
"tags": dataSourceTagsSchema(),
"ids": &schema.Schema{
Type: schema.TypeSet,
Computed: true,
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
},
},
}
}
func dataSourceAwsEbsSnapshotIdsRead(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).ec2conn
restorableUsers, restorableUsersOk := d.GetOk("restorable_by_user_ids")
filters, filtersOk := d.GetOk("filter")
owners, ownersOk := d.GetOk("owners")
if !restorableUsersOk && !filtersOk && !ownersOk {
return fmt.Errorf("One of filter, restorable_by_user_ids, or owners must be assigned")
}
params := &ec2.DescribeSnapshotsInput{}
if restorableUsersOk {
params.RestorableByUserIds = expandStringList(restorableUsers.([]interface{}))
}
if filtersOk {
params.Filters = buildAwsDataSourceFilters(filters.(*schema.Set))
}
if ownersOk {
params.OwnerIds = expandStringList(owners.([]interface{}))
}
resp, err := conn.DescribeSnapshots(params)
if err != nil {
return err
}
snapshotIds := make([]string, 0)
for _, snapshot := range resp.Snapshots {
snapshotIds = append(snapshotIds, *snapshot.SnapshotId)
}
d.SetId(fmt.Sprintf("%d", hashcode.String(params.String())))
d.Set("ids", snapshotIds)
return nil
}
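Both new `_ids` data sources set their ID by hashing the stringified request parameters, so the same query always yields the same data-source ID. `helper/hashcode.String` is at its core a CRC32 checksum coerced to a non-negative int; a sketch of that pattern:

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// hashcodeString sketches the helper/hashcode.String pattern used by
// the data sources above: checksum the stringified request so the ID
// is stable and deterministic.
func hashcodeString(s string) int {
	v := int(crc32.ChecksumIEEE([]byte(s)))
	if v >= 0 {
		return v
	}
	if -v >= 0 {
		return -v
	}
	// v == MinInt, whose negation overflows; fall back to 0.
	return 0
}

func main() {
	fmt.Println(hashcodeString("DescribeSnapshotsInput{Owners:[self]}"))
}
```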

@@ -0,0 +1,59 @@
package aws
import (
"testing"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccDataSourceAwsEbsSnapshotIds_basic(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsEbsSnapshotIdsConfig_basic,
Check: resource.ComposeTestCheckFunc(
testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ebs_snapshot_ids.test"),
),
},
},
})
}
func TestAccDataSourceAwsEbsSnapshotIds_empty(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsEbsSnapshotIdsConfig_empty,
Check: resource.ComposeTestCheckFunc(
testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ebs_snapshot_ids.empty"),
resource.TestCheckResourceAttr("data.aws_ebs_snapshot_ids.empty", "ids.#", "0"),
),
},
},
})
}
const testAccDataSourceAwsEbsSnapshotIdsConfig_basic = `
resource "aws_ebs_volume" "test" {
availability_zone = "us-west-2a"
size = 40
}
resource "aws_ebs_snapshot" "test" {
volume_id = "${aws_ebs_volume.test.id}"
}
data "aws_ebs_snapshot_ids" "test" {
owners = ["self"]
}
`
const testAccDataSourceAwsEbsSnapshotIdsConfig_empty = `
data "aws_ebs_snapshot_ids" "empty" {
owners = ["000000000000"]
}
`

@@ -44,7 +44,7 @@ func testAccCheckAwsEbsSnapshotDataSourceID(n string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Can't find snapshot data source: %s", n)
}
if rs.Primary.ID == "" {

@@ -14,19 +14,25 @@ func dataSourceAwsSubnet() *schema.Resource {
Read: dataSourceAwsSubnetRead,
Schema: map[string]*schema.Schema{
"availability_zone": {
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"cidr_block": {
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"ipv6_cidr_block": {
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"default_for_az": {
Type: schema.TypeBool,
Optional: true,
Computed: true,
@@ -34,13 +40,13 @@ func dataSourceAwsSubnet() *schema.Resource {
"filter": ec2CustomFiltersSchema(),
"id": {
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"state": {
Type: schema.TypeString,
Optional: true,
Computed: true,
@@ -48,11 +54,26 @@ func dataSourceAwsSubnet() *schema.Resource {
"tags": tagsSchemaComputed(),
"vpc_id": {
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"assign_ipv6_address_on_creation": {
Type: schema.TypeBool,
Computed: true,
},
"map_public_ip_on_launch": {
Type: schema.TypeBool,
Computed: true,
},
"ipv6_cidr_block_association_id": {
Type: schema.TypeString,
Computed: true,
},
},
}
}
@@ -76,15 +97,22 @@ func dataSourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error {
defaultForAzStr = "true"
}
filters := map[string]string{
"availabilityZone": d.Get("availability_zone").(string),
"defaultForAz": defaultForAzStr,
"state": d.Get("state").(string),
"vpc-id": d.Get("vpc_id").(string),
}
if v, ok := d.GetOk("cidr_block"); ok {
filters["cidrBlock"] = v.(string)
}
if v, ok := d.GetOk("ipv6_cidr_block"); ok {
filters["ipv6-cidr-block-association.ipv6-cidr-block"] = v.(string)
}
req.Filters = buildEC2AttributeFilterList(filters)
req.Filters = append(req.Filters, buildEC2TagFilterList(
tagsFromMap(d.Get("tags").(map[string]interface{})),
)...)
@@ -118,6 +146,15 @@ func dataSourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error {
d.Set("default_for_az", subnet.DefaultForAz)
d.Set("state", subnet.State)
d.Set("tags", tagsToMap(subnet.Tags))
d.Set("assign_ipv6_address_on_creation", subnet.AssignIpv6AddressOnCreation)
d.Set("map_public_ip_on_launch", subnet.MapPublicIpOnLaunch)
for _, a := range subnet.Ipv6CidrBlockAssociationSet {
if *a.Ipv6CidrBlockState.State == "associated" { //we can only ever have 1 IPv6 block associated at once
d.Set("ipv6_cidr_block_association_id", a.AssociationId)
d.Set("ipv6_cidr_block", a.Ipv6CidrBlock)
}
}
return nil
}

@@ -9,7 +9,7 @@ import (
"github.com/hashicorp/terraform/terraform"
)
func TestAccDataSourceAwsSubnet(t *testing.T) {
rInt := acctest.RandIntRange(0, 256)
resource.Test(t, resource.TestCase{
@@ -17,7 +17,7 @@ func TestAccDataSourceAwsSubnet_basic(t *testing.T) {
Providers: testAccProviders,
CheckDestroy: testAccCheckVpcDestroy,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsSubnetConfig(rInt),
Check: resource.ComposeTestCheckFunc(
testAccDataSourceAwsSubnetCheck("data.aws_subnet.by_id", rInt),
@@ -31,6 +31,48 @@ func TestAccDataSourceAwsSubnet_basic(t *testing.T) {
})
}
func TestAccDataSourceAwsSubnetIpv6ByIpv6Filter(t *testing.T) {
rInt := acctest.RandIntRange(0, 256)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsSubnetConfigIpv6(rInt),
},
{
Config: testAccDataSourceAwsSubnetConfigIpv6WithDataSourceFilter(rInt),
Check: resource.ComposeAggregateTestCheckFunc(
resource.TestCheckResourceAttrSet(
"data.aws_subnet.by_ipv6_cidr", "ipv6_cidr_block_association_id"),
resource.TestCheckResourceAttrSet(
"data.aws_subnet.by_ipv6_cidr", "ipv6_cidr_block"),
),
},
},
})
}
func TestAccDataSourceAwsSubnetIpv6ByIpv6CidrBlock(t *testing.T) {
rInt := acctest.RandIntRange(0, 256)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
Steps: []resource.TestStep{
{
Config: testAccDataSourceAwsSubnetConfigIpv6(rInt),
},
{
Config: testAccDataSourceAwsSubnetConfigIpv6WithDataSourceIpv6CidrBlock(rInt),
Check: resource.ComposeAggregateTestCheckFunc(
resource.TestCheckResourceAttrSet(
"data.aws_subnet.by_ipv6_cidr", "ipv6_cidr_block_association_id"),
),
},
},
})
}
func testAccDataSourceAwsSubnetCheck(name string, rInt int) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[name]
@@ -103,6 +145,7 @@ func testAccDataSourceAwsSubnetConfig(rInt int) string {
}
}
data "aws_subnet" "by_id" {
id = "${aws_subnet.test.id}"
}
@@ -129,3 +172,86 @@ func testAccDataSourceAwsSubnetConfig(rInt int) string {
}
`, rInt, rInt, rInt)
}
func testAccDataSourceAwsSubnetConfigIpv6(rInt int) string {
return fmt.Sprintf(`
resource "aws_vpc" "test" {
cidr_block = "172.%d.0.0/16"
assign_generated_ipv6_cidr_block = true
tags {
Name = "terraform-testacc-subnet-data-source-ipv6"
}
}
resource "aws_subnet" "test" {
vpc_id = "${aws_vpc.test.id}"
cidr_block = "172.%d.123.0/24"
availability_zone = "us-west-2a"
ipv6_cidr_block = "${cidrsubnet(aws_vpc.test.ipv6_cidr_block, 8, 1)}"
tags {
Name = "terraform-testacc-subnet-data-sourceipv6-%d"
}
}
`, rInt, rInt, rInt)
}
func testAccDataSourceAwsSubnetConfigIpv6WithDataSourceFilter(rInt int) string {
return fmt.Sprintf(`
resource "aws_vpc" "test" {
cidr_block = "172.%d.0.0/16"
assign_generated_ipv6_cidr_block = true
tags {
Name = "terraform-testacc-subnet-data-source-ipv6"
}
}
resource "aws_subnet" "test" {
vpc_id = "${aws_vpc.test.id}"
cidr_block = "172.%d.123.0/24"
availability_zone = "us-west-2a"
ipv6_cidr_block = "${cidrsubnet(aws_vpc.test.ipv6_cidr_block, 8, 1)}"
tags {
Name = "terraform-testacc-subnet-data-sourceipv6-%d"
}
}
data "aws_subnet" "by_ipv6_cidr" {
filter {
name = "ipv6-cidr-block-association.ipv6-cidr-block"
values = ["${aws_subnet.test.ipv6_cidr_block}"]
}
}
`, rInt, rInt, rInt)
}
func testAccDataSourceAwsSubnetConfigIpv6WithDataSourceIpv6CidrBlock(rInt int) string {
return fmt.Sprintf(`
resource "aws_vpc" "test" {
cidr_block = "172.%d.0.0/16"
assign_generated_ipv6_cidr_block = true
tags {
Name = "terraform-testacc-subnet-data-source-ipv6"
}
}
resource "aws_subnet" "test" {
vpc_id = "${aws_vpc.test.id}"
cidr_block = "172.%d.123.0/24"
availability_zone = "us-west-2a"
ipv6_cidr_block = "${cidrsubnet(aws_vpc.test.ipv6_cidr_block, 8, 1)}"
tags {
Name = "terraform-testacc-subnet-data-sourceipv6-%d"
}
}
data "aws_subnet" "by_ipv6_cidr" {
ipv6_cidr_block = "${aws_subnet.test.ipv6_cidr_block}"
}
`, rInt, rInt, rInt)
}
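The IPv6 test configurations above carve a subnet out of the VPC's generated block with `cidrsubnet(prefix, newbits, netnum)`. An illustrative Go sketch of that calculation (not Terraform's implementation; no overflow checks beyond the bit-count test):

```go
package main

import (
	"fmt"
	"math/big"
	"net"
)

// cidrSubnet extends prefix by newBits and fills the new bits with
// netNum, mirroring the cidrsubnet() interpolation used in the test
// configs above.
func cidrSubnet(prefix string, newBits, netNum int) (string, error) {
	_, network, err := net.ParseCIDR(prefix)
	if err != nil {
		return "", err
	}
	ones, bits := network.Mask.Size()
	newOnes := ones + newBits
	if newOnes > bits {
		return "", fmt.Errorf("insufficient address space for %d new bits", newBits)
	}
	// Shift netNum into position just above the remaining host bits.
	sum := new(big.Int).SetBytes(network.IP)
	sum.Add(sum, new(big.Int).Lsh(big.NewInt(int64(netNum)), uint(bits-newOnes)))
	ip := make(net.IP, len(network.IP))
	b := sum.Bytes()
	copy(ip[len(ip)-len(b):], b)
	return fmt.Sprintf("%s/%d", ip, newOnes), nil
}

func main() {
	s, _ := cidrSubnet("172.16.0.0/16", 8, 1)
	fmt.Println(s) // 172.16.1.0/24
}
```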

@@ -111,11 +111,11 @@ func ec2CustomFiltersSchema() *schema.Schema {
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
},
"values": {
Type: schema.TypeSet,
Required: true,
Elem: &schema.Schema{

@@ -0,0 +1,30 @@
package aws
import (
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccAWSCognitoIdentityPool_importBasic(t *testing.T) {
resourceName := "aws_cognito_identity_pool.main"
rName := acctest.RandString(10)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSAPIGatewayAccountDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSCognitoIdentityPoolConfig_basic(rName),
},
{
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
},
},
})
}

@@ -50,36 +50,67 @@ func resourceAwsSecurityGroupImportState(
}
func resourceAwsSecurityGroupImportStatePerm(sg *ec2.SecurityGroup, ruleType string, perm *ec2.IpPermission) ([]*schema.ResourceData, error) {
/*
Create a separate Security Group Rule for:
* The collection of IpRanges (cidr_blocks)
* The collection of Ipv6Ranges (ipv6_cidr_blocks)
* Each individual UserIdGroupPair (source_security_group_id)
If, for example, a security group has rules for:
* 2 IpRanges
* 2 Ipv6Ranges
* 2 UserIdGroupPairs
This would generate 4 security group rules:
* 1 for the collection of IpRanges
* 1 for the collection of Ipv6Ranges
* 1 for the first UserIdGroupPair
* 1 for the second UserIdGroupPair
*/
var result []*schema.ResourceData
if perm.IpRanges != nil {
p := &ec2.IpPermission{
FromPort: perm.FromPort,
IpProtocol: perm.IpProtocol,
PrefixListIds: perm.PrefixListIds,
ToPort: perm.ToPort,
IpRanges: perm.IpRanges,
}
r, err := resourceAwsSecurityGroupImportStatePermPair(sg, ruleType, p)
if err != nil {
return nil, err
}
result = append(result, r)
}
if perm.Ipv6Ranges != nil {
p := &ec2.IpPermission{
FromPort: perm.FromPort,
IpProtocol: perm.IpProtocol,
PrefixListIds: perm.PrefixListIds,
ToPort: perm.ToPort,
Ipv6Ranges: perm.Ipv6Ranges,
}
r, err := resourceAwsSecurityGroupImportStatePermPair(sg, ruleType, p)
if err != nil {
return nil, err
}
result = append(result, r)
}
if len(perm.UserIdGroupPairs) > 0 {
for _, pair := range perm.UserIdGroupPairs {
p := &ec2.IpPermission{
FromPort:         perm.FromPort,
IpProtocol:       perm.IpProtocol,
PrefixListIds:    perm.PrefixListIds,
ToPort:           perm.ToPort,
UserIdGroupPairs: []*ec2.UserIdGroupPair{pair},
}
if perm.Ipv6Ranges != nil {
p.Ipv6Ranges = perm.Ipv6Ranges
}
if perm.IpRanges != nil {
p.IpRanges = perm.IpRanges
}
r, err := resourceAwsSecurityGroupImportStatePermPair(sg, ruleType, p)
if err != nil {
return nil, err

@@ -101,3 +101,59 @@ func TestAccAWSSecurityGroup_importSourceSecurityGroup(t *testing.T) {
},
})
}
func TestAccAWSSecurityGroup_importIPRangeAndSecurityGroupWithSameRules(t *testing.T) {
checkFn := func(s []*terraform.InstanceState) error {
// Expect 4: group, 3 rules
if len(s) != 4 {
return fmt.Errorf("expected 4 states: %#v", s)
}
return nil
}
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSSecurityGroupConfig_importIPRangeAndSecurityGroupWithSameRules,
},
{
ResourceName: "aws_security_group.test_group_1",
ImportState: true,
ImportStateCheck: checkFn,
},
},
})
}
func TestAccAWSSecurityGroup_importIPRangesWithSameRules(t *testing.T) {
checkFn := func(s []*terraform.InstanceState) error {
// Expect 3: group, 2 rules
if len(s) != 3 {
return fmt.Errorf("expected 3 states: %#v", s)
}
return nil
}
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSSecurityGroupDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSSecurityGroupConfig_importIPRangesWithSameRules,
},
{
ResourceName: "aws_security_group.test_group_1",
ImportState: true,
ImportStateCheck: checkFn,
},
},
})
}
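The state counts asserted above follow directly from the splitting rule documented in `resourceAwsSecurityGroupImportStatePerm`: one rule for the collection of IpRanges, one for the collection of Ipv6Ranges, and one per UserIdGroupPair, plus one state for the group itself. A small sketch of that arithmetic (the function below is illustrative, not part of the provider):

```go
package main

import "fmt"

// expectedImportStates counts the instance states an import should
// produce for one permission: the group itself, one rule if any
// IpRanges exist, one if any Ipv6Ranges exist, and one per
// UserIdGroupPair.
func expectedImportStates(ipRanges, ipv6Ranges, userIDGroupPairs int) int {
	n := 1 // the security group itself
	if ipRanges > 0 {
		n++
	}
	if ipv6Ranges > 0 {
		n++
	}
	n += userIDGroupPairs
	return n
}

func main() {
	// The doc comment's example: 2 IpRanges + 2 Ipv6Ranges +
	// 2 UserIdGroupPairs -> 4 rules, 5 states with the group.
	fmt.Println(expectedImportStates(2, 2, 2)) // 5
}
```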

@@ -163,6 +163,7 @@ func Provider() terraform.ResourceProvider {
"aws_alb": dataSourceAwsAlb(),
"aws_alb_listener": dataSourceAwsAlbListener(),
"aws_ami": dataSourceAwsAmi(),
"aws_ami_ids": dataSourceAwsAmiIds(),
"aws_autoscaling_groups": dataSourceAwsAutoscalingGroups(),
"aws_availability_zone": dataSourceAwsAvailabilityZone(),
"aws_availability_zones": dataSourceAwsAvailabilityZones(),
@@ -172,6 +173,7 @@ func Provider() terraform.ResourceProvider {
"aws_cloudformation_stack": dataSourceAwsCloudFormationStack(),
"aws_db_instance": dataSourceAwsDbInstance(),
"aws_ebs_snapshot": dataSourceAwsEbsSnapshot(),
"aws_ebs_snapshot_ids": dataSourceAwsEbsSnapshotIds(),
"aws_ebs_volume": dataSourceAwsEbsVolume(),
"aws_ecs_cluster": dataSourceAwsEcsCluster(),
"aws_ecs_container_definition": dataSourceAwsEcsContainerDefinition(),
@@ -258,6 +260,7 @@ func Provider() terraform.ResourceProvider {
"aws_config_configuration_recorder": resourceAwsConfigConfigurationRecorder(),
"aws_config_configuration_recorder_status": resourceAwsConfigConfigurationRecorderStatus(),
"aws_config_delivery_channel": resourceAwsConfigDeliveryChannel(),
"aws_cognito_identity_pool": resourceAwsCognitoIdentityPool(),
"aws_autoscaling_lifecycle_hook": resourceAwsAutoscalingLifecycleHook(),
"aws_cloudwatch_metric_alarm": resourceAwsCloudWatchMetricAlarm(),
"aws_codedeploy_app": resourceAwsCodeDeployApp(),
@@ -365,6 +368,7 @@ func Provider() terraform.ResourceProvider {
"aws_default_route_table": resourceAwsDefaultRouteTable(),
"aws_network_acl_rule": resourceAwsNetworkAclRule(),
"aws_network_interface": resourceAwsNetworkInterface(),
"aws_network_interface_attachment": resourceAwsNetworkInterfaceAttachment(),
"aws_opsworks_application": resourceAwsOpsworksApplication(),
"aws_opsworks_stack": resourceAwsOpsworksStack(),
"aws_opsworks_java_app_layer": resourceAwsOpsworksJavaAppLayer(),

@@ -54,6 +54,16 @@ func resourceAwsApiGatewayDeployment() *schema.Resource {
Type: schema.TypeString,
Computed: true,
},
"invoke_url": {
Type: schema.TypeString,
Computed: true,
},
"execution_arn": {
Type: schema.TypeString,
Computed: true,
},
},
}
}
@@ -90,8 +100,9 @@ func resourceAwsApiGatewayDeploymentRead(d *schema.ResourceData, meta interface{
conn := meta.(*AWSClient).apigateway
log.Printf("[DEBUG] Reading API Gateway Deployment %s", d.Id())
restApiId := d.Get("rest_api_id").(string)
out, err := conn.GetDeployment(&apigateway.GetDeploymentInput{
RestApiId:    aws.String(restApiId),
DeploymentId: aws.String(d.Id()),
})
if err != nil {
@@ -104,6 +115,18 @@ func resourceAwsApiGatewayDeploymentRead(d *schema.ResourceData, meta interface{
log.Printf("[DEBUG] Received API Gateway Deployment: %s", out)
d.Set("description", out.Description)
region := meta.(*AWSClient).region
stageName := d.Get("stage_name").(string)
d.Set("invoke_url", buildApiGatewayInvokeURL(restApiId, region, stageName))
accountId := meta.(*AWSClient).accountid
arn, err := buildApiGatewayExecutionARN(restApiId, region, accountId)
if err != nil {
return err
}
d.Set("execution_arn", arn+"/"+stageName)
if err := d.Set("created_date", out.CreatedDate.Format(time.RFC3339)); err != nil {
log.Printf("[DEBUG] Error setting created_date: %s", err)
}
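The two new attributes are assembled from well-known API Gateway formats. A sketch of the shapes involved (`buildApiGatewayInvokeURL` and `buildApiGatewayExecutionARN` live elsewhere in the provider; the functions below are illustrative and assume the standard `aws` partition, not GovCloud or China):

```go
package main

import "fmt"

// invokeURL sketches the stage invoke URL the read function above
// exposes: https://{restApiId}.execute-api.{region}.amazonaws.com/{stage}
func invokeURL(restApiID, region, stageName string) string {
	return fmt.Sprintf("https://%s.execute-api.%s.amazonaws.com/%s",
		restApiID, region, stageName)
}

// executionARN sketches the execution ARN shape, to which the stage
// name is appended as a path segment (as done above with arn+"/"+stageName).
func executionARN(restApiID, region, accountID string) string {
	return fmt.Sprintf("arn:aws:execute-api:%s:%s:%s",
		region, accountID, restApiID)
}

func main() {
	fmt.Println(invokeURL("abc123", "us-east-1", "prod"))
	fmt.Println(executionARN("abc123", "us-east-1", "123456789012") + "/prod")
}
```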

@@ -0,0 +1,238 @@
package aws
import (
"fmt"
"log"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/cognitoidentity"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceAwsCognitoIdentityPool() *schema.Resource {
return &schema.Resource{
Create: resourceAwsCognitoIdentityPoolCreate,
Read: resourceAwsCognitoIdentityPoolRead,
Update: resourceAwsCognitoIdentityPoolUpdate,
Delete: resourceAwsCognitoIdentityPoolDelete,
Importer: &schema.ResourceImporter{
State: schema.ImportStatePassthrough,
},
Schema: map[string]*schema.Schema{
"identity_pool_name": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
ValidateFunc: validateCognitoIdentityPoolName,
},
"cognito_identity_providers": {
Type: schema.TypeSet,
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"client_id": {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validateCognitoIdentityProvidersClientId,
},
"provider_name": {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validateCognitoIdentityProvidersProviderName,
},
"server_side_token_check": {
Type: schema.TypeBool,
Optional: true,
Default: false,
},
},
},
},
"developer_provider_name": {
Type: schema.TypeString,
Optional: true,
ForceNew: true, // Forcing a new resource since it cannot be edited afterwards
ValidateFunc: validateCognitoProviderDeveloperName,
},
"allow_unauthenticated_identities": {
Type: schema.TypeBool,
Optional: true,
Default: false,
},
"openid_connect_provider_arns": {
Type: schema.TypeList,
Optional: true,
Elem: &schema.Schema{
Type: schema.TypeString,
ValidateFunc: validateArn,
},
},
"saml_provider_arns": {
Type: schema.TypeList,
Optional: true,
Elem: &schema.Schema{
Type: schema.TypeString,
ValidateFunc: validateArn,
},
},
"supported_login_providers": {
Type: schema.TypeMap,
Optional: true,
Elem: &schema.Schema{
Type: schema.TypeString,
ValidateFunc: validateCognitoSupportedLoginProviders,
},
},
},
}
}
func resourceAwsCognitoIdentityPoolCreate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).cognitoconn
log.Print("[DEBUG] Creating Cognito Identity Pool")
params := &cognitoidentity.CreateIdentityPoolInput{
IdentityPoolName: aws.String(d.Get("identity_pool_name").(string)),
AllowUnauthenticatedIdentities: aws.Bool(d.Get("allow_unauthenticated_identities").(bool)),
}
if v, ok := d.GetOk("developer_provider_name"); ok {
params.DeveloperProviderName = aws.String(v.(string))
}
if v, ok := d.GetOk("supported_login_providers"); ok {
params.SupportedLoginProviders = expandCognitoSupportedLoginProviders(v.(map[string]interface{}))
}
if v, ok := d.GetOk("cognito_identity_providers"); ok {
params.CognitoIdentityProviders = expandCognitoIdentityProviders(v.(*schema.Set))
}
if v, ok := d.GetOk("saml_provider_arns"); ok {
params.SamlProviderARNs = expandStringList(v.([]interface{}))
}
if v, ok := d.GetOk("openid_connect_provider_arns"); ok {
params.OpenIdConnectProviderARNs = expandStringList(v.([]interface{}))
}
entity, err := conn.CreateIdentityPool(params)
if err != nil {
return fmt.Errorf("Error creating Cognito Identity Pool: %s", err)
}
d.SetId(*entity.IdentityPoolId)
return resourceAwsCognitoIdentityPoolRead(d, meta)
}
func resourceAwsCognitoIdentityPoolRead(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).cognitoconn
log.Printf("[DEBUG] Reading Cognito Identity Pool: %s", d.Id())
ip, err := conn.DescribeIdentityPool(&cognitoidentity.DescribeIdentityPoolInput{
IdentityPoolId: aws.String(d.Id()),
})
if err != nil {
if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "ResourceNotFoundException" {
d.SetId("")
return nil
}
return err
}
d.Set("identity_pool_name", ip.IdentityPoolName)
d.Set("allow_unauthenticated_identities", ip.AllowUnauthenticatedIdentities)
d.Set("developer_provider_name", ip.DeveloperProviderName)
if ip.CognitoIdentityProviders != nil {
if err := d.Set("cognito_identity_providers", flattenCognitoIdentityProviders(ip.CognitoIdentityProviders)); err != nil {
return fmt.Errorf("[DEBUG] Error setting cognito_identity_providers error: %#v", err)
}
}
if ip.OpenIdConnectProviderARNs != nil {
if err := d.Set("openid_connect_provider_arns", flattenStringList(ip.OpenIdConnectProviderARNs)); err != nil {
return fmt.Errorf("[DEBUG] Error setting openid_connect_provider_arns error: %#v", err)
}
}
if ip.SamlProviderARNs != nil {
if err := d.Set("saml_provider_arns", flattenStringList(ip.SamlProviderARNs)); err != nil {
return fmt.Errorf("[DEBUG] Error setting saml_provider_arns error: %#v", err)
}
}
if ip.SupportedLoginProviders != nil {
if err := d.Set("supported_login_providers", flattenCognitoSupportedLoginProviders(ip.SupportedLoginProviders)); err != nil {
return fmt.Errorf("[DEBUG] Error setting supported_login_providers error: %#v", err)
}
}
return nil
}
func resourceAwsCognitoIdentityPoolUpdate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).cognitoconn
log.Print("[DEBUG] Updating Cognito Identity Pool")
params := &cognitoidentity.IdentityPool{
IdentityPoolId: aws.String(d.Id()),
AllowUnauthenticatedIdentities: aws.Bool(d.Get("allow_unauthenticated_identities").(bool)),
IdentityPoolName: aws.String(d.Get("identity_pool_name").(string)),
}
if d.HasChange("developer_provider_name") {
params.DeveloperProviderName = aws.String(d.Get("developer_provider_name").(string))
}
if d.HasChange("cognito_identity_providers") {
params.CognitoIdentityProviders = expandCognitoIdentityProviders(d.Get("cognito_identity_providers").(*schema.Set))
}
if d.HasChange("supported_login_providers") {
params.SupportedLoginProviders = expandCognitoSupportedLoginProviders(d.Get("supported_login_providers").(map[string]interface{}))
}
if d.HasChange("openid_connect_provider_arns") {
params.OpenIdConnectProviderARNs = expandStringList(d.Get("openid_connect_provider_arns").([]interface{}))
}
if d.HasChange("saml_provider_arns") {
params.SamlProviderARNs = expandStringList(d.Get("saml_provider_arns").([]interface{}))
}
_, err := conn.UpdateIdentityPool(params)
if err != nil {
return fmt.Errorf("Error creating Cognito Identity Pool: %s", err)
}
return resourceAwsCognitoIdentityPoolRead(d, meta)
}
func resourceAwsCognitoIdentityPoolDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).cognitoconn
log.Printf("[DEBUG] Deleting Cognito Identity Pool: %s", d.Id())
return resource.Retry(5*time.Minute, func() *resource.RetryError {
_, err := conn.DeleteIdentityPool(&cognitoidentity.DeleteIdentityPoolInput{
IdentityPoolId: aws.String(d.Id()),
})
if err == nil {
return nil
}
return resource.NonRetryableError(err)
})
}


@ -0,0 +1,371 @@
package aws
import (
"errors"
"fmt"
"testing"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/cognitoidentity"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestAccAWSCognitoIdentityPool_basic(t *testing.T) {
name := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)
updatedName := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSCognitoIdentityPoolConfig_basic(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "allow_unauthenticated_identities", "false"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_basic(updatedName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", updatedName)),
),
},
},
})
}
func TestAccAWSCognitoIdentityPool_supportedLoginProviders(t *testing.T) {
name := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSCognitoIdentityPoolConfig_supportedLoginProviders(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "supported_login_providers.graph.facebook.com", "7346241598935555"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_supportedLoginProvidersModified(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "supported_login_providers.graph.facebook.com", "7346241598935552"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "supported_login_providers.accounts.google.com", "123456789012.apps.googleusercontent.com"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_basic(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
),
},
},
})
}
func TestAccAWSCognitoIdentityPool_openidConnectProviderArns(t *testing.T) {
name := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSCognitoIdentityPoolConfig_openidConnectProviderArns(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "openid_connect_provider_arns.#", "1"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_openidConnectProviderArnsModified(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "openid_connect_provider_arns.#", "2"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_basic(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
),
},
},
})
}
func TestAccAWSCognitoIdentityPool_samlProviderArns(t *testing.T) {
name := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSCognitoIdentityPoolConfig_samlProviderArns(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "saml_provider_arns.#", "1"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_samlProviderArnsModified(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "saml_provider_arns.#", "1"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_basic(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckNoResourceAttr("aws_cognito_identity_pool.main", "saml_provider_arns.#"),
),
},
},
})
}
func TestAccAWSCognitoIdentityPool_cognitoIdentityProviders(t *testing.T) {
name := acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSCognitoIdentityPoolConfig_cognitoIdentityProviders(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.66456389.client_id", "7lhlkkfbfb4q5kpp90urffao"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.66456389.provider_name", "cognito-idp.us-east-1.amazonaws.com/us-east-1_Zr231apJu"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.66456389.server_side_token_check", "false"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3571192419.client_id", "7lhlkkfbfb4q5kpp90urffao"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3571192419.provider_name", "cognito-idp.us-east-1.amazonaws.com/us-east-1_Ab129faBb"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3571192419.server_side_token_check", "false"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_cognitoIdentityProvidersModified(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3661724441.client_id", "6lhlkkfbfb4q5kpp90urffae"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3661724441.provider_name", "cognito-idp.us-east-1.amazonaws.com/us-east-1_Zr231apJu"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3661724441.server_side_token_check", "false"),
),
},
{
Config: testAccAWSCognitoIdentityPoolConfig_basic(name),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"),
resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)),
),
},
},
})
}
func testAccCheckAWSCognitoIdentityPoolExists(n string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return errors.New("No Cognito Identity Pool ID is set")
}
conn := testAccProvider.Meta().(*AWSClient).cognitoconn
_, err := conn.DescribeIdentityPool(&cognitoidentity.DescribeIdentityPoolInput{
IdentityPoolId: aws.String(rs.Primary.ID),
})
if err != nil {
return err
}
return nil
}
}
func testAccCheckAWSCognitoIdentityPoolDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).cognitoconn
for _, rs := range s.RootModule().Resources {
if rs.Type != "aws_cognito_identity_pool" {
continue
}
_, err := conn.DescribeIdentityPool(&cognitoidentity.DescribeIdentityPoolInput{
IdentityPoolId: aws.String(rs.Primary.ID),
})
if err != nil {
if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "ResourceNotFoundException" {
return nil
}
return err
}
}
return nil
}
func testAccAWSCognitoIdentityPoolConfig_basic(name string) string {
return fmt.Sprintf(`
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
developer_provider_name = "my.developer"
}
`, name)
}
func testAccAWSCognitoIdentityPoolConfig_supportedLoginProviders(name string) string {
return fmt.Sprintf(`
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
supported_login_providers {
"graph.facebook.com" = "7346241598935555"
}
}
`, name)
}
func testAccAWSCognitoIdentityPoolConfig_supportedLoginProvidersModified(name string) string {
return fmt.Sprintf(`
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
supported_login_providers {
"graph.facebook.com" = "7346241598935552"
"accounts.google.com" = "123456789012.apps.googleusercontent.com"
}
}
`, name)
}
func testAccAWSCognitoIdentityPoolConfig_openidConnectProviderArns(name string) string {
return fmt.Sprintf(`
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
openid_connect_provider_arns = ["arn:aws:iam::123456789012:oidc-provider/server.example.com"]
}
`, name)
}
func testAccAWSCognitoIdentityPoolConfig_openidConnectProviderArnsModified(name string) string {
return fmt.Sprintf(`
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
openid_connect_provider_arns = ["arn:aws:iam::123456789012:oidc-provider/foo.example.com", "arn:aws:iam::123456789012:oidc-provider/bar.example.com"]
}
`, name)
}
func testAccAWSCognitoIdentityPoolConfig_samlProviderArns(name string) string {
return fmt.Sprintf(`
resource "aws_iam_saml_provider" "default" {
name = "myprovider-%s"
saml_metadata_document = "${file("./test-fixtures/saml-metadata.xml")}"
}
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
saml_provider_arns = ["${aws_iam_saml_provider.default.arn}"]
}
`, name, name)
}
func testAccAWSCognitoIdentityPoolConfig_samlProviderArnsModified(name string) string {
return fmt.Sprintf(`
resource "aws_iam_saml_provider" "default" {
name = "default-%s"
saml_metadata_document = "${file("./test-fixtures/saml-metadata.xml")}"
}
resource "aws_iam_saml_provider" "secondary" {
name = "secondary-%s"
saml_metadata_document = "${file("./test-fixtures/saml-metadata.xml")}"
}
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
saml_provider_arns = ["${aws_iam_saml_provider.secondary.arn}"]
}
`, name, name, name)
}
func testAccAWSCognitoIdentityPoolConfig_cognitoIdentityProviders(name string) string {
return fmt.Sprintf(`
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
cognito_identity_providers {
client_id = "7lhlkkfbfb4q5kpp90urffao"
provider_name = "cognito-idp.us-east-1.amazonaws.com/us-east-1_Ab129faBb"
server_side_token_check = false
}
cognito_identity_providers {
client_id = "7lhlkkfbfb4q5kpp90urffao"
provider_name = "cognito-idp.us-east-1.amazonaws.com/us-east-1_Zr231apJu"
server_side_token_check = false
}
}
`, name)
}
func testAccAWSCognitoIdentityPoolConfig_cognitoIdentityProvidersModified(name string) string {
return fmt.Sprintf(`
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "identity pool %s"
allow_unauthenticated_identities = false
cognito_identity_providers {
client_id = "6lhlkkfbfb4q5kpp90urffae"
provider_name = "cognito-idp.us-east-1.amazonaws.com/us-east-1_Zr231apJu"
server_side_token_check = false
}
}
`, name)
}


@ -81,7 +81,7 @@ func TestAccAWSIAMInstanceProfile_missingRoleThrowsError(t *testing.T) {
Steps: []resource.TestStep{
{
Config: testAccAwsIamInstanceProfileConfigMissingRole(rName),
ExpectError: regexp.MustCompile("Either `roles` or `role` must be specified when creating an IAM Instance Profile"),
ExpectError: regexp.MustCompile(regexp.QuoteMeta("Either `role` or `roles` (deprecated) must be specified when creating an IAM Instance Profile")),
},
},
})


@ -146,6 +146,10 @@ func resourceAwsLambdaFunction() *schema.Resource {
Type: schema.TypeString,
Computed: true,
},
"invoke_arn": {
Type: schema.TypeString,
Computed: true,
},
"last_modified": {
Type: schema.TypeString,
Computed: true,
@ -175,6 +179,8 @@ func resourceAwsLambdaFunction() *schema.Resource {
Optional: true,
ValidateFunc: validateArn,
},
"tags": tagsSchema(),
},
}
}
@ -291,6 +297,10 @@ func resourceAwsLambdaFunctionCreate(d *schema.ResourceData, meta interface{}) e
params.KMSKeyArn = aws.String(v.(string))
}
if v, exists := d.GetOk("tags"); exists {
params.Tags = tagsFromMapGeneric(v.(map[string]interface{}))
}
// IAM profiles can take ~10 seconds to propagate in AWS:
// http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#launch-instance-with-role-console
// Error creating Lambda function: InvalidParameterValueException: The role defined for the task cannot be assumed by Lambda.
@ -353,6 +363,7 @@ func resourceAwsLambdaFunctionRead(d *schema.ResourceData, meta interface{}) err
d.Set("runtime", function.Runtime)
d.Set("timeout", function.Timeout)
d.Set("kms_key_arn", function.KMSKeyArn)
d.Set("tags", tagsToMapGeneric(getFunctionOutput.Tags))
config := flattenLambdaVpcConfigResponse(function.VpcConfig)
log.Printf("[INFO] Setting Lambda %s VPC config %#v from API", d.Id(), config)
@ -399,6 +410,8 @@ func resourceAwsLambdaFunctionRead(d *schema.ResourceData, meta interface{}) err
d.Set("version", lastVersion)
d.Set("qualified_arn", lastQualifiedArn)
d.Set("invoke_arn", buildLambdaInvokeArn(*function.FunctionArn, meta.(*AWSClient).region))
return nil
}
@ -448,6 +461,12 @@ func resourceAwsLambdaFunctionUpdate(d *schema.ResourceData, meta interface{}) e
d.Partial(true)
arn := d.Get("arn").(string)
if tagErr := setTagsLambda(conn, d, arn); tagErr != nil {
return tagErr
}
d.SetPartial("tags")
if d.HasChange("filename") || d.HasChange("source_code_hash") || d.HasChange("s3_bucket") || d.HasChange("s3_key") || d.HasChange("s3_object_version") {
codeReq := &lambda.UpdateFunctionCodeInput{
FunctionName: aws.String(d.Id()),


@ -582,6 +582,74 @@ func TestAccAWSLambdaFunction_runtimeValidation_java8(t *testing.T) {
})
}
func TestAccAWSLambdaFunction_tags(t *testing.T) {
var conf lambda.GetFunctionOutput
rSt := acctest.RandString(5)
rName := fmt.Sprintf("tf_test_%s", rSt)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckLambdaFunctionDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSLambdaConfigBasic(rName, rSt),
Check: resource.ComposeTestCheckFunc(
testAccCheckAwsLambdaFunctionExists("aws_lambda_function.lambda_function_test", rName, &conf),
testAccCheckAwsLambdaFunctionName(&conf, rName),
testAccCheckAwsLambdaFunctionArnHasSuffix(&conf, ":"+rName),
resource.TestCheckNoResourceAttr("aws_lambda_function.lambda_function_test", "tags"),
),
},
{
Config: testAccAWSLambdaConfigTags(rName, rSt),
Check: resource.ComposeTestCheckFunc(
testAccCheckAwsLambdaFunctionExists("aws_lambda_function.lambda_function_test", rName, &conf),
testAccCheckAwsLambdaFunctionName(&conf, rName),
testAccCheckAwsLambdaFunctionArnHasSuffix(&conf, ":"+rName),
resource.TestCheckResourceAttr("aws_lambda_function.lambda_function_test", "tags.%", "2"),
resource.TestCheckResourceAttr("aws_lambda_function.lambda_function_test", "tags.Key1", "Value One"),
resource.TestCheckResourceAttr("aws_lambda_function.lambda_function_test", "tags.Description", "Very interesting"),
),
},
{
Config: testAccAWSLambdaConfigTagsModified(rName, rSt),
Check: resource.ComposeTestCheckFunc(
testAccCheckAwsLambdaFunctionExists("aws_lambda_function.lambda_function_test", rName, &conf),
testAccCheckAwsLambdaFunctionName(&conf, rName),
testAccCheckAwsLambdaFunctionArnHasSuffix(&conf, ":"+rName),
resource.TestCheckResourceAttr("aws_lambda_function.lambda_function_test", "tags.%", "3"),
resource.TestCheckResourceAttr("aws_lambda_function.lambda_function_test", "tags.Key1", "Value One Changed"),
resource.TestCheckResourceAttr("aws_lambda_function.lambda_function_test", "tags.Key2", "Value Two"),
resource.TestCheckResourceAttr("aws_lambda_function.lambda_function_test", "tags.Key3", "Value Three"),
),
},
},
})
}
func TestAccAWSLambdaFunction_runtimeValidation_python36(t *testing.T) {
var conf lambda.GetFunctionOutput
rSt := acctest.RandString(5)
rName := fmt.Sprintf("tf_test_%s", rSt)
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckLambdaFunctionDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSLambdaConfigPython36Runtime(rName, rSt),
Check: resource.ComposeTestCheckFunc(
testAccCheckAwsLambdaFunctionExists("aws_lambda_function.lambda_function_test", rName, &conf),
resource.TestCheckResourceAttr("aws_lambda_function.lambda_function_test", "runtime", lambda.RuntimePython36),
),
},
},
})
}
func testAccCheckLambdaFunctionDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).lambdaconn
@ -1106,6 +1174,51 @@ resource "aws_lambda_function" "lambda_function_test" {
`, rName)
}
func testAccAWSLambdaConfigTags(rName, rSt string) string {
return fmt.Sprintf(baseAccAWSLambdaConfig(rSt)+`
resource "aws_lambda_function" "lambda_function_test" {
filename = "test-fixtures/lambdatest.zip"
function_name = "%s"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "exports.example"
runtime = "nodejs4.3"
tags {
Key1 = "Value One"
Description = "Very interesting"
}
}
`, rName)
}
func testAccAWSLambdaConfigTagsModified(rName, rSt string) string {
return fmt.Sprintf(baseAccAWSLambdaConfig(rSt)+`
resource "aws_lambda_function" "lambda_function_test" {
filename = "test-fixtures/lambdatest.zip"
function_name = "%s"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "exports.example"
runtime = "nodejs4.3"
tags {
Key1 = "Value One Changed"
Key2 = "Value Two"
Key3 = "Value Three"
}
}
`, rName)
}
func testAccAWSLambdaConfigPython36Runtime(rName, rSt string) string {
return fmt.Sprintf(baseAccAWSLambdaConfig(rSt)+`
resource "aws_lambda_function" "lambda_function_test" {
filename = "test-fixtures/lambdatest.zip"
function_name = "%s"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "exports.example"
runtime = "python3.6"
}
`, rName)
}
const testAccAWSLambdaFunctionConfig_local_tpl = `
resource "aws_iam_role" "iam_for_lambda" {
name = "iam_for_lambda_%d"


@ -230,7 +230,12 @@ func resourceAwsLambdaPermissionRead(d *schema.ResourceData, meta interface{}) e
}
d.Set("action", statement.Action)
d.Set("principal", statement.Principal["Service"])
// Check if the pricipal is a cross-account IAM role
if _, ok := statement.Principal["AWS"]; ok {
d.Set("principal", statement.Principal["AWS"])
} else {
d.Set("principal", statement.Principal["Service"])
}
if stringEquals, ok := statement.Condition["StringEquals"]; ok {
d.Set("source_account", stringEquals["AWS:SourceAccount"])


@ -332,6 +332,30 @@ func TestAccAWSLambdaPermission_withSNS(t *testing.T) {
})
}
func TestAccAWSLambdaPermission_withIAMRole(t *testing.T) {
var statement LambdaPolicyStatement
endsWithFuncName := regexp.MustCompile(":function:lambda_function_name_perm_iamrole$")
endsWithRoleName := regexp.MustCompile("/iam_for_lambda_perm_iamrole$")
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSLambdaPermissionDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSLambdaPermissionConfig_withIAMRole,
Check: resource.ComposeTestCheckFunc(
testAccCheckLambdaPermissionExists("aws_lambda_permission.iam_role", &statement),
resource.TestCheckResourceAttr("aws_lambda_permission.iam_role", "action", "lambda:InvokeFunction"),
resource.TestMatchResourceAttr("aws_lambda_permission.iam_role", "principal", endsWithRoleName),
resource.TestCheckResourceAttr("aws_lambda_permission.iam_role", "statement_id", "AllowExecutionFromIAMRole"),
resource.TestMatchResourceAttr("aws_lambda_permission.iam_role", "function_name", endsWithFuncName),
),
},
},
})
}
func testAccCheckLambdaPermissionExists(n string, statement *LambdaPolicyStatement) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
@ -724,6 +748,42 @@ EOF
}
`
var testAccAWSLambdaPermissionConfig_withIAMRole = `
resource "aws_lambda_permission" "iam_role" {
statement_id = "AllowExecutionFromIAMRole"
action = "lambda:InvokeFunction"
function_name = "${aws_lambda_function.my-func.arn}"
principal = "${aws_iam_role.police.arn}"
}
resource "aws_lambda_function" "my-func" {
filename = "test-fixtures/lambdatest.zip"
function_name = "lambda_function_name_perm_iamrole"
role = "${aws_iam_role.police.arn}"
handler = "exports.handler"
runtime = "nodejs4.3"
}
resource "aws_iam_role" "police" {
name = "iam_for_lambda_perm_iamrole"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
`
var testLambdaPolicy = []byte(`{
"Version": "2012-10-17",
"Statement": [


@ -4,6 +4,7 @@ import (
"bytes"
"fmt"
"log"
"math"
"strconv"
"time"
@ -33,6 +34,12 @@ func resourceAwsNetworkInterface() *schema.Resource {
ForceNew: true,
},
"private_ip": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"private_ips": &schema.Schema{
Type: schema.TypeSet,
Optional: true,
@ -41,6 +48,12 @@ func resourceAwsNetworkInterface() *schema.Resource {
Set: schema.HashString,
},
"private_ips_count": &schema.Schema{
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"security_groups": &schema.Schema{
Type: schema.TypeSet,
Optional: true,
@ -110,6 +123,10 @@ func resourceAwsNetworkInterfaceCreate(d *schema.ResourceData, meta interface{})
request.Description = aws.String(v.(string))
}
if v, ok := d.GetOk("private_ips_count"); ok {
request.SecondaryPrivateIpAddressCount = aws.Int64(int64(v.(int)))
}
log.Printf("[DEBUG] Creating network interface")
resp, err := conn.CreateNetworkInterface(request)
if err != nil {
@ -144,6 +161,7 @@ func resourceAwsNetworkInterfaceRead(d *schema.ResourceData, meta interface{}) e
eni := describeResp.NetworkInterfaces[0]
d.Set("subnet_id", eni.SubnetId)
d.Set("private_ip", eni.PrivateIpAddress)
d.Set("private_ips", flattenNetworkInterfacesPrivateIPAddresses(eni.PrivateIpAddresses))
d.Set("security_groups", flattenGroupIdentifiers(eni.Groups))
d.Set("source_dest_check", eni.SourceDestCheck)
@ -300,6 +318,49 @@ func resourceAwsNetworkInterfaceUpdate(d *schema.ResourceData, meta interface{})
d.SetPartial("source_dest_check")
if d.HasChange("private_ips_count") {
o, n := d.GetChange("private_ips_count")
private_ips := d.Get("private_ips").(*schema.Set).List()
private_ips_filtered := private_ips[:0]
primary_ip := d.Get("private_ip")
for _, ip := range private_ips {
if ip != primary_ip {
private_ips_filtered = append(private_ips_filtered, ip)
}
}
if o != nil && o != 0 && n != nil && n != len(private_ips_filtered) {
diff := n.(int) - o.(int)
// Surplus of IPs, add the diff
if diff > 0 {
input := &ec2.AssignPrivateIpAddressesInput{
NetworkInterfaceId: aws.String(d.Id()),
SecondaryPrivateIpAddressCount: aws.Int64(int64(diff)),
}
_, err := conn.AssignPrivateIpAddresses(input)
if err != nil {
return fmt.Errorf("Failure to assign Private IPs: %s", err)
}
}
if diff < 0 {
input := &ec2.UnassignPrivateIpAddressesInput{
NetworkInterfaceId: aws.String(d.Id()),
PrivateIpAddresses: expandStringList(private_ips_filtered[0:int(math.Abs(float64(diff)))]),
}
_, err := conn.UnassignPrivateIpAddresses(input)
if err != nil {
return fmt.Errorf("Failure to unassign Private IPs: %s", err)
}
}
d.SetPartial("private_ips_count")
}
}
if d.HasChange("security_groups") {
request := &ec2.ModifyNetworkInterfaceAttributeInput{
NetworkInterfaceId: aws.String(d.Id()),


@ -0,0 +1,166 @@
package aws
import (
"fmt"
"log"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceAwsNetworkInterfaceAttachment() *schema.Resource {
return &schema.Resource{
Create: resourceAwsNetworkInterfaceAttachmentCreate,
Read: resourceAwsNetworkInterfaceAttachmentRead,
Delete: resourceAwsNetworkInterfaceAttachmentDelete,
Schema: map[string]*schema.Schema{
"device_index": {
Type: schema.TypeInt,
Required: true,
ForceNew: true,
},
"instance_id": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"network_interface_id": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"attachment_id": {
Type: schema.TypeString,
Computed: true,
},
"status": {
Type: schema.TypeString,
Computed: true,
},
},
}
}
func resourceAwsNetworkInterfaceAttachmentCreate(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).ec2conn
device_index := d.Get("device_index").(int)
instance_id := d.Get("instance_id").(string)
network_interface_id := d.Get("network_interface_id").(string)
opts := &ec2.AttachNetworkInterfaceInput{
DeviceIndex: aws.Int64(int64(device_index)),
InstanceId: aws.String(instance_id),
NetworkInterfaceId: aws.String(network_interface_id),
}
log.Printf("[DEBUG] Attaching network interface (%s) to instance (%s)", network_interface_id, instance_id)
resp, err := conn.AttachNetworkInterface(opts)
if err != nil {
if awsErr, ok := err.(awserr.Error); ok {
return fmt.Errorf("Error attaching network interface (%s) to instance (%s), message: \"%s\", code: \"%s\"",
network_interface_id, instance_id, awsErr.Message(), awsErr.Code())
}
return err
}
stateConf := &resource.StateChangeConf{
Pending: []string{"false"},
Target: []string{"true"},
Refresh: networkInterfaceAttachmentRefreshFunc(conn, network_interface_id),
Timeout: 5 * time.Minute,
Delay: 10 * time.Second,
MinTimeout: 3 * time.Second,
}
_, err = stateConf.WaitForState()
if err != nil {
return fmt.Errorf(
"Error waiting for ENI (%s) to attach to instance (%s): %s", network_interface_id, instance_id, err)
}
d.SetId(*resp.AttachmentId)
return resourceAwsNetworkInterfaceAttachmentRead(d, meta)
}
func resourceAwsNetworkInterfaceAttachmentRead(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).ec2conn
interfaceId := d.Get("network_interface_id").(string)
req := &ec2.DescribeNetworkInterfacesInput{
NetworkInterfaceIds: []*string{aws.String(interfaceId)},
}
resp, err := conn.DescribeNetworkInterfaces(req)
if err != nil {
if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidNetworkInterfaceID.NotFound" {
// The ENI is gone now, so just remove the attachment from the state
d.SetId("")
return nil
}
return fmt.Errorf("Error retrieving ENI: %s", err)
}
if len(resp.NetworkInterfaces) != 1 {
return fmt.Errorf("Unable to find ENI (%s): %#v", interfaceId, resp.NetworkInterfaces)
}
eni := resp.NetworkInterfaces[0]
if eni.Attachment == nil {
// Interface is no longer attached, remove from state
d.SetId("")
return nil
}
d.Set("attachment_id", eni.Attachment.AttachmentId)
d.Set("device_index", eni.Attachment.DeviceIndex)
d.Set("instance_id", eni.Attachment.InstanceId)
d.Set("network_interface_id", eni.NetworkInterfaceId)
d.Set("status", eni.Attachment.Status)
return nil
}
func resourceAwsNetworkInterfaceAttachmentDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).ec2conn
interfaceId := d.Get("network_interface_id").(string)
detach_request := &ec2.DetachNetworkInterfaceInput{
AttachmentId: aws.String(d.Id()),
Force: aws.Bool(true),
}
_, detach_err := conn.DetachNetworkInterface(detach_request)
if detach_err != nil {
if awsErr, ok := detach_err.(awserr.Error); !ok || awsErr.Code() != "InvalidAttachmentID.NotFound" {
return fmt.Errorf("Error detaching ENI: %s", detach_err)
}
}
log.Printf("[DEBUG] Waiting for ENI (%s) to become detached", interfaceId)
stateConf := &resource.StateChangeConf{
Pending: []string{"true"},
Target: []string{"false"},
Refresh: networkInterfaceAttachmentRefreshFunc(conn, interfaceId),
Timeout: 10 * time.Minute,
}
if _, err := stateConf.WaitForState(); err != nil {
return fmt.Errorf(
"Error waiting for ENI (%s) to become detached: %s", interfaceId, err)
}
return nil
}

View File

@ -0,0 +1,92 @@
package aws
import (
"fmt"
"testing"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccAWSNetworkInterfaceAttachment_basic(t *testing.T) {
var conf ec2.NetworkInterface
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
IDRefreshName: "aws_network_interface.bar",
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSENIDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSNetworkInterfaceAttachmentConfig_basic(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSENIExists("aws_network_interface.bar", &conf),
resource.TestCheckResourceAttr(
"aws_network_interface_attachment.test", "device_index", "1"),
resource.TestCheckResourceAttrSet(
"aws_network_interface_attachment.test", "instance_id"),
resource.TestCheckResourceAttrSet(
"aws_network_interface_attachment.test", "network_interface_id"),
resource.TestCheckResourceAttrSet(
"aws_network_interface_attachment.test", "attachment_id"),
resource.TestCheckResourceAttrSet(
"aws_network_interface_attachment.test", "status"),
),
},
},
})
}
func testAccAWSNetworkInterfaceAttachmentConfig_basic(rInt int) string {
return fmt.Sprintf(`
resource "aws_vpc" "foo" {
cidr_block = "172.16.0.0/16"
}
resource "aws_subnet" "foo" {
vpc_id = "${aws_vpc.foo.id}"
cidr_block = "172.16.10.0/24"
availability_zone = "us-west-2a"
}
resource "aws_security_group" "foo" {
vpc_id = "${aws_vpc.foo.id}"
description = "foo"
name = "foo-%d"
egress {
from_port = 0
to_port = 0
protocol = "tcp"
cidr_blocks = ["10.0.0.0/16"]
}
}
resource "aws_network_interface" "bar" {
subnet_id = "${aws_subnet.foo.id}"
private_ips = ["172.16.10.100"]
security_groups = ["${aws_security_group.foo.id}"]
description = "Managed by Terraform"
tags {
Name = "bar_interface"
}
}
resource "aws_instance" "foo" {
ami = "ami-c5eabbf5"
instance_type = "t2.micro"
subnet_id = "${aws_subnet.foo.id}"
tags {
Name = "foo-%d"
}
}
resource "aws_network_interface_attachment" "test" {
device_index = 1
instance_id = "${aws_instance.foo.id}"
network_interface_id = "${aws_network_interface.bar.id}"
}
`, rInt, rInt)
}

View File

@ -111,6 +111,7 @@ func resourceAwsOpsworksInstance() *schema.Resource {
Type: schema.TypeString,
Optional: true,
Computed: true,
ForceNew: true,
},
"infrastructure_class": {

View File

@ -108,6 +108,44 @@ func TestAccAWSOpsworksInstance(t *testing.T) {
})
}
func TestAccAWSOpsworksInstance_UpdateHostNameForceNew(t *testing.T) {
stackName := fmt.Sprintf("tf-%d", acctest.RandInt())
var before, after opsworks.Instance
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAwsOpsworksInstanceDestroy,
Steps: []resource.TestStep{
{
Config: testAccAwsOpsworksInstanceConfigCreate(stackName),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSOpsworksInstanceExists("aws_opsworks_instance.tf-acc", &before),
resource.TestCheckResourceAttr("aws_opsworks_instance.tf-acc", "hostname", "tf-acc1"),
),
},
{
Config: testAccAwsOpsworksInstanceConfigUpdateHostName(stackName),
Check: resource.ComposeTestCheckFunc(
testAccCheckAWSOpsworksInstanceExists("aws_opsworks_instance.tf-acc", &after),
resource.TestCheckResourceAttr("aws_opsworks_instance.tf-acc", "hostname", "tf-acc2"),
testAccCheckAwsOpsworksInstanceRecreated(t, &before, &after),
),
},
},
})
}
func testAccCheckAwsOpsworksInstanceRecreated(t *testing.T,
before, after *opsworks.Instance) resource.TestCheckFunc {
return func(s *terraform.State) error {
if *before.InstanceId == *after.InstanceId {
t.Fatalf("Expected change of OpsWorks Instance IDs, but both were %s", *before.InstanceId)
}
return nil
}
}
func testAccCheckAWSOpsworksInstanceExists(
n string, opsinst *opsworks.Instance) resource.TestCheckFunc {
return func(s *terraform.State) error {
@ -197,6 +235,59 @@ func testAccCheckAwsOpsworksInstanceDestroy(s *terraform.State) error {
return fmt.Errorf("Fall through error on OpsWorks instance test")
}
func testAccAwsOpsworksInstanceConfigUpdateHostName(name string) string {
return fmt.Sprintf(`
resource "aws_security_group" "tf-ops-acc-web" {
name = "%s-web"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "tf-ops-acc-php" {
name = "%s-php"
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_opsworks_static_web_layer" "tf-acc" {
stack_id = "${aws_opsworks_stack.tf-acc.id}"
custom_security_group_ids = [
"${aws_security_group.tf-ops-acc-web.id}",
]
}
resource "aws_opsworks_php_app_layer" "tf-acc" {
stack_id = "${aws_opsworks_stack.tf-acc.id}"
custom_security_group_ids = [
"${aws_security_group.tf-ops-acc-php.id}",
]
}
resource "aws_opsworks_instance" "tf-acc" {
stack_id = "${aws_opsworks_stack.tf-acc.id}"
layer_ids = [
"${aws_opsworks_static_web_layer.tf-acc.id}",
]
instance_type = "t2.micro"
state = "stopped"
hostname = "tf-acc2"
}
%s
`, name, name, testAccAwsOpsworksStackConfigVpcCreate(name))
}
func testAccAwsOpsworksInstanceConfigCreate(name string) string {
return fmt.Sprintf(`
resource "aws_security_group" "tf-ops-acc-web" {

View File

@ -300,7 +300,7 @@ func deleteAllRecordsInHostedZoneId(hostedZoneId, hostedZoneName string, conn *r
changes := make([]*route53.Change, 0)
// 100 items per page returned by default
for _, set := range sets {
if *set.Name == hostedZoneName+"." && (*set.Type == "NS" || *set.Type == "SOA") {
if strings.TrimSuffix(*set.Name, ".") == strings.TrimSuffix(hostedZoneName, ".") && (*set.Type == "NS" || *set.Type == "SOA") {
// Zone NS & SOA records cannot be deleted
continue
}
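The fix above makes the NS/SOA guard tolerant of a trailing dot on either side, since Route 53 returns fully-qualified names ("example.com.") while user config often omits the final dot. A tiny sketch of the comparison (hypothetical helper name):

```go
package main

import (
	"fmt"
	"strings"
)

// sameZoneName compares two DNS zone names while ignoring a trailing dot,
// mirroring the TrimSuffix comparison in the Route 53 fix above.
func sameZoneName(a, b string) bool {
	return strings.TrimSuffix(a, ".") == strings.TrimSuffix(b, ".")
}

func main() {
	fmt.Println(sameZoneName("hashicorptest.io.", "hashicorptest.io"))
}
```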

View File

@ -89,7 +89,7 @@ func TestAccAWSRoute53Zone_basic(t *testing.T) {
}
func TestAccAWSRoute53Zone_forceDestroy(t *testing.T) {
var zone route53.GetHostedZoneOutput
var zone, zoneWithDot route53.GetHostedZoneOutput
// record the initialized providers so that we can use them to
// check for the instances in each region
@ -115,6 +115,11 @@ func TestAccAWSRoute53Zone_forceDestroy(t *testing.T) {
// Add >100 records to verify pagination works ok
testAccCreateRandomRoute53RecordsInZoneIdWithProviders(&providers, &zone, 100),
testAccCreateRandomRoute53RecordsInZoneIdWithProviders(&providers, &zone, 5),
testAccCheckRoute53ZoneExistsWithProviders("aws_route53_zone.with_trailing_dot", &zoneWithDot, &providers),
// Add >100 records to verify pagination works ok
testAccCreateRandomRoute53RecordsInZoneIdWithProviders(&providers, &zoneWithDot, 100),
testAccCreateRandomRoute53RecordsInZoneIdWithProviders(&providers, &zoneWithDot, 5),
),
},
},
@ -417,6 +422,11 @@ resource "aws_route53_zone" "destroyable" {
name = "terraform.io"
force_destroy = true
}
resource "aws_route53_zone" "with_trailing_dot" {
name = "hashicorptest.io."
force_destroy = true
}
`
const testAccRoute53ZoneConfigUpdateComment = `

View File

@ -1995,6 +1995,91 @@ resource "aws_security_group_rule" "allow_test_group_3" {
}
`
const testAccAWSSecurityGroupConfig_importIPRangeAndSecurityGroupWithSameRules = `
resource "aws_vpc" "foo" {
cidr_block = "10.1.0.0/16"
tags {
Name = "tf_sg_import_test"
}
}
resource "aws_security_group" "test_group_1" {
name = "test group 1"
vpc_id = "${aws_vpc.foo.id}"
}
resource "aws_security_group" "test_group_2" {
name = "test group 2"
vpc_id = "${aws_vpc.foo.id}"
}
resource "aws_security_group_rule" "allow_security_group" {
type = "ingress"
from_port = 0
to_port = 0
protocol = "tcp"
source_security_group_id = "${aws_security_group.test_group_2.id}"
security_group_id = "${aws_security_group.test_group_1.id}"
}
resource "aws_security_group_rule" "allow_cidr_block" {
type = "ingress"
from_port = 0
to_port = 0
protocol = "tcp"
cidr_blocks = ["10.0.0.0/32"]
security_group_id = "${aws_security_group.test_group_1.id}"
}
resource "aws_security_group_rule" "allow_ipv6_cidr_block" {
type = "ingress"
from_port = 0
to_port = 0
protocol = "tcp"
ipv6_cidr_blocks = ["::/0"]
security_group_id = "${aws_security_group.test_group_1.id}"
}
`
const testAccAWSSecurityGroupConfig_importIPRangesWithSameRules = `
resource "aws_vpc" "foo" {
cidr_block = "10.1.0.0/16"
tags {
Name = "tf_sg_import_test"
}
}
resource "aws_security_group" "test_group_1" {
name = "test group 1"
vpc_id = "${aws_vpc.foo.id}"
}
resource "aws_security_group_rule" "allow_cidr_block" {
type = "ingress"
from_port = 0
to_port = 0
protocol = "tcp"
cidr_blocks = ["10.0.0.0/32"]
security_group_id = "${aws_security_group.test_group_1.id}"
}
resource "aws_security_group_rule" "allow_ipv6_cidr_block" {
type = "ingress"
from_port = 0
to_port = 0
protocol = "tcp"
ipv6_cidr_blocks = ["::/0"]
security_group_id = "${aws_security_group.test_group_1.id}"
}
`
const testAccAWSSecurityGroupConfigPrefixListEgress = `
resource "aws_vpc" "tf_sg_prefix_list_egress_test" {
cidr_block = "10.0.0.0/16"

View File

@ -22,6 +22,9 @@ func resourceAwsSubnet() *schema.Resource {
State: schema.ImportStatePassthrough,
},
SchemaVersion: 1,
MigrateState: resourceAwsSubnetMigrateState,
Schema: map[string]*schema.Schema{
"vpc_id": {
Type: schema.TypeString,
@ -38,7 +41,6 @@ func resourceAwsSubnet() *schema.Resource {
"ipv6_cidr_block": {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
},
"availability_zone": {
@ -141,9 +143,15 @@ func resourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error {
d.Set("cidr_block", subnet.CidrBlock)
d.Set("map_public_ip_on_launch", subnet.MapPublicIpOnLaunch)
d.Set("assign_ipv6_address_on_creation", subnet.AssignIpv6AddressOnCreation)
if subnet.Ipv6CidrBlockAssociationSet != nil {
d.Set("ipv6_cidr_block", subnet.Ipv6CidrBlockAssociationSet[0].Ipv6CidrBlock)
d.Set("ipv6_cidr_block_association_id", subnet.Ipv6CidrBlockAssociationSet[0].AssociationId)
for _, a := range subnet.Ipv6CidrBlockAssociationSet {
if *a.Ipv6CidrBlockState.State == "associated" { // only one IPv6 block can be associated at a time
d.Set("ipv6_cidr_block_association_id", a.AssociationId)
d.Set("ipv6_cidr_block", a.Ipv6CidrBlock)
break
} else {
d.Set("ipv6_cidr_block_association_id", "") // we blank these out to remove old entries
d.Set("ipv6_cidr_block", "")
}
}
d.Set("tags", tagsToMap(subnet.Tags))
@ -199,6 +207,73 @@ func resourceAwsSubnetUpdate(d *schema.ResourceData, meta interface{}) error {
}
}
// Be careful not to go through a change of association if this is a new resource
// A new resource here means the Update func is being called from the Create func
if d.HasChange("ipv6_cidr_block") && !d.IsNewResource() {
// We must disassociate the old IPv6 CIDR block before associating the new one
// This is risky: if the new association fails, the subnet is left with none
// We may need to roll back the state and reattach the old block in that case
_, new := d.GetChange("ipv6_cidr_block")
//Firstly we have to disassociate the old IPv6 CIDR Block
disassociateOps := &ec2.DisassociateSubnetCidrBlockInput{
AssociationId: aws.String(d.Get("ipv6_cidr_block_association_id").(string)),
}
_, err := conn.DisassociateSubnetCidrBlock(disassociateOps)
if err != nil {
return err
}
// Wait for the CIDR to become disassociated
log.Printf(
"[DEBUG] Waiting for IPv6 CIDR (%s) to become disassociated",
d.Id())
stateConf := &resource.StateChangeConf{
Pending: []string{"disassociating", "associated"},
Target: []string{"disassociated"},
Refresh: SubnetIpv6CidrStateRefreshFunc(conn, d.Id(), d.Get("ipv6_cidr_block_association_id").(string)),
Timeout: 1 * time.Minute,
}
if _, err := stateConf.WaitForState(); err != nil {
return fmt.Errorf(
"Error waiting for IPv6 CIDR (%s) to become disassociated: %s",
d.Id(), err)
}
//Now we need to try and associate the new CIDR block
associatesOpts := &ec2.AssociateSubnetCidrBlockInput{
SubnetId: aws.String(d.Id()),
Ipv6CidrBlock: aws.String(new.(string)),
}
resp, err := conn.AssociateSubnetCidrBlock(associatesOpts)
if err != nil {
// Open question: should we try to reassociate the old block on failure?
// If this call fails, the subnet may be left with no IPv6 CIDR block associated
return err
}
// Wait for the CIDR to become associated
log.Printf(
"[DEBUG] Waiting for IPv6 CIDR (%s) to become associated",
d.Id())
stateConf = &resource.StateChangeConf{
Pending: []string{"associating", "disassociated"},
Target: []string{"associated"},
Refresh: SubnetIpv6CidrStateRefreshFunc(conn, d.Id(), *resp.Ipv6CidrBlockAssociation.AssociationId),
Timeout: 1 * time.Minute,
}
if _, err := stateConf.WaitForState(); err != nil {
return fmt.Errorf(
"Error waiting for IPv6 CIDR (%s) to become associated: %s",
d.Id(), err)
}
d.SetPartial("ipv6_cidr_block")
}
d.Partial(false)
return resourceAwsSubnetRead(d, meta)
@ -271,3 +346,38 @@ func SubnetStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc
return subnet, *subnet.State, nil
}
}
func SubnetIpv6CidrStateRefreshFunc(conn *ec2.EC2, id string, associationId string) resource.StateRefreshFunc {
return func() (interface{}, string, error) {
opts := &ec2.DescribeSubnetsInput{
SubnetIds: []*string{aws.String(id)},
}
resp, err := conn.DescribeSubnets(opts)
if err != nil {
if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidSubnetID.NotFound" {
resp = nil
} else {
log.Printf("[ERROR] Error on SubnetIpv6CidrStateRefreshFunc: %s", err)
return nil, "", err
}
}
if resp == nil {
// Sometimes AWS just has consistency issues and doesn't see
// our subnet yet. Return an empty state.
return nil, "", nil
}
if resp.Subnets[0].Ipv6CidrBlockAssociationSet == nil {
return nil, "", nil
}
for _, association := range resp.Subnets[0].Ipv6CidrBlockAssociationSet {
if *association.AssociationId == associationId {
return association, *association.Ipv6CidrBlockState.State, nil
}
}
return nil, "", nil
}
}

View File

@ -0,0 +1,33 @@
package aws
import (
"fmt"
"log"
"github.com/hashicorp/terraform/terraform"
)
func resourceAwsSubnetMigrateState(
v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) {
switch v {
case 0:
log.Println("[INFO] Found AWS Subnet State v0; migrating to v1")
return migrateSubnetStateV0toV1(is)
default:
return is, fmt.Errorf("Unexpected schema version: %d", v)
}
}
func migrateSubnetStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) {
if is.Empty() || is.Attributes == nil {
log.Println("[DEBUG] Empty Subnet State; nothing to migrate.")
return is, nil
}
log.Printf("[DEBUG] Attributes before migration: %#v", is.Attributes)
is.Attributes["assign_ipv6_address_on_creation"] = "false"
log.Printf("[DEBUG] Attributes after migration: %#v", is.Attributes)
return is, nil
}
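The migration above only needs to backfill one attribute with a default. A stand-alone sketch of the v0-to-v1 step over a plain attribute map (hypothetical signature; the real code operates on *terraform.InstanceState):

```go
package main

import "fmt"

// migrateV0toV1 backfills the attribute introduced in schema version 1,
// mirroring migrateSubnetStateV0toV1 above: v0 states predate
// "assign_ipv6_address_on_creation", so they get the conservative default.
func migrateV0toV1(attrs map[string]string) map[string]string {
	if attrs == nil {
		return attrs // nothing to migrate
	}
	attrs["assign_ipv6_address_on_creation"] = "false"
	return attrs
}

func main() {
	fmt.Println(migrateV0toV1(map[string]string{"vpc_id": "vpc-123"}))
}
```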

View File

@ -0,0 +1,41 @@
package aws
import (
"testing"
"github.com/hashicorp/terraform/terraform"
)
func TestAWSSubnetMigrateState(t *testing.T) {
cases := map[string]struct {
StateVersion int
ID string
Attributes map[string]string
Expected string
Meta interface{}
}{
"v0_1_without_value": {
StateVersion: 0,
ID: "some_id",
Attributes: map[string]string{},
Expected: "false",
},
}
for tn, tc := range cases {
is := &terraform.InstanceState{
ID: tc.ID,
Attributes: tc.Attributes,
}
is, err := resourceAwsSubnetMigrateState(
tc.StateVersion, is, tc.Meta)
if err != nil {
t.Fatalf("bad: %s, err: %#v", tn, err)
}
if is.Attributes["assign_ipv6_address_on_creation"] != tc.Expected {
t.Fatalf("bad Subnet migration: got %s, expected: %s", is.Attributes["assign_ipv6_address_on_creation"], tc.Expected)
}
}
}

View File

@ -45,27 +45,7 @@ func TestAccAWSSubnet_basic(t *testing.T) {
}
func TestAccAWSSubnet_ipv6(t *testing.T) {
var v ec2.Subnet
testCheck := func(*terraform.State) error {
if v.Ipv6CidrBlockAssociationSet == nil {
return fmt.Errorf("Expected IPV6 CIDR Block Association")
}
if *v.AssignIpv6AddressOnCreation != true {
return fmt.Errorf("bad AssignIpv6AddressOnCreation: %t", *v.AssignIpv6AddressOnCreation)
}
return nil
}
testCheckUpdated := func(*terraform.State) error {
if *v.AssignIpv6AddressOnCreation != false {
return fmt.Errorf("bad AssignIpv6AddressOnCreation: %t", *v.AssignIpv6AddressOnCreation)
}
return nil
}
var before, after ec2.Subnet
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@ -77,22 +57,65 @@ func TestAccAWSSubnet_ipv6(t *testing.T) {
Config: testAccSubnetConfigIpv6,
Check: resource.ComposeTestCheckFunc(
testAccCheckSubnetExists(
"aws_subnet.foo", &v),
testCheck,
"aws_subnet.foo", &before),
testAccCheckAwsSubnetIpv6BeforeUpdate(t, &before),
),
},
{
Config: testAccSubnetConfigIpv6Updated,
Config: testAccSubnetConfigIpv6UpdateAssignIpv6OnCreation,
Check: resource.ComposeTestCheckFunc(
testAccCheckSubnetExists(
"aws_subnet.foo", &v),
testCheckUpdated,
"aws_subnet.foo", &after),
testAccCheckAwsSubnetIpv6AfterUpdate(t, &after),
),
},
{
Config: testAccSubnetConfigIpv6UpdateIpv6Cidr,
Check: resource.ComposeTestCheckFunc(
testAccCheckSubnetExists(
"aws_subnet.foo", &after),
testAccCheckAwsSubnetNotRecreated(t, &before, &after),
),
},
},
})
}
func testAccCheckAwsSubnetIpv6BeforeUpdate(t *testing.T, subnet *ec2.Subnet) resource.TestCheckFunc {
return func(s *terraform.State) error {
if subnet.Ipv6CidrBlockAssociationSet == nil {
return fmt.Errorf("Expected IPV6 CIDR Block Association")
}
if *subnet.AssignIpv6AddressOnCreation != true {
return fmt.Errorf("bad AssignIpv6AddressOnCreation: %t", *subnet.AssignIpv6AddressOnCreation)
}
return nil
}
}
func testAccCheckAwsSubnetIpv6AfterUpdate(t *testing.T, subnet *ec2.Subnet) resource.TestCheckFunc {
return func(s *terraform.State) error {
if *subnet.AssignIpv6AddressOnCreation != false {
return fmt.Errorf("bad AssignIpv6AddressOnCreation: %t", *subnet.AssignIpv6AddressOnCreation)
}
return nil
}
}
func testAccCheckAwsSubnetNotRecreated(t *testing.T,
before, after *ec2.Subnet) resource.TestCheckFunc {
return func(s *terraform.State) error {
if *before.SubnetId != *after.SubnetId {
t.Fatalf("Expected SubnetIDs not to change, but got before: %s and after: %s", *before.SubnetId, *after.SubnetId)
}
return nil
}
}
func testAccCheckSubnetDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).ec2conn
@ -187,7 +210,25 @@ resource "aws_subnet" "foo" {
}
`
const testAccSubnetConfigIpv6Updated = `
const testAccSubnetConfigIpv6UpdateAssignIpv6OnCreation = `
resource "aws_vpc" "foo" {
cidr_block = "10.10.0.0/16"
assign_generated_ipv6_cidr_block = true
}
resource "aws_subnet" "foo" {
cidr_block = "10.10.1.0/24"
vpc_id = "${aws_vpc.foo.id}"
ipv6_cidr_block = "${cidrsubnet(aws_vpc.foo.ipv6_cidr_block, 8, 1)}"
map_public_ip_on_launch = true
assign_ipv6_address_on_creation = false
tags {
Name = "tf-subnet-acc-test"
}
}
`
const testAccSubnetConfigIpv6UpdateIpv6Cidr = `
resource "aws_vpc" "foo" {
cidr_block = "10.10.0.0/16"
assign_generated_ipv6_cidr_block = true

View File

@ -80,17 +80,17 @@ func resourceAwsWafIPSetRead(d *schema.ResourceData, meta interface{}) error {
return err
}
var IPSetDescriptors []map[string]interface{}
var descriptors []map[string]interface{}
for _, IPSetDescriptor := range resp.IPSet.IPSetDescriptors {
IPSet := map[string]interface{}{
"type": *IPSetDescriptor.Type,
"value": *IPSetDescriptor.Value,
for _, descriptor := range resp.IPSet.IPSetDescriptors {
d := map[string]interface{}{
"type": *descriptor.Type,
"value": *descriptor.Value,
}
IPSetDescriptors = append(IPSetDescriptors, IPSet)
descriptors = append(descriptors, d)
}
d.Set("ip_set_descriptors", IPSetDescriptors)
d.Set("ip_set_descriptors", descriptors)
d.Set("name", resp.IPSet.Name)
@ -98,22 +98,36 @@ func resourceAwsWafIPSetRead(d *schema.ResourceData, meta interface{}) error {
}
func resourceAwsWafIPSetUpdate(d *schema.ResourceData, meta interface{}) error {
err := updateIPSetResource(d, meta, waf.ChangeActionInsert)
if err != nil {
return fmt.Errorf("Error Updating WAF IPSet: %s", err)
conn := meta.(*AWSClient).wafconn
if d.HasChange("ip_set_descriptors") {
o, n := d.GetChange("ip_set_descriptors")
oldD, newD := o.(*schema.Set).List(), n.(*schema.Set).List()
err := updateWafIpSetDescriptors(d.Id(), oldD, newD, conn)
if err != nil {
return fmt.Errorf("Error Updating WAF IPSet: %s", err)
}
}
return resourceAwsWafIPSetRead(d, meta)
}
func resourceAwsWafIPSetDelete(d *schema.ResourceData, meta interface{}) error {
conn := meta.(*AWSClient).wafconn
err := updateIPSetResource(d, meta, waf.ChangeActionDelete)
if err != nil {
return fmt.Errorf("Error Removing IPSetDescriptors: %s", err)
oldDescriptors := d.Get("ip_set_descriptors").(*schema.Set).List()
if len(oldDescriptors) > 0 {
noDescriptors := []interface{}{}
err := updateWafIpSetDescriptors(d.Id(), oldDescriptors, noDescriptors, conn)
if err != nil {
return fmt.Errorf("Error updating IPSetDescriptors: %s", err)
}
}
wr := newWafRetryer(conn, "global")
_, err = wr.RetryWithToken(func(token *string) (interface{}, error) {
_, err := wr.RetryWithToken(func(token *string) (interface{}, error) {
req := &waf.DeleteIPSetInput{
ChangeToken: token,
IPSetId: aws.String(d.Id()),
@ -128,29 +142,15 @@ func resourceAwsWafIPSetDelete(d *schema.ResourceData, meta interface{}) error {
return nil
}
func updateIPSetResource(d *schema.ResourceData, meta interface{}, ChangeAction string) error {
conn := meta.(*AWSClient).wafconn
func updateWafIpSetDescriptors(id string, oldD, newD []interface{}, conn *waf.WAF) error {
wr := newWafRetryer(conn, "global")
_, err := wr.RetryWithToken(func(token *string) (interface{}, error) {
req := &waf.UpdateIPSetInput{
ChangeToken: token,
IPSetId: aws.String(d.Id()),
IPSetId: aws.String(id),
Updates: diffWafIpSetDescriptors(oldD, newD),
}
IPSetDescriptors := d.Get("ip_set_descriptors").(*schema.Set)
for _, IPSetDescriptor := range IPSetDescriptors.List() {
IPSet := IPSetDescriptor.(map[string]interface{})
IPSetUpdate := &waf.IPSetUpdate{
Action: aws.String(ChangeAction),
IPSetDescriptor: &waf.IPSetDescriptor{
Type: aws.String(IPSet["type"].(string)),
Value: aws.String(IPSet["value"].(string)),
},
}
req.Updates = append(req.Updates, IPSetUpdate)
}
log.Printf("[INFO] Updating IPSet descriptors: %s", req)
return conn.UpdateIPSet(req)
})
if err != nil {
@ -159,3 +159,37 @@ func updateIPSetResource(d *schema.ResourceData, meta interface{}, ChangeAction
return nil
}
func diffWafIpSetDescriptors(oldD, newD []interface{}) []*waf.IPSetUpdate {
updates := make([]*waf.IPSetUpdate, 0)
for _, od := range oldD {
descriptor := od.(map[string]interface{})
if idx, contains := sliceContainsMap(newD, descriptor); contains {
newD = append(newD[:idx], newD[idx+1:]...)
continue
}
updates = append(updates, &waf.IPSetUpdate{
Action: aws.String(waf.ChangeActionDelete),
IPSetDescriptor: &waf.IPSetDescriptor{
Type: aws.String(descriptor["type"].(string)),
Value: aws.String(descriptor["value"].(string)),
},
})
}
for _, nd := range newD {
descriptor := nd.(map[string]interface{})
updates = append(updates, &waf.IPSetUpdate{
Action: aws.String(waf.ChangeActionInsert),
IPSetDescriptor: &waf.IPSetDescriptor{
Type: aws.String(descriptor["type"].(string)),
Value: aws.String(descriptor["value"].(string)),
},
})
}
return updates
}
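diffWafIpSetDescriptors is a plain set diff: old entries missing from the new set become DELETE updates, new entries not present in the old set become INSERT updates, and entries in both produce nothing. A self-contained sketch of the same strategy over string descriptors (hypothetical names, not the provider's actual types):

```go
package main

import "fmt"

// ipSetUpdate is a miniature stand-in for waf.IPSetUpdate:
// an action ("DELETE"/"INSERT") plus a descriptor value.
type ipSetUpdate struct {
	Action string
	Value  string
}

// diffDescriptors reproduces the set-diff above: delete what disappeared,
// insert what is new, skip what is unchanged.
func diffDescriptors(oldD, newD []string) []ipSetUpdate {
	remaining := append([]string(nil), newD...)
	var updates []ipSetUpdate
	for _, od := range oldD {
		kept := false
		for i, nd := range remaining {
			if nd == od {
				// Present in both sets: drop from remaining, emit nothing.
				remaining = append(remaining[:i], remaining[i+1:]...)
				kept = true
				break
			}
		}
		if !kept {
			updates = append(updates, ipSetUpdate{"DELETE", od})
		}
	}
	for _, nd := range remaining {
		updates = append(updates, ipSetUpdate{"INSERT", nd})
	}
	return updates
}

func main() {
	fmt.Println(diffDescriptors([]string{"192.0.7.0/24"}, []string{"192.0.8.0/24"}))
}
```

Changing one descriptor therefore costs one DELETE plus one INSERT in a single UpdateIPSet call, which is exactly what the acceptance test below asserts via the changed set hashes.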

View File

@ -2,6 +2,7 @@ package aws
import (
"fmt"
"reflect"
"testing"
"github.com/hashicorp/terraform/helper/resource"
@ -96,6 +97,169 @@ func TestAccAWSWafIPSet_changeNameForceNew(t *testing.T) {
})
}
func TestAccAWSWafIPSet_changeDescriptors(t *testing.T) {
var before, after waf.IPSet
ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSWafIPSetDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSWafIPSetConfig(ipsetName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafIPSetExists("aws_waf_ipset.ipset", &before),
resource.TestCheckResourceAttr(
"aws_waf_ipset.ipset", "name", ipsetName),
resource.TestCheckResourceAttr(
"aws_waf_ipset.ipset", "ip_set_descriptors.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_ipset.ipset", "ip_set_descriptors.4037960608.type", "IPV4"),
resource.TestCheckResourceAttr(
"aws_waf_ipset.ipset", "ip_set_descriptors.4037960608.value", "192.0.7.0/24"),
),
},
{
Config: testAccAWSWafIPSetConfigChangeIPSetDescriptors(ipsetName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafIPSetExists("aws_waf_ipset.ipset", &after),
resource.TestCheckResourceAttr(
"aws_waf_ipset.ipset", "name", ipsetName),
resource.TestCheckResourceAttr(
"aws_waf_ipset.ipset", "ip_set_descriptors.#", "1"),
resource.TestCheckResourceAttr(
"aws_waf_ipset.ipset", "ip_set_descriptors.115741513.type", "IPV4"),
resource.TestCheckResourceAttr(
"aws_waf_ipset.ipset", "ip_set_descriptors.115741513.value", "192.0.8.0/24"),
),
},
},
})
}
func TestAccAWSWafIPSet_noDescriptors(t *testing.T) {
var ipset waf.IPSet
ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckAWSWafIPSetDestroy,
Steps: []resource.TestStep{
{
Config: testAccAWSWafIPSetConfig_noDescriptors(ipsetName),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckAWSWafIPSetExists("aws_waf_ipset.ipset", &ipset),
resource.TestCheckResourceAttr(
"aws_waf_ipset.ipset", "name", ipsetName),
resource.TestCheckResourceAttr(
"aws_waf_ipset.ipset", "ip_set_descriptors.#", "0"),
),
},
},
})
}
func TestDiffWafIpSetDescriptors(t *testing.T) {
testCases := []struct {
Old []interface{}
New []interface{}
ExpectedUpdates []*waf.IPSetUpdate
}{
{
// Change
Old: []interface{}{
map[string]interface{}{"type": "IPV4", "value": "192.0.7.0/24"},
},
New: []interface{}{
map[string]interface{}{"type": "IPV4", "value": "192.0.8.0/24"},
},
ExpectedUpdates: []*waf.IPSetUpdate{
&waf.IPSetUpdate{
Action: aws.String(waf.ChangeActionDelete),
IPSetDescriptor: &waf.IPSetDescriptor{
Type: aws.String("IPV4"),
Value: aws.String("192.0.7.0/24"),
},
},
&waf.IPSetUpdate{
Action: aws.String(waf.ChangeActionInsert),
IPSetDescriptor: &waf.IPSetDescriptor{
Type: aws.String("IPV4"),
Value: aws.String("192.0.8.0/24"),
},
},
},
},
{
// Fresh IPSet
Old: []interface{}{},
New: []interface{}{
map[string]interface{}{"type": "IPV4", "value": "10.0.1.0/24"},
map[string]interface{}{"type": "IPV4", "value": "10.0.2.0/24"},
map[string]interface{}{"type": "IPV4", "value": "10.0.3.0/24"},
},
ExpectedUpdates: []*waf.IPSetUpdate{
&waf.IPSetUpdate{
Action: aws.String(waf.ChangeActionInsert),
IPSetDescriptor: &waf.IPSetDescriptor{
Type: aws.String("IPV4"),
Value: aws.String("10.0.1.0/24"),
},
},
&waf.IPSetUpdate{
Action: aws.String(waf.ChangeActionInsert),
IPSetDescriptor: &waf.IPSetDescriptor{
Type: aws.String("IPV4"),
Value: aws.String("10.0.2.0/24"),
},
},
&waf.IPSetUpdate{
Action: aws.String(waf.ChangeActionInsert),
IPSetDescriptor: &waf.IPSetDescriptor{
Type: aws.String("IPV4"),
Value: aws.String("10.0.3.0/24"),
},
},
},
},
{
// Deletion
Old: []interface{}{
map[string]interface{}{"type": "IPV4", "value": "192.0.7.0/24"},
map[string]interface{}{"type": "IPV4", "value": "192.0.8.0/24"},
},
New: []interface{}{},
ExpectedUpdates: []*waf.IPSetUpdate{
&waf.IPSetUpdate{
Action: aws.String(waf.ChangeActionDelete),
IPSetDescriptor: &waf.IPSetDescriptor{
Type: aws.String("IPV4"),
Value: aws.String("192.0.7.0/24"),
},
},
&waf.IPSetUpdate{
Action: aws.String(waf.ChangeActionDelete),
IPSetDescriptor: &waf.IPSetDescriptor{
Type: aws.String("IPV4"),
Value: aws.String("192.0.8.0/24"),
},
},
},
},
}
for i, tc := range testCases {
t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
updates := diffWafIpSetDescriptors(tc.Old, tc.New)
if !reflect.DeepEqual(updates, tc.ExpectedUpdates) {
t.Fatalf("IPSet updates don't match.\nGiven: %s\nExpected: %s",
updates, tc.ExpectedUpdates)
}
})
}
}
func testAccCheckAWSWafIPSetDisappears(v *waf.IPSet) resource.TestCheckFunc {
return func(s *terraform.State) error {
conn := testAccProvider.Meta().(*AWSClient).wafconn
@ -228,3 +392,9 @@ func testAccAWSWafIPSetConfigChangeIPSetDescriptors(name string) string {
}
}`, name)
}
func testAccAWSWafIPSetConfig_noDescriptors(name string) string {
return fmt.Sprintf(`resource "aws_waf_ipset" "ipset" {
name = "%s"
}`, name)
}

View File

@ -24,9 +24,10 @@ func resourceAwsWafRule() *schema.Resource {
ForceNew: true,
},
"metric_name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
Type: schema.TypeString,
Required: true,
ForceNew: true,
ValidateFunc: validateWafMetricName,
},
"predicates": &schema.Schema{
Type: schema.TypeSet,

View File

@ -37,9 +37,10 @@ func resourceAwsWafWebAcl() *schema.Resource {
},
},
"metric_name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
Type: schema.TypeString,
Required: true,
ForceNew: true,
ValidateFunc: validateWafMetricName,
},
"rules": &schema.Schema{
Type: schema.TypeSet,

View File

@ -14,6 +14,7 @@ import (
"github.com/aws/aws-sdk-go/service/autoscaling"
"github.com/aws/aws-sdk-go/service/cloudformation"
"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
"github.com/aws/aws-sdk-go/service/cognitoidentity"
"github.com/aws/aws-sdk-go/service/configservice"
"github.com/aws/aws-sdk-go/service/directoryservice"
"github.com/aws/aws-sdk-go/service/ec2"
@ -1925,3 +1926,104 @@ func flattenApiGatewayUsagePlanQuota(s *apigateway.QuotaSettings) []map[string]i
return []map[string]interface{}{settings}
}
func buildApiGatewayInvokeURL(restApiId, region, stageName string) string {
return fmt.Sprintf("https://%s.execute-api.%s.amazonaws.com/%s",
restApiId, region, stageName)
}
func buildApiGatewayExecutionARN(restApiId, region, accountId string) (string, error) {
if accountId == "" {
return "", fmt.Errorf("Unable to build execution ARN for %s as account ID is missing",
restApiId)
}
return fmt.Sprintf("arn:aws:execute-api:%s:%s:%s",
region, accountId, restApiId), nil
}
func expandCognitoSupportedLoginProviders(config map[string]interface{}) map[string]*string {
m := map[string]*string{}
for k, v := range config {
s := v.(string)
m[k] = &s
}
return m
}
func flattenCognitoSupportedLoginProviders(config map[string]*string) map[string]string {
m := map[string]string{}
for k, v := range config {
m[k] = *v
}
return m
}
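The expand/flatten helpers above must copy each value into a fresh local variable before taking its address, so every map entry gets a distinct pointer; a minimal standalone sketch of the same round-trip (hypothetical names, plain maps instead of the AWS SDK types):

```go
package main

import "fmt"

// expandToPtrMap copies each string value into a new local variable
// before taking its address, so each entry points at distinct storage.
func expandToPtrMap(in map[string]interface{}) map[string]*string {
	out := map[string]*string{}
	for k, v := range in {
		s := v.(string) // copy first; taking the address of a shared local would alias entries
		out[k] = &s
	}
	return out
}

// flattenPtrMap dereferences each pointer back into a plain string map.
func flattenPtrMap(in map[string]*string) map[string]string {
	out := map[string]string{}
	for k, v := range in {
		out[k] = *v
	}
	return out
}

func main() {
	src := map[string]interface{}{"graph.facebook.com": "7346241598935552"}
	fmt.Println(flattenPtrMap(expandToPtrMap(src))["graph.facebook.com"])
}
```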
func expandCognitoIdentityProviders(s *schema.Set) []*cognitoidentity.Provider {
ips := make([]*cognitoidentity.Provider, 0)
for _, v := range s.List() {
s := v.(map[string]interface{})
ip := &cognitoidentity.Provider{}
if sv, ok := s["client_id"].(string); ok {
ip.ClientId = aws.String(sv)
}
if sv, ok := s["provider_name"].(string); ok {
ip.ProviderName = aws.String(sv)
}
if sv, ok := s["server_side_token_check"].(bool); ok {
ip.ServerSideTokenCheck = aws.Bool(sv)
}
ips = append(ips, ip)
}
return ips
}
func flattenCognitoIdentityProviders(ips []*cognitoidentity.Provider) []map[string]interface{} {
values := make([]map[string]interface{}, 0)
for _, v := range ips {
ip := make(map[string]interface{})
if v == nil {
return nil
}
if v.ClientId != nil {
ip["client_id"] = *v.ClientId
}
if v.ProviderName != nil {
ip["provider_name"] = *v.ProviderName
}
if v.ServerSideTokenCheck != nil {
ip["server_side_token_check"] = *v.ServerSideTokenCheck
}
values = append(values, ip)
}
return values
}
func buildLambdaInvokeArn(lambdaArn, region string) string {
apiVersion := "2015-03-31"
return fmt.Sprintf("arn:aws:apigateway:%s:lambda:path/%s/functions/%s/invocations",
region, apiVersion, lambdaArn)
}
func sliceContainsMap(l []interface{}, m map[string]interface{}) (int, bool) {
for i, t := range l {
if reflect.DeepEqual(m, t.(map[string]interface{})) {
return i, true
}
}
return -1, false
}
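sliceContainsMap above scans by deep equality rather than pointer identity, which is what lets it match maps rebuilt from state; a self-contained sketch of the same lookup (hypothetical name):

```go
package main

import (
	"fmt"
	"reflect"
)

// indexOfMap returns the position of m in l, compared by deep equality,
// and whether it was found at all.
func indexOfMap(l []map[string]interface{}, m map[string]interface{}) (int, bool) {
	for i, t := range l {
		if reflect.DeepEqual(m, t) {
			return i, true
		}
	}
	return -1, false
}

func main() {
	rules := []map[string]interface{}{
		{"type": "IPV4", "value": "10.0.2.0/24"},
		{"type": "IPV4", "value": "10.0.3.0/24"},
	}
	i, ok := indexOfMap(rules, map[string]interface{}{"type": "IPV4", "value": "10.0.3.0/24"})
	fmt.Println(i, ok) // → 1 true
}
```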

View File

@ -0,0 +1,69 @@
package aws
import (
"log"
"regexp"
"github.com/aws/aws-sdk-go/aws"
)
// diffTagsGeneric takes our tags locally and the ones remotely and returns
// the set of tags that must be created, and the set of tags that must
// be destroyed.
func diffTagsGeneric(oldTags, newTags map[string]interface{}) (map[string]*string, map[string]*string) {
// First, we're creating everything we have
create := make(map[string]*string)
for k, v := range newTags {
create[k] = aws.String(v.(string))
}
// Build the map of what to remove
remove := make(map[string]*string)
for k, v := range oldTags {
old, ok := create[k]
if !ok || *old != v.(string) {
// Delete it!
remove[k] = aws.String(v.(string))
}
}
return create, remove
}
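The diff above plans to create every tag in the new set and to remove any old tag that is missing or changed (note the removal check must dereference the stored pointer, since comparing two `*string` values compares addresses, not contents); a plain-string sketch of the same logic (hypothetical name, no SDK pointer types):

```go
package main

import "fmt"

// diffTags returns the tags to create (everything in the new set) and
// the tags to remove (absent from the new set, or present with a
// different value).
func diffTags(oldTags, newTags map[string]string) (map[string]string, map[string]string) {
	// Plan to create everything we have in the new set.
	create := map[string]string{}
	for k, v := range newTags {
		create[k] = v
	}
	// Remove old tags that are missing from, or changed in, the new set.
	remove := map[string]string{}
	for k, v := range oldTags {
		if nv, ok := create[k]; !ok || nv != v {
			remove[k] = v
		}
	}
	return create, remove
}

func main() {
	create, remove := diffTags(
		map[string]string{"foo": "bar"},
		map[string]string{"foo": "baz"},
	)
	fmt.Println(create["foo"], remove["foo"]) // → baz bar
}
```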
// tagsFromMapGeneric returns the tags for the given map of data.
func tagsFromMapGeneric(m map[string]interface{}) map[string]*string {
result := make(map[string]*string)
for k, v := range m {
if !tagIgnoredGeneric(k) {
result[k] = aws.String(v.(string))
}
}
return result
}
// tagsToMapGeneric turns the tags into a map.
func tagsToMapGeneric(ts map[string]*string) map[string]string {
result := make(map[string]string)
for k, v := range ts {
if !tagIgnoredGeneric(k) {
result[k] = aws.StringValue(v)
}
}
return result
}
// tagIgnoredGeneric compares a tag against a list of strings and checks
// if it should be ignored or not.
func tagIgnoredGeneric(k string) bool {
filter := []string{"^aws:"}
for _, v := range filter {
log.Printf("[DEBUG] Matching %v with %v\n", v, k)
if r, _ := regexp.MatchString(v, k); r {
log.Printf("[DEBUG] Found AWS specific tag %s, ignoring.\n", k)
return true
}
}
return false
}
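A tag in the reserved `aws:` namespace can also be detected with a plain prefix check, which avoids compiling a regexp on every call; a standalone sketch (hypothetical name, slightly stricter than the regexp above):

```go
package main

import (
	"fmt"
	"strings"
)

// isAWSInternalTag reports whether a tag key belongs to the reserved
// aws: namespace and should therefore be excluded from the diff.
func isAWSInternalTag(key string) bool {
	return strings.HasPrefix(key, "aws:")
}

func main() {
	fmt.Println(isAWSInternalTag("aws:cloudformation:logical-id")) // → true
	fmt.Println(isAWSInternalTag("Name"))                          // → false
}
```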

View File

@ -0,0 +1,73 @@
package aws
import (
"reflect"
"testing"
"github.com/aws/aws-sdk-go/aws"
)
// go test -v -run="TestDiffGenericTags"
func TestDiffGenericTags(t *testing.T) {
cases := []struct {
Old, New map[string]interface{}
Create, Remove map[string]string
}{
// Basic add/remove
{
Old: map[string]interface{}{
"foo": "bar",
},
New: map[string]interface{}{
"bar": "baz",
},
Create: map[string]string{
"bar": "baz",
},
Remove: map[string]string{
"foo": "bar",
},
},
// Modify
{
Old: map[string]interface{}{
"foo": "bar",
},
New: map[string]interface{}{
"foo": "baz",
},
Create: map[string]string{
"foo": "baz",
},
Remove: map[string]string{
"foo": "bar",
},
},
}
for i, tc := range cases {
c, r := diffTagsGeneric(tc.Old, tc.New)
cm := tagsToMapGeneric(c)
rm := tagsToMapGeneric(r)
if !reflect.DeepEqual(cm, tc.Create) {
t.Fatalf("%d: bad create: %#v", i, cm)
}
if !reflect.DeepEqual(rm, tc.Remove) {
t.Fatalf("%d: bad remove: %#v", i, rm)
}
}
}
// go test -v -run="TestIgnoringTagsGeneric"
func TestIgnoringTagsGeneric(t *testing.T) {
ignoredTags := map[string]*string{
"aws:cloudformation:logical-id": aws.String("foo"),
"aws:foo:bar": aws.String("baz"),
}
for k, v := range ignoredTags {
if !tagIgnoredGeneric(k) {
t.Fatalf("Tag %v with value %v not ignored, but should be!", k, *v)
}
}
}

View File

@ -0,0 +1,50 @@
package aws
import (
"log"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/lambda"
"github.com/hashicorp/terraform/helper/schema"
)
// setTagsLambda is a helper to set the tags for a Lambda resource. It
// expects the tags field to be named "tags".
func setTagsLambda(conn *lambda.Lambda, d *schema.ResourceData, arn string) error {
if d.HasChange("tags") {
oraw, nraw := d.GetChange("tags")
o := oraw.(map[string]interface{})
n := nraw.(map[string]interface{})
create, remove := diffTagsGeneric(o, n)
// Set tags
if len(remove) > 0 {
log.Printf("[DEBUG] Removing tags: %#v", remove)
keys := make([]*string, 0, len(remove))
for k := range remove {
keys = append(keys, aws.String(k))
}
_, err := conn.UntagResource(&lambda.UntagResourceInput{
Resource: aws.String(arn),
TagKeys: keys,
})
if err != nil {
return err
}
}
if len(create) > 0 {
log.Printf("[DEBUG] Creating tags: %#v", create)
_, err := conn.TagResource(&lambda.TagResourceInput{
Resource: aws.String(arn),
Tags: create,
})
if err != nil {
return err
}
}
}
return nil
}

View File

@ -1218,3 +1218,86 @@ func validateAwsKmsName(v interface{}, k string) (ws []string, es []error) {
}
return
}
func validateCognitoIdentityPoolName(v interface{}, k string) (ws []string, errors []error) {
val := v.(string)
if !regexp.MustCompile("^[\\w _]+$").MatchString(val) {
errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters and spaces", k))
}
return
}
func validateCognitoProviderDeveloperName(v interface{}, k string) (ws []string, errors []error) {
value := v.(string)
if len(value) > 100 {
errors = append(errors, fmt.Errorf("%q cannot be longer than 100 characters", k))
}
if !regexp.MustCompile("^[\\w._-]+$").MatchString(value) {
errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters, dots, underscores and hyphens", k))
}
return
}
func validateCognitoSupportedLoginProviders(v interface{}, k string) (ws []string, errors []error) {
value := v.(string)
if len(value) < 1 {
errors = append(errors, fmt.Errorf("%q cannot be less than 1 character", k))
}
if len(value) > 128 {
errors = append(errors, fmt.Errorf("%q cannot be longer than 128 characters", k))
}
if !regexp.MustCompile("^[\\w.;_/-]+$").MatchString(value) {
errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters, dots, semicolons, underscores, slashes and hyphens", k))
}
return
}
func validateCognitoIdentityProvidersClientId(v interface{}, k string) (ws []string, errors []error) {
value := v.(string)
if len(value) < 1 {
errors = append(errors, fmt.Errorf("%q cannot be less than 1 character", k))
}
if len(value) > 128 {
errors = append(errors, fmt.Errorf("%q cannot be longer than 128 characters", k))
}
if !regexp.MustCompile("^[\\w_]+$").MatchString(value) {
errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters and underscores", k))
}
return
}
func validateCognitoIdentityProvidersProviderName(v interface{}, k string) (ws []string, errors []error) {
value := v.(string)
if len(value) < 1 {
errors = append(errors, fmt.Errorf("%q cannot be less than 1 character", k))
}
if len(value) > 128 {
errors = append(errors, fmt.Errorf("%q cannot be longer than 128 characters", k))
}
if !regexp.MustCompile("^[\\w._:/-]+$").MatchString(value) {
errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters, dots, underscores, colons, slashes and hyphens", k))
}
return
}
func validateWafMetricName(v interface{}, k string) (ws []string, errors []error) {
value := v.(string)
if !regexp.MustCompile(`^[0-9A-Za-z]+$`).MatchString(value) {
errors = append(errors, fmt.Errorf(
"Only alphanumeric characters allowed in %q: %q",
k, value))
}
return
}
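Validators in helper/schema return warning and error slices rather than a single bool; a self-contained sketch of the metric-name check in that shape (hypothetical name):

```go
package main

import (
	"fmt"
	"regexp"
)

// validateMetricName mirrors the schema.ValidateFunc shape: it takes the
// raw value plus the attribute name and returns warnings and errors.
func validateMetricName(v interface{}, k string) (ws []string, errs []error) {
	value := v.(string)
	if !regexp.MustCompile(`^[0-9A-Za-z]+$`).MatchString(value) {
		errs = append(errs, fmt.Errorf("only alphanumeric characters allowed in %q: %q", k, value))
	}
	return
}

func main() {
	_, errs := validateMetricName("white space", "metric_name")
	fmt.Println(len(errs)) // → 1
}
```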

View File

@ -2011,5 +2011,201 @@ func TestValidateAwsKmsName(t *testing.T) {
t.Fatalf("AWS KMS Alias Name validation failed: %v", errors)
}
}
}
func TestValidateCognitoIdentityPoolName(t *testing.T) {
validValues := []string{
"123",
"1 2 3",
"foo",
"foo bar",
"foo_bar",
"1foo 2bar 3",
}
for _, s := range validValues {
_, errors := validateCognitoIdentityPoolName(s, "identity_pool_name")
if len(errors) > 0 {
t.Fatalf("%q should be a valid Cognito Identity Pool Name: %v", s, errors)
}
}
invalidValues := []string{
"1-2-3",
"foo!",
"foo-bar",
"foo-bar",
"foo1-bar2",
}
for _, s := range invalidValues {
_, errors := validateCognitoIdentityPoolName(s, "identity_pool_name")
if len(errors) == 0 {
t.Fatalf("%q should not be a valid Cognito Identity Pool Name: %v", s, errors)
}
}
}
func TestValidateCognitoProviderDeveloperName(t *testing.T) {
validValues := []string{
"1",
"foo",
"1.2",
"foo1-bar2-baz3",
"foo_bar",
}
for _, s := range validValues {
_, errors := validateCognitoProviderDeveloperName(s, "developer_provider_name")
if len(errors) > 0 {
t.Fatalf("%q should be a valid Cognito Provider Developer Name: %v", s, errors)
}
}
invalidValues := []string{
"foo!",
"foo:bar",
"foo/bar",
"foo;bar",
}
for _, s := range invalidValues {
_, errors := validateCognitoProviderDeveloperName(s, "developer_provider_name")
if len(errors) == 0 {
t.Fatalf("%q should not be a valid Cognito Provider Developer Name: %v", s, errors)
}
}
}
func TestValidateCognitoSupportedLoginProviders(t *testing.T) {
validValues := []string{
"foo",
"7346241598935552",
"123456789012.apps.googleusercontent.com",
"foo_bar",
"foo;bar",
"foo/bar",
"foo-bar",
"xvz1evFS4wEEPTGEFPHBog;kAcSOqF21Fu85e7zjz7ZN2U4ZRhfV3WpwPAoE3Z7kBw",
strings.Repeat("W", 128),
}
for _, s := range validValues {
_, errors := validateCognitoSupportedLoginProviders(s, "supported_login_providers")
if len(errors) > 0 {
t.Fatalf("%q should be a valid Cognito Supported Login Providers: %v", s, errors)
}
}
invalidValues := []string{
"",
strings.Repeat("W", 129), // > 128
"foo:bar_baz",
"foobar,foobaz",
"foobar=foobaz",
}
for _, s := range invalidValues {
_, errors := validateCognitoSupportedLoginProviders(s, "supported_login_providers")
if len(errors) == 0 {
t.Fatalf("%q should not be a valid Cognito Supported Login Providers: %v", s, errors)
}
}
}
func TestValidateCognitoIdentityProvidersClientId(t *testing.T) {
validValues := []string{
"7lhlkkfbfb4q5kpp90urffao",
"12345678",
"foo_123",
strings.Repeat("W", 128),
}
for _, s := range validValues {
_, errors := validateCognitoIdentityProvidersClientId(s, "client_id")
if len(errors) > 0 {
t.Fatalf("%q should be a valid Cognito Identity Provider Client ID: %v", s, errors)
}
}
invalidValues := []string{
"",
strings.Repeat("W", 129), // > 128
"foo-bar",
"foo:bar",
"foo;bar",
}
for _, s := range invalidValues {
_, errors := validateCognitoIdentityProvidersClientId(s, "client_id")
if len(errors) == 0 {
t.Fatalf("%q should not be a valid Cognito Identity Provider Client ID: %v", s, errors)
}
}
}
func TestValidateCognitoIdentityProvidersProviderName(t *testing.T) {
validValues := []string{
"foo",
"7346241598935552",
"foo_bar",
"foo:bar",
"foo/bar",
"foo-bar",
"cognito-idp.us-east-1.amazonaws.com/us-east-1_Zr231apJu",
strings.Repeat("W", 128),
}
for _, s := range validValues {
_, errors := validateCognitoIdentityProvidersProviderName(s, "provider_name")
if len(errors) > 0 {
t.Fatalf("%q should be a valid Cognito Identity Provider Name: %v", s, errors)
}
}
invalidValues := []string{
"",
strings.Repeat("W", 129), // > 128
"foo;bar_baz",
"foobar,foobaz",
"foobar=foobaz",
}
for _, s := range invalidValues {
_, errors := validateCognitoIdentityProvidersProviderName(s, "provider_name")
if len(errors) == 0 {
t.Fatalf("%q should not be a valid Cognito Identity Provider Name: %v", s, errors)
}
}
}
func TestValidateWafMetricName(t *testing.T) {
validNames := []string{
"testrule",
"testRule",
"testRule123",
}
for _, v := range validNames {
_, errors := validateWafMetricName(v, "name")
if len(errors) != 0 {
t.Fatalf("%q should be a valid WAF metric name: %q", v, errors)
}
}
invalidNames := []string{
"!",
"/",
" ",
":",
";",
"white space",
"/slash-at-the-beginning",
"slash-at-the-end/",
}
for _, v := range invalidNames {
_, errors := validateWafMetricName(v, "name")
if len(errors) == 0 {
t.Fatalf("%q should be an invalid WAF metric name", v)
}
}
}

View File

@ -281,14 +281,17 @@ func resourceArmRedisCacheRead(d *schema.ResourceData, meta interface{}) error {
name := id.Path["Redis"]
resp, err := client.Get(resGroup, name)
if err != nil {
return fmt.Errorf("Error making Read request on Azure Redis Cache %s: %s", name, err)
}
// covers the case where the resource has been deleted outside of Terraform but is still in state
if resp.StatusCode == http.StatusNotFound {
d.SetId("")
return nil
}
if err != nil {
return fmt.Errorf("Error making Read request on Azure Redis Cache %s: %s", name, err)
}
keysResp, err := client.ListKeys(resGroup, name)
if err != nil {
return fmt.Errorf("Error making ListKeys request on Azure Redis Cache %s: %s", name, err)

View File

@ -6,7 +6,6 @@ import (
"strings"
"github.com/circonus-labs/circonus-gometrics/api"
"github.com/circonus-labs/circonus-gometrics/api/config"
"github.com/hashicorp/errwrap"
"github.com/hashicorp/terraform/helper/hashcode"
"github.com/hashicorp/terraform/helper/schema"
@ -85,9 +84,8 @@ func resourceMetricCluster() *schema.Resource {
// Out parameters
metricClusterIDAttr: &schema.Schema{
Computed: true,
Type: schema.TypeString,
ValidateFunc: validateRegexp(metricClusterIDAttr, config.MetricClusterCIDRegex),
Computed: true,
Type: schema.TypeString,
},
}),
}

View File

@ -181,9 +181,6 @@ func dataSourceConsulAgentSelf() *schema.Resource {
agentSelfACLDisabledTTL: {
Computed: true,
Type: schema.TypeString,
ValidateFunc: makeValidationFunc(agentSelfACLDisabledTTL, validatorInputs{
validateDurationMin("0ns"),
}),
},
agentSelfACLDownPolicy: {
Computed: true,
@ -196,9 +193,6 @@ func dataSourceConsulAgentSelf() *schema.Resource {
agentSelfACLTTL: {
Computed: true,
Type: schema.TypeString,
ValidateFunc: makeValidationFunc(agentSelfACLTTL, validatorInputs{
validateDurationMin("0ns"),
}),
},
agentSelfAddresses: {
Computed: true,
@ -275,23 +269,14 @@ func dataSourceConsulAgentSelf() *schema.Resource {
agentSelfCheckDeregisterIntervalMin: {
Computed: true,
Type: schema.TypeString,
ValidateFunc: makeValidationFunc(agentSelfCheckDeregisterIntervalMin, validatorInputs{
validateDurationMin("0ns"),
}),
},
agentSelfCheckReapInterval: {
Computed: true,
Type: schema.TypeString,
ValidateFunc: makeValidationFunc(agentSelfCheckReapInterval, validatorInputs{
validateDurationMin("0ns"),
}),
},
agentSelfCheckUpdateInterval: {
Computed: true,
Type: schema.TypeString,
ValidateFunc: makeValidationFunc(agentSelfCheckUpdateInterval, validatorInputs{
validateDurationMin("0ns"),
}),
},
agentSelfClientAddr: {
Computed: true,
@ -317,16 +302,10 @@ func dataSourceConsulAgentSelf() *schema.Resource {
agentSelfDNSMaxStale: {
Computed: true,
Type: schema.TypeString,
ValidateFunc: makeValidationFunc(agentSelfDNSMaxStale, validatorInputs{
validateDurationMin("0ns"),
}),
},
agentSelfDNSNodeTTL: {
Computed: true,
Type: schema.TypeString,
ValidateFunc: makeValidationFunc(agentSelfDNSNodeTTL, validatorInputs{
validateDurationMin("0ns"),
}),
},
agentSelfDNSOnlyPassing: {
Computed: true,
@ -335,16 +314,10 @@ func dataSourceConsulAgentSelf() *schema.Resource {
agentSelfDNSRecursorTimeout: {
Computed: true,
Type: schema.TypeString,
ValidateFunc: makeValidationFunc(agentSelfDNSRecursorTimeout, validatorInputs{
validateDurationMin("0ns"),
}),
},
agentSelfDNSServiceTTL: {
Computed: true,
Type: schema.TypeString,
ValidateFunc: makeValidationFunc(agentSelfDNSServiceTTL, validatorInputs{
validateDurationMin("0ns"),
}),
},
agentSelfDNSUDPAnswerLimit: {
Computed: true,
@ -406,9 +379,6 @@ func dataSourceConsulAgentSelf() *schema.Resource {
agentSelfID: {
Computed: true,
Type: schema.TypeString,
ValidateFunc: makeValidationFunc(agentSelfID, validatorInputs{
validateRegexp(`(?i)^[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12}$`),
}),
},
agentSelfLeaveOnInt: {
Computed: true,
@ -434,9 +404,6 @@ func dataSourceConsulAgentSelf() *schema.Resource {
agentSelfPerformanceRaftMultiplier: {
Computed: true,
Type: schema.TypeString, // FIXME(sean@): should be schema.TypeInt
ValidateFunc: makeValidationFunc(agentSelfPerformanceRaftMultiplier, validatorInputs{
validateIntMin(0),
}),
},
},
},
@ -453,58 +420,30 @@ func dataSourceConsulAgentSelf() *schema.Resource {
agentSelfSchemaPortsDNS: {
Computed: true,
Type: schema.TypeInt,
ValidateFunc: makeValidationFunc(agentSelfSchemaPortsDNS, validatorInputs{
validateIntMin(1),
validateIntMax(65535),
}),
},
agentSelfSchemaPortsHTTP: {
Computed: true,
Type: schema.TypeInt,
ValidateFunc: makeValidationFunc(agentSelfSchemaPortsHTTP, validatorInputs{
validateIntMin(1),
validateIntMax(65535),
}),
},
agentSelfSchemaPortsHTTPS: {
Computed: true,
Type: schema.TypeInt,
ValidateFunc: makeValidationFunc(agentSelfSchemaPortsHTTPS, validatorInputs{
validateIntMin(1),
validateIntMax(65535),
}),
},
agentSelfSchemaPortsRPC: {
Computed: true,
Type: schema.TypeInt,
ValidateFunc: makeValidationFunc(agentSelfSchemaPortsRPC, validatorInputs{
validateIntMin(1),
validateIntMax(65535),
}),
},
agentSelfSchemaPortsSerfLAN: {
Computed: true,
Type: schema.TypeInt,
ValidateFunc: makeValidationFunc(agentSelfSchemaPortsSerfLAN, validatorInputs{
validateIntMin(1),
validateIntMax(65535),
}),
},
agentSelfSchemaPortsSerfWAN: {
Computed: true,
Type: schema.TypeInt,
ValidateFunc: makeValidationFunc(agentSelfSchemaPortsSerfWAN, validatorInputs{
validateIntMin(1),
validateIntMax(65535),
}),
},
agentSelfSchemaPortsServer: {
Computed: true,
Type: schema.TypeInt,
ValidateFunc: makeValidationFunc(agentSelfSchemaPortsServer, validatorInputs{
validateIntMin(1),
validateIntMax(65535),
}),
},
},
},
@ -516,16 +455,10 @@ func dataSourceConsulAgentSelf() *schema.Resource {
agentSelfReconnectTimeoutLAN: {
Computed: true,
Type: schema.TypeString,
ValidateFunc: makeValidationFunc(agentSelfReconnectTimeoutLAN, validatorInputs{
validateDurationMin("0ns"),
}),
},
agentSelfReconnectTimeoutWAN: {
Computed: true,
Type: schema.TypeString,
ValidateFunc: makeValidationFunc(agentSelfReconnectTimeoutWAN, validatorInputs{
validateDurationMin("0ns"),
}),
},
agentSelfRejoinAfterLeave: {
Computed: true,
@ -612,9 +545,6 @@ func dataSourceConsulAgentSelf() *schema.Resource {
agentSelfSessionTTLMin: {
Computed: true,
Type: schema.TypeString,
ValidateFunc: makeValidationFunc(agentSelfSessionTTLMin, validatorInputs{
validateDurationMin("0ns"),
}),
},
agentSelfStartJoin: {
Computed: true,
@ -702,9 +632,6 @@ func dataSourceConsulAgentSelf() *schema.Resource {
agentSelfTelemetryCirconusSubmissionInterval: &schema.Schema{
Type: schema.TypeString,
Computed: true,
ValidateFunc: makeValidationFunc(agentSelfTelemetryCirconusSubmissionInterval, validatorInputs{
validateDurationMin("0ns"),
}),
},
agentSelfTelemetryEnableHostname: &schema.Schema{
Type: schema.TypeString,

View File

@ -56,14 +56,12 @@ func dataSourceConsulCatalogNodes() *schema.Resource {
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
catalogNodesNodeID: &schema.Schema{
Type: schema.TypeString,
Computed: true,
ValidateFunc: makeValidationFunc(catalogNodesNodeID, []interface{}{validateRegexp(`^[\S]+$`)}),
Type: schema.TypeString,
Computed: true,
},
catalogNodesNodeName: &schema.Schema{
Type: schema.TypeString,
Computed: true,
ValidateFunc: makeValidationFunc(catalogNodesNodeName, []interface{}{validateRegexp(`^[\S]+$`)}),
Type: schema.TypeString,
Computed: true,
},
catalogNodesNodeAddress: &schema.Schema{
Type: schema.TypeString,

View File

@ -0,0 +1,93 @@
package digitalocean
import (
"fmt"
"strconv"
"github.com/digitalocean/godo"
"github.com/hashicorp/terraform/helper/schema"
)
func dataSourceDigitalOceanImage() *schema.Resource {
return &schema.Resource{
Read: dataSourceDigitalOceanImageRead,
Schema: map[string]*schema.Schema{
"name": &schema.Schema{
Type: schema.TypeString,
Required: true,
Description: "name of the image",
},
// computed attributes
"image": &schema.Schema{
Type: schema.TypeString,
Computed: true,
Description: "slug or id of the image",
},
"min_disk_size": &schema.Schema{
Type: schema.TypeInt,
Computed: true,
Description: "minimum disk size required by the image",
},
"private": &schema.Schema{
Type: schema.TypeBool,
Computed: true,
Description: "whether the image is private",
},
"regions": &schema.Schema{
Type: schema.TypeList,
Computed: true,
Description: "list of the regions that the image is available in",
Elem: &schema.Schema{Type: schema.TypeString},
},
"type": &schema.Schema{
Type: schema.TypeString,
Computed: true,
Description: "type of the image",
},
},
}
}
func dataSourceDigitalOceanImageRead(d *schema.ResourceData, meta interface{}) error {
client := meta.(*godo.Client)
opts := &godo.ListOptions{}
images, _, err := client.Images.ListUser(opts)
if err != nil {
d.SetId("")
return err
}
image, err := findImageByName(images, d.Get("name").(string))
if err != nil {
return err
}
d.SetId(image.Name)
d.Set("name", image.Name)
d.Set("image", strconv.Itoa(image.ID))
d.Set("min_disk_size", image.MinDiskSize)
d.Set("private", !image.Public)
d.Set("regions", image.Regions)
d.Set("type", image.Type)
return nil
}
func findImageByName(images []godo.Image, name string) (*godo.Image, error) {
results := make([]godo.Image, 0)
for _, v := range images {
if v.Name == name {
results = append(results, v)
}
}
if len(results) == 1 {
return &results[0], nil
}
if len(results) == 0 {
return nil, fmt.Errorf("no user image found with name %s", name)
}
return nil, fmt.Errorf("too many user images found with name %s (found %d, expected 1)", name, len(results))
}
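findImageByName enforces that a data-source lookup resolves to exactly one match, erroring on both zero and multiple hits; the zero/one/many contract can be sketched standalone (hypothetical name, plain strings instead of godo.Image):

```go
package main

import (
	"errors"
	"fmt"
)

// findUnique returns the single element equal to want, and an error
// when there are zero or multiple matches — mirroring the data-source
// contract that a lookup must resolve unambiguously.
func findUnique(names []string, want string) (string, error) {
	var hits []string
	for _, n := range names {
		if n == want {
			hits = append(hits, n)
		}
	}
	switch len(hits) {
	case 1:
		return hits[0], nil
	case 0:
		return "", errors.New("no image found with name " + want)
	default:
		return "", fmt.Errorf("too many images found with name %s (found %d, expected 1)", want, len(hits))
	}
}

func main() {
	img, err := findUnique([]string{"snap-1", "snap-2"}, "snap-2")
	fmt.Println(img, err == nil) // → snap-2 true
}
```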

View File

@ -0,0 +1,122 @@
package digitalocean
import (
"fmt"
"log"
"regexp"
"testing"
"github.com/digitalocean/godo"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestAccDigitalOceanImage_Basic(t *testing.T) {
var droplet godo.Droplet
var snapshotsId []int
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckDigitalOceanDropletDestroy,
Steps: []resource.TestStep{
{
Config: testAccCheckDigitalOceanDropletConfig_basic(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckDigitalOceanDropletExists("digitalocean_droplet.foobar", &droplet),
takeSnapshotsOfDroplet(rInt, &droplet, &snapshotsId),
),
},
{
Config: testAccCheckDigitalOceanImageConfig_basic(rInt, 1),
Check: resource.ComposeTestCheckFunc(
resource.TestCheckResourceAttr(
"data.digitalocean_image.foobar", "name", fmt.Sprintf("snap-%d-1", rInt)),
resource.TestCheckResourceAttr(
"data.digitalocean_image.foobar", "min_disk_size", "20"),
resource.TestCheckResourceAttr(
"data.digitalocean_image.foobar", "private", "true"),
resource.TestCheckResourceAttr(
"data.digitalocean_image.foobar", "type", "snapshot"),
),
},
{
Config: testAccCheckDigitalOceanImageConfig_basic(rInt, 0),
ExpectError: regexp.MustCompile(`.*too many user images found with name snap-.*\ .found 2, expected 1.`),
},
{
Config: testAccCheckDigitalOceanImageConfig_nonexisting(rInt),
Destroy: false,
ExpectError: regexp.MustCompile(`.*no user image found with name snap-.*-nonexisting`),
},
{
Config: " ",
Check: resource.ComposeTestCheckFunc(
deleteSnapshots(&snapshotsId),
),
},
},
})
}
func takeSnapshotsOfDroplet(rInt int, droplet *godo.Droplet, snapshotsId *[]int) resource.TestCheckFunc {
return func(s *terraform.State) error {
client := testAccProvider.Meta().(*godo.Client)
for i := 0; i < 3; i++ {
err := takeSnapshotOfDroplet(rInt, i%2, droplet)
if err != nil {
return err
}
}
retrieveDroplet, _, err := client.Droplets.Get((*droplet).ID)
if err != nil {
return err
}
*snapshotsId = retrieveDroplet.SnapshotIDs
return nil
}
}
func takeSnapshotOfDroplet(rInt, sInt int, droplet *godo.Droplet) error {
client := testAccProvider.Meta().(*godo.Client)
action, _, err := client.DropletActions.Snapshot((*droplet).ID, fmt.Sprintf("snap-%d-%d", rInt, sInt))
if err != nil {
return err
}
waitForAction(client, action)
return nil
}
func deleteSnapshots(snapshotsId *[]int) resource.TestCheckFunc {
return func(s *terraform.State) error {
log.Printf("[DEBUG] Deleting snapshots")
client := testAccProvider.Meta().(*godo.Client)
snapshots := *snapshotsId
for _, value := range snapshots {
log.Printf("[DEBUG] Deleting snapshot %d", value)
_, err := client.Images.Delete(value)
if err != nil {
return err
}
}
return nil
}
}
func testAccCheckDigitalOceanImageConfig_basic(rInt, sInt int) string {
return fmt.Sprintf(`
data "digitalocean_image" "foobar" {
name = "snap-%d-%d"
}
`, rInt, sInt)
}
func testAccCheckDigitalOceanImageConfig_nonexisting(rInt int) string {
return fmt.Sprintf(`
data "digitalocean_image" "foobar" {
name = "snap-%d-nonexisting"
}
`, rInt)
}

View File

@ -17,6 +17,10 @@ func Provider() terraform.ResourceProvider {
},
},
DataSourcesMap: map[string]*schema.Resource{
"digitalocean_image": dataSourceDigitalOceanImage(),
},
ResourcesMap: map[string]*schema.Resource{
"digitalocean_domain": resourceDigitalOceanDomain(),
"digitalocean_droplet": resourceDigitalOceanDroplet(),

View File

@ -30,6 +30,10 @@ func TestAccDigitalOceanDroplet_Basic(t *testing.T) {
"digitalocean_droplet.foobar", "name", fmt.Sprintf("foo-%d", rInt)),
resource.TestCheckResourceAttr(
"digitalocean_droplet.foobar", "size", "512mb"),
resource.TestCheckResourceAttr(
"digitalocean_droplet.foobar", "price_hourly", "0.00744"),
resource.TestCheckResourceAttr(
"digitalocean_droplet.foobar", "price_monthly", "5"),
resource.TestCheckResourceAttr(
"digitalocean_droplet.foobar", "image", "centos-7-x64"),
resource.TestCheckResourceAttr(
@ -37,6 +41,11 @@ func TestAccDigitalOceanDroplet_Basic(t *testing.T) {
resource.TestCheckResourceAttr(
"digitalocean_droplet.foobar", "user_data", "foobar"),
),
Destroy: false,
},
{
Config: testAccCheckDigitalOceanDropletConfig_basic(rInt),
PlanOnly: true,
},
},
})

View File

@ -647,6 +647,72 @@ func resourceServiceV1() *schema.Resource {
},
},
"gcslogging": {
Type: schema.TypeSet,
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
// Required fields
"name": {
Type: schema.TypeString,
Required: true,
Description: "Unique name to refer to this logging setup",
},
"email": {
Type: schema.TypeString,
Required: true,
Description: "The email address associated with the target GCS bucket on your account.",
},
"bucket_name": {
Type: schema.TypeString,
Required: true,
Description: "The name of the bucket in which to store the logs.",
},
"secret_key": {
Type: schema.TypeString,
Required: true,
Description: "The secret key associated with the target gcs bucket on your account.",
},
// Optional fields
"path": {
Type: schema.TypeString,
Optional: true,
Description: "Path to store the files. Must end with a trailing slash",
},
"gzip_level": {
Type: schema.TypeInt,
Optional: true,
Default: 0,
Description: "Gzip Compression level",
},
"period": {
Type: schema.TypeInt,
Optional: true,
Default: 3600,
Description: "How frequently the logs should be transferred, in seconds (Default 3600)",
},
"format": {
Type: schema.TypeString,
Optional: true,
Default: "%h %l %u %t %r %>s",
Description: "Apache-style string or VCL variables to use for log formatting",
},
"timestamp_format": {
Type: schema.TypeString,
Optional: true,
Default: "%Y-%m-%dT%H:%M:%S.000",
Description: "specified timestamp formatting (default `%Y-%m-%dT%H:%M:%S.000`)",
},
"response_condition": {
Type: schema.TypeString,
Optional: true,
Default: "",
Description: "Name of a condition to apply this logging.",
},
},
},
},
"response_object": {
Type: schema.TypeSet,
Optional: true,
@ -1450,6 +1516,59 @@ func resourceServiceV1Update(d *schema.ResourceData, meta interface{}) error {
}
}
// find difference in gcslogging
if d.HasChange("gcslogging") {
os, ns := d.GetChange("gcslogging")
if os == nil {
os = new(schema.Set)
}
if ns == nil {
ns = new(schema.Set)
}
oss := os.(*schema.Set)
nss := ns.(*schema.Set)
removeGcslogging := oss.Difference(nss).List()
addGcslogging := nss.Difference(oss).List()
// DELETE old gcslogging configurations
for _, pRaw := range removeGcslogging {
sf := pRaw.(map[string]interface{})
opts := gofastly.DeleteGCSInput{
Service: d.Id(),
Version: latestVersion,
Name: sf["name"].(string),
}
log.Printf("[DEBUG] Fastly gcslogging removal opts: %#v", opts)
err := conn.DeleteGCS(&opts)
if err != nil {
return err
}
}
// POST new/updated gcslogging
for _, pRaw := range addGcslogging {
sf := pRaw.(map[string]interface{})
opts := gofastly.CreateGCSInput{
Service: d.Id(),
Version: latestVersion,
Name: sf["name"].(string),
User: sf["email"].(string),
Bucket: sf["bucket_name"].(string),
SecretKey: sf["secret_key"].(string),
Format: sf["format"].(string),
ResponseCondition: sf["response_condition"].(string),
}
log.Printf("[DEBUG] Create GCS Opts: %#v", opts)
_, err := conn.CreateGCS(&opts)
if err != nil {
return err
}
}
}
// find difference in Response Object
if d.HasChange("response_object") {
or, nr := d.GetChange("response_object")
@ -1883,6 +2002,22 @@ func resourceServiceV1Read(d *schema.ResourceData, meta interface{}) error {
log.Printf("[WARN] Error setting Sumologic for (%s): %s", d.Id(), err)
}
// refresh GCS Logging
log.Printf("[DEBUG] Refreshing GCS for (%s)", d.Id())
GCSList, err := conn.ListGCSs(&gofastly.ListGCSsInput{
Service: d.Id(),
Version: s.ActiveVersion.Number,
})
if err != nil {
return fmt.Errorf("[ERR] Error looking up GCS for (%s), version (%s): %s", d.Id(), s.ActiveVersion.Number, err)
}
gcsl := flattenGCS(GCSList)
if err := d.Set("gcslogging", gcsl); err != nil {
log.Printf("[WARN] Error setting gcslogging for (%s): %s", d.Id(), err)
}
// refresh Response Objects
log.Printf("[DEBUG] Refreshing Response Object for (%s)", d.Id())
responseObjectList, err := conn.ListResponseObjects(&gofastly.ListResponseObjectsInput{
@ -2350,6 +2485,35 @@ func flattenSumologics(sumologicList []*gofastly.Sumologic) []map[string]interfa
return l
}
func flattenGCS(gcsList []*gofastly.GCS) []map[string]interface{} {
var GCSList []map[string]interface{}
for _, currentGCS := range gcsList {
// Convert gcs to a map for saving to state.
GCSMapString := map[string]interface{}{
"name": currentGCS.Name,
"email": currentGCS.User,
"bucket_name": currentGCS.Bucket,
"secret_key": currentGCS.SecretKey,
"path": currentGCS.Path,
"period": int(currentGCS.Period),
"gzip_level": int(currentGCS.GzipLevel),
"response_condition": currentGCS.ResponseCondition,
"format": currentGCS.Format,
}
// prune any empty values that come from the default string value in structs
for k, v := range GCSMapString {
if v == "" {
delete(GCSMapString, k)
}
}
GCSList = append(GCSList, GCSMapString)
}
return GCSList
}
func flattenResponseObjects(responseObjectList []*gofastly.ResponseObject) []map[string]interface{} {
var rol []map[string]interface{}
for _, ro := range responseObjectList {


@@ -0,0 +1,131 @@
package fastly
import (
"fmt"
"reflect"
"testing"
"github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
gofastly "github.com/sethvargo/go-fastly"
)
func TestResourceFastlyFlattenGCS(t *testing.T) {
cases := []struct {
remote []*gofastly.GCS
local []map[string]interface{}
}{
{
remote: []*gofastly.GCS{
&gofastly.GCS{
Name: "GCS collector",
User: "email@example.com",
Bucket: "bucketName",
SecretKey: "secretKey",
Format: "log format",
Period: 3600,
GzipLevel: 0,
},
},
local: []map[string]interface{}{
map[string]interface{}{
"name": "GCS collector",
"email": "email@example.com",
"bucket_name": "bucketName",
"secret_key": "secretKey",
"format": "log format",
"period": 3600,
"gzip_level": 0,
},
},
},
}
for _, c := range cases {
out := flattenGCS(c.remote)
if !reflect.DeepEqual(out, c.local) {
t.Fatalf("Error matching:\nexpected: %#v\ngot: %#v", c.local, out)
}
}
}
func TestAccFastlyServiceV1_gcslogging(t *testing.T) {
var service gofastly.ServiceDetail
name := fmt.Sprintf("tf-test-%s", acctest.RandString(10))
gcsName := fmt.Sprintf("gcs %s", acctest.RandString(10))
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckServiceV1Destroy,
Steps: []resource.TestStep{
{
Config: testAccServiceV1Config_gcs(name, gcsName),
Check: resource.ComposeTestCheckFunc(
testAccCheckServiceV1Exists("fastly_service_v1.foo", &service),
testAccCheckFastlyServiceV1Attributes_gcs(&service, name, gcsName),
),
},
},
})
}
func testAccCheckFastlyServiceV1Attributes_gcs(service *gofastly.ServiceDetail, name, gcsName string) resource.TestCheckFunc {
return func(s *terraform.State) error {
if service.Name != name {
return fmt.Errorf("Bad name, expected (%s), got (%s)", name, service.Name)
}
conn := testAccProvider.Meta().(*FastlyClient).conn
gcsList, err := conn.ListGCSs(&gofastly.ListGCSsInput{
Service: service.ID,
Version: service.ActiveVersion.Number,
})
if err != nil {
return fmt.Errorf("[ERR] Error looking up GCSs for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err)
}
if len(gcsList) != 1 {
return fmt.Errorf("GCS missing, expected: 1, got: %d", len(gcsList))
}
if gcsList[0].Name != gcsName {
return fmt.Errorf("GCS name mismatch, expected: %s, got: %#v", gcsName, gcsList[0].Name)
}
return nil
}
}
func testAccServiceV1Config_gcs(name, gcsName string) string {
backendName := fmt.Sprintf("%s.aws.amazon.com", acctest.RandString(3))
return fmt.Sprintf(`
resource "fastly_service_v1" "foo" {
name = "%s"
domain {
name = "test.notadomain.com"
comment = "tf-testing-domain"
}
backend {
address = "%s"
name = "tf-test-backend"
}
gcslogging {
name = "%s"
email = "email@example.com"
bucket_name = "bucketName"
secret_key = "secretKey"
format = "log format"
response_condition = ""
}
force_destroy = true
}`, name, backendName, gcsName)
}


@@ -3,6 +3,7 @@ package github
import (
"context"
"errors"
"net/http"
"github.com/google/go-github/github"
"github.com/hashicorp/terraform/helper/schema"
@@ -117,8 +118,12 @@ func resourceGithubBranchProtectionRead(d *schema.ResourceData, meta interface{}
githubProtection, _, err := client.Repositories.GetBranchProtection(context.TODO(), meta.(*Organization).name, r, b)
if err != nil {
if err, ok := err.(*github.ErrorResponse); ok && err.Response.StatusCode == http.StatusNotFound {
d.SetId("")
return nil
}
return err
}
d.Set("repository", r)


@@ -0,0 +1,65 @@
package google
import (
"testing"
"github.com/hashicorp/terraform/helper/resource"
)
func TestAccComputeNetwork_importBasic(t *testing.T) {
resourceName := "google_compute_network.foobar"
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckComputeNetworkDestroy,
Steps: []resource.TestStep{
{
Config: testAccComputeNetwork_basic,
}, {
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
//ImportStateVerifyIgnore: []string{"ipv4_range", "name"},
},
},
})
}
func TestAccComputeNetwork_importAuto_subnet(t *testing.T) {
resourceName := "google_compute_network.bar"
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckComputeNetworkDestroy,
Steps: []resource.TestStep{
{
Config: testAccComputeNetwork_auto_subnet,
}, {
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
},
},
})
}
func TestAccComputeNetwork_importCustom_subnet(t *testing.T) {
resourceName := "google_compute_network.baz"
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckComputeNetworkDestroy,
Steps: []resource.TestStep{
{
Config: testAccComputeNetwork_custom_subnet,
}, {
ResourceName: resourceName,
ImportState: true,
ImportStateVerify: true,
},
},
})
}


@@ -88,6 +88,7 @@ func resourceComputeForwardingRule() *schema.Resource {
Type: schema.TypeSet,
Elem: &schema.Schema{Type: schema.TypeString},
Optional: true,
ForceNew: true,
Set: schema.HashString,
},


@@ -14,6 +14,9 @@ func resourceComputeNetwork() *schema.Resource {
Create: resourceComputeNetworkCreate,
Read: resourceComputeNetworkRead,
Delete: resourceComputeNetworkDelete,
Importer: &schema.ResourceImporter{
State: schema.ImportStatePassthrough,
},
Schema: map[string]*schema.Schema{
"name": &schema.Schema{
@@ -142,6 +145,9 @@ func resourceComputeNetworkRead(d *schema.ResourceData, meta interface{}) error
d.Set("gateway_ipv4", network.GatewayIPv4)
d.Set("self_link", network.SelfLink)
d.Set("ipv4_range", network.IPv4Range)
d.Set("name", network.Name)
d.Set("auto_create_subnetworks", network.AutoCreateSubnetworks)
return nil
}


@@ -29,7 +29,7 @@ func TestAccRecord_basic(t *testing.T) {
testAccCheckRecordUseClientSubnet(&record, true),
testAccCheckRecordRegionName(&record, []string{"cal"}),
// testAccCheckRecordAnswerMetaWeight(&record, 10),
testAccCheckRecordAnswerRdata(&record, 0, "test1.terraform-record-test.io"),
),
},
},
@@ -52,7 +52,7 @@ func TestAccRecord_updated(t *testing.T) {
testAccCheckRecordUseClientSubnet(&record, true),
testAccCheckRecordRegionName(&record, []string{"cal"}),
// testAccCheckRecordAnswerMetaWeight(&record, 10),
testAccCheckRecordAnswerRdata(&record, 0, "test1.terraform-record-test.io"),
),
},
resource.TestStep{
@@ -64,7 +64,7 @@
testAccCheckRecordUseClientSubnet(&record, false),
testAccCheckRecordRegionName(&record, []string{"ny", "wa"}),
// testAccCheckRecordAnswerMetaWeight(&record, 5),
testAccCheckRecordAnswerRdata(&record, 0, "test2.terraform-record-test.io"),
),
},
},
@@ -85,7 +85,31 @@ func TestAccRecord_SPF(t *testing.T) {
testAccCheckRecordDomain(&record, "terraform-record-test.io"),
testAccCheckRecordTTL(&record, 86400),
testAccCheckRecordUseClientSubnet(&record, true),
testAccCheckRecordAnswerRdata(&record, 0, "v=DKIM1; k=rsa; p=XXXXXXXX"),
),
},
},
})
}
func TestAccRecord_SRV(t *testing.T) {
var record dns.Record
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckRecordDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccRecordSRV,
Check: resource.ComposeTestCheckFunc(
testAccCheckRecordExists("ns1_record.srv", &record),
testAccCheckRecordDomain(&record, "_some-server._tcp.terraform-record-test.io"),
testAccCheckRecordTTL(&record, 86400),
testAccCheckRecordUseClientSubnet(&record, true),
testAccCheckRecordAnswerRdata(&record, 0, "10"),
testAccCheckRecordAnswerRdata(&record, 1, "0"),
testAccCheckRecordAnswerRdata(&record, 2, "2380"),
testAccCheckRecordAnswerRdata(&record, 3, "node-1.terraform-record-test.io"),
),
},
},
@@ -206,12 +230,12 @@ func testAccCheckRecordAnswerMetaWeight(r *dns.Record, expected float64) resourc
}
}
func testAccCheckRecordAnswerRdata(r *dns.Record, idx int, expected string) resource.TestCheckFunc {
return func(s *terraform.State) error {
recordAnswer := r.Answers[0]
recordAnswerString := recordAnswer.Rdata[idx]
if recordAnswerString != expected {
return fmt.Errorf("Answers[0].Rdata[%d]: got: %#v want: %#v", idx, recordAnswerString, expected)
}
return nil
}
@@ -335,3 +359,20 @@ resource "ns1_zone" "test" {
zone = "terraform-record-test.io"
}
`
const testAccRecordSRV = `
resource "ns1_record" "srv" {
zone = "${ns1_zone.test.zone}"
domain = "_some-server._tcp.${ns1_zone.test.zone}"
type = "SRV"
ttl = 86400
use_client_subnet = "true"
answers {
answer = "10 0 2380 node-1.${ns1_zone.test.zone}"
}
}
resource "ns1_zone" "test" {
zone = "terraform-record-test.io"
}
`


@@ -0,0 +1,24 @@
package oneandone
import (
"github.com/1and1/oneandone-cloudserver-sdk-go"
)
type Config struct {
Token string
Retries int
Endpoint string
API *oneandone.API
}
func (c *Config) Client() (*Config, error) {
token := oneandone.SetToken(c.Token)
if len(c.Endpoint) > 0 {
c.API = oneandone.New(token, c.Endpoint)
} else {
c.API = oneandone.New(token, oneandone.BaseUrl)
}
return c, nil
}


@@ -0,0 +1,56 @@
package oneandone
import (
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/terraform"
)
func Provider() terraform.ResourceProvider {
return &schema.Provider{
Schema: map[string]*schema.Schema{
"token": {
Type: schema.TypeString,
Required: true,
DefaultFunc: schema.EnvDefaultFunc("ONEANDONE_TOKEN", nil),
Description: "1&1 token for API operations.",
},
"retries": {
Type: schema.TypeInt,
Optional: true,
// Default and DefaultFunc are mutually exclusive in helper/schema;
// fold the fallback into EnvDefaultFunc so InternalValidate passes.
DefaultFunc: schema.EnvDefaultFunc("ONEANDONE_RETRIES", 50),
},
"endpoint": {
Type: schema.TypeString,
Optional: true,
DefaultFunc: schema.EnvDefaultFunc("ONEANDONE_ENDPOINT", oneandone.BaseUrl),
},
},
ResourcesMap: map[string]*schema.Resource{
"oneandone_server": resourceOneandOneServer(),
"oneandone_firewall_policy": resourceOneandOneFirewallPolicy(),
"oneandone_private_network": resourceOneandOnePrivateNetwork(),
"oneandone_public_ip": resourceOneandOnePublicIp(),
"oneandone_shared_storage": resourceOneandOneSharedStorage(),
"oneandone_monitoring_policy": resourceOneandOneMonitoringPolicy(),
"oneandone_loadbalancer": resourceOneandOneLoadbalancer(),
"oneandone_vpn": resourceOneandOneVPN(),
},
ConfigureFunc: providerConfigure,
}
}
func providerConfigure(d *schema.ResourceData) (interface{}, error) {
var endpoint string
if d.Get("endpoint").(string) != oneandone.BaseUrl {
endpoint = d.Get("endpoint").(string)
}
config := Config{
Token: d.Get("token").(string),
Retries: d.Get("retries").(int),
Endpoint: endpoint,
}
return config.Client()
}


@@ -0,0 +1,36 @@
package oneandone
import (
"os"
"testing"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/terraform"
)
var testAccProviders map[string]terraform.ResourceProvider
var testAccProvider *schema.Provider
func init() {
testAccProvider = Provider().(*schema.Provider)
testAccProviders = map[string]terraform.ResourceProvider{
"oneandone": testAccProvider,
}
}
func TestProvider(t *testing.T) {
if err := Provider().(*schema.Provider).InternalValidate(); err != nil {
t.Fatalf("err: %s", err)
}
}
func TestProvider_impl(t *testing.T) {
var _ terraform.ResourceProvider = Provider()
}
func testAccPreCheck(t *testing.T) {
if v := os.Getenv("ONEANDONE_TOKEN"); v == "" {
t.Fatal("ONEANDONE_TOKEN must be set for acceptance tests")
}
}

View File

@@ -0,0 +1,274 @@
package oneandone
import (
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/helper/validation"
"strings"
)
func resourceOneandOneFirewallPolicy() *schema.Resource {
return &schema.Resource{
Create: resourceOneandOneFirewallCreate,
Read: resourceOneandOneFirewallRead,
Update: resourceOneandOneFirewallUpdate,
Delete: resourceOneandOneFirewallDelete,
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
},
"description": {
Type: schema.TypeString,
Optional: true,
},
"rules": {
Type: schema.TypeList,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"protocol": {
Type: schema.TypeString,
Required: true,
},
"port_from": {
Type: schema.TypeInt,
Optional: true,
ValidateFunc: validation.IntBetween(1, 65535),
},
"port_to": {
Type: schema.TypeInt,
Optional: true,
ValidateFunc: validation.IntBetween(1, 65535),
},
"source_ip": {
Type: schema.TypeString,
Optional: true,
},
"id": {
Type: schema.TypeString,
Computed: true,
},
},
},
Required: true,
},
},
}
}
func resourceOneandOneFirewallCreate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
req := oneandone.FirewallPolicyRequest{
Name: d.Get("name").(string),
}
if desc, ok := d.GetOk("description"); ok {
req.Description = desc.(string)
}
req.Rules = getRules(d)
fw_id, fw, err := config.API.CreateFirewallPolicy(&req)
if err != nil {
return err
}
err = config.API.WaitForState(fw, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
d.SetId(fw_id)
return resourceOneandOneFirewallRead(d, meta)
}
func resourceOneandOneFirewallUpdate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
if d.HasChange("name") || d.HasChange("description") {
fw, err := config.API.UpdateFirewallPolicy(d.Id(), d.Get("name").(string), d.Get("description").(string))
if err != nil {
return err
}
err = config.API.WaitForState(fw, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
}
if d.HasChange("rules") {
oldR, newR := d.GetChange("rules")
oldValues := oldR.([]interface{})
newValues := newR.([]interface{})
if len(oldValues) > len(newValues) {
diff := difference(oldValues, newValues)
for _, old := range diff {
o := old.(map[string]interface{})
if o["id"] != nil {
old_id := o["id"].(string)
fw, err := config.API.DeleteFirewallPolicyRule(d.Id(), old_id)
if err != nil {
return err
}
err = config.API.WaitForState(fw, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
}
}
} else {
var rules []oneandone.FirewallPolicyRule
for _, raw := range newValues {
rl := raw.(map[string]interface{})
if rl["id"].(string) == "" {
rule := oneandone.FirewallPolicyRule{
Protocol: rl["protocol"].(string),
}
if rl["port_from"] != nil {
rule.PortFrom = oneandone.Int2Pointer(rl["port_from"].(int))
}
if rl["port_to"] != nil {
rule.PortTo = oneandone.Int2Pointer(rl["port_to"].(int))
}
if rl["source_ip"] != nil {
rule.SourceIp = rl["source_ip"].(string)
}
rules = append(rules, rule)
}
}
if len(rules) > 0 {
fw, err := config.API.AddFirewallPolicyRules(d.Id(), rules)
if err != nil {
return err
}
err = config.API.WaitForState(fw, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
}
}
}
return resourceOneandOneFirewallRead(d, meta)
}
func resourceOneandOneFirewallRead(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
fw, err := config.API.GetFirewallPolicy(d.Id())
if err != nil {
if strings.Contains(err.Error(), "404") {
d.SetId("")
return nil
}
return err
}
d.Set("rules", readRules(d, fw.Rules))
d.Set("description", fw.Description)
return nil
}
func resourceOneandOneFirewallDelete(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
fp, err := config.API.DeleteFirewallPolicy(d.Id())
if err != nil {
return err
}
err = config.API.WaitUntilDeleted(fp)
if err != nil {
return err
}
return nil
}
func readRules(d *schema.ResourceData, rules []oneandone.FirewallPolicyRule) interface{} {
rawRules := d.Get("rules").([]interface{})
counter := 0
for _, rR := range rawRules {
if len(rules) > counter {
rawMap := rR.(map[string]interface{})
rawMap["id"] = rules[counter].Id
if rules[counter].SourceIp != "0.0.0.0" {
rawMap["source_ip"] = rules[counter].SourceIp
}
}
counter++
}
return rawRules
}
func getRules(d *schema.ResourceData) []oneandone.FirewallPolicyRule {
var rules []oneandone.FirewallPolicyRule
if raw, ok := d.GetOk("rules"); ok {
rawRules := raw.([]interface{})
for _, raw := range rawRules {
rl := raw.(map[string]interface{})
rule := oneandone.FirewallPolicyRule{
Protocol: rl["protocol"].(string),
}
if rl["port_from"] != nil {
rule.PortFrom = oneandone.Int2Pointer(rl["port_from"].(int))
}
if rl["port_to"] != nil {
rule.PortTo = oneandone.Int2Pointer(rl["port_to"].(int))
}
if rl["source_ip"] != nil {
rule.SourceIp = rl["source_ip"].(string)
}
rules = append(rules, rule)
}
}
return rules
}
func difference(oldV, newV []interface{}) (toreturn []interface{}) {
var (
lenMin int
longest []interface{}
)
// Determine the shortest length and the longest slice
if len(oldV) < len(newV) {
lenMin = len(oldV)
longest = newV
} else {
lenMin = len(newV)
longest = oldV
}
// compare common indices
for i := 0; i < lenMin; i++ {
if oldV[i] == nil || newV[i] == nil {
continue
}
if oldV[i].(map[string]interface{})["id"] != newV[i].(map[string]interface{})["id"] {
// ids differ at this index: the old rule was removed, so return it for deletion
toreturn = append(toreturn, oldV[i])
}
}
// add indices not in common
for _, v := range longest[lenMin:] {
toreturn = append(toreturn, v)
}
return toreturn
}


@@ -0,0 +1,178 @@
package oneandone
import (
"fmt"
"testing"
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"os"
"time"
)
func TestAccOneandoneFirewall_Basic(t *testing.T) {
var firewall oneandone.FirewallPolicy
name := "test"
name_updated := "test1"
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Providers: testAccProviders,
CheckDestroy: testAccCheckDOneandoneFirewallDestroyCheck,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandoneFirewall_basic, name),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandoneFirewallExists("oneandone_firewall_policy.fw", &firewall),
testAccCheckOneandoneFirewallAttributes("oneandone_firewall_policy.fw", name),
resource.TestCheckResourceAttr("oneandone_firewall_policy.fw", "name", name),
),
},
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandoneFirewall_update, name_updated),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandoneFirewallExists("oneandone_firewall_policy.fw", &firewall),
testAccCheckOneandoneFirewallAttributes("oneandone_firewall_policy.fw", name_updated),
resource.TestCheckResourceAttr("oneandone_firewall_policy.fw", "name", name_updated),
),
},
},
})
}
func testAccCheckDOneandoneFirewallDestroyCheck(s *terraform.State) error {
for _, rs := range s.RootModule().Resources {
if rs.Type != "oneandone_firewall_policy" {
continue
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
_, err := api.GetFirewallPolicy(rs.Primary.ID)
if err == nil {
return fmt.Errorf("Firewall Policy still exists %s", rs.Primary.ID)
}
}
return nil
}
func testAccCheckOneandoneFirewallAttributes(n string, reverse_dns string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.Attributes["name"] != reverse_dns {
return fmt.Errorf("Bad name: expected %s : found %s ", reverse_dns, rs.Primary.Attributes["name"])
}
return nil
}
}
func testAccCheckOneandoneFirewallExists(n string, fw_p *oneandone.FirewallPolicy) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No Record ID is set")
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
found_fw, err := api.GetFirewallPolicy(rs.Primary.ID)
if err != nil {
return fmt.Errorf("Error occurred while fetching Firewall Policy: %s", rs.Primary.ID)
}
if found_fw.Id != rs.Primary.ID {
return fmt.Errorf("Record not found")
}
*fw_p = *found_fw
return nil
}
}
const testAccCheckOneandoneFirewall_basic = `
resource "oneandone_firewall_policy" "fw" {
name = "%s"
rules = [
{
"protocol" = "TCP"
"port_from" = 80
"port_to" = 80
"source_ip" = "0.0.0.0"
},
{
"protocol" = "ICMP"
"source_ip" = "0.0.0.0"
},
{
"protocol" = "TCP"
"port_from" = 43
"port_to" = 43
"source_ip" = "0.0.0.0"
},
{
"protocol" = "TCP"
"port_from" = 22
"port_to" = 22
"source_ip" = "0.0.0.0"
}
]
}`
const testAccCheckOneandoneFirewall_update = `
resource "oneandone_firewall_policy" "fw" {
name = "%s"
rules = [
{
"protocol" = "TCP"
"port_from" = 80
"port_to" = 80
"source_ip" = "0.0.0.0"
},
{
"protocol" = "ICMP"
"source_ip" = "0.0.0.0"
},
{
"protocol" = "TCP"
"port_from" = 43
"port_to" = 43
"source_ip" = "0.0.0.0"
},
{
"protocol" = "TCP"
"port_from" = 22
"port_to" = 22
"source_ip" = "0.0.0.0"
},
{
"protocol" = "TCP"
"port_from" = 88
"port_to" = 88
"source_ip" = "0.0.0.0"
},
]
}`


@@ -0,0 +1,370 @@
package oneandone
import (
"fmt"
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/helper/validation"
"log"
"strings"
)
func resourceOneandOneLoadbalancer() *schema.Resource {
return &schema.Resource{
Create: resourceOneandOneLoadbalancerCreate,
Read: resourceOneandOneLoadbalancerRead,
Update: resourceOneandOneLoadbalancerUpdate,
Delete: resourceOneandOneLoadbalancerDelete,
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
},
"description": {
Type: schema.TypeString,
Optional: true,
},
"method": {
Type: schema.TypeString,
Required: true,
ValidateFunc: validateMethod,
},
"datacenter": {
Type: schema.TypeString,
Optional: true,
},
"persistence": {
Type: schema.TypeBool,
Optional: true,
},
"persistence_time": {
Type: schema.TypeInt,
Optional: true,
},
"health_check_test": {
Type: schema.TypeString,
Optional: true,
},
"health_check_interval": {
Type: schema.TypeInt,
Optional: true,
},
"health_check_path": {
Type: schema.TypeString,
Optional: true,
},
"health_check_path_parser": {
Type: schema.TypeString,
Optional: true,
},
"rules": {
Type: schema.TypeList,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"protocol": {
Type: schema.TypeString,
Required: true,
},
"port_balancer": {
Type: schema.TypeInt,
Required: true,
ValidateFunc: validation.IntBetween(1, 65535),
},
"port_server": {
Type: schema.TypeInt,
Required: true,
ValidateFunc: validation.IntBetween(1, 65535),
},
"source_ip": {
Type: schema.TypeString,
Required: true,
},
"id": {
Type: schema.TypeString,
Computed: true,
},
},
},
Required: true,
},
},
}
}
func resourceOneandOneLoadbalancerCreate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
req := oneandone.LoadBalancerRequest{
Name: d.Get("name").(string),
Rules: getLBRules(d),
}
if raw, ok := d.GetOk("description"); ok {
req.Description = raw.(string)
}
if raw, ok := d.GetOk("datacenter"); ok {
dcs, err := config.API.ListDatacenters()
if err != nil {
return fmt.Errorf("An error occurred while fetching list of datacenters %s", err)
}
datacenter := raw.(string)
for _, dc := range dcs {
if strings.EqualFold(dc.CountryCode, datacenter) {
req.DatacenterId = dc.Id
break
}
}
}
if raw, ok := d.GetOk("method"); ok {
req.Method = raw.(string)
}
if raw, ok := d.GetOk("persistence"); ok {
req.Persistence = oneandone.Bool2Pointer(raw.(bool))
}
if raw, ok := d.GetOk("persistence_time"); ok {
req.PersistenceTime = oneandone.Int2Pointer(raw.(int))
}
if raw, ok := d.GetOk("health_check_test"); ok {
req.HealthCheckTest = raw.(string)
}
if raw, ok := d.GetOk("health_check_interval"); ok {
req.HealthCheckInterval = oneandone.Int2Pointer(raw.(int))
}
if raw, ok := d.GetOk("health_check_path"); ok {
req.HealthCheckPath = raw.(string)
}
if raw, ok := d.GetOk("health_check_path_parser"); ok {
req.HealthCheckPathParser = raw.(string)
}
lb_id, lb, err := config.API.CreateLoadBalancer(&req)
if err != nil {
return err
}
err = config.API.WaitForState(lb, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
d.SetId(lb_id)
return resourceOneandOneLoadbalancerRead(d, meta)
}
func getLBRules(d *schema.ResourceData) []oneandone.LoadBalancerRule {
var rules []oneandone.LoadBalancerRule
if raw, ok := d.GetOk("rules"); ok {
rawRules := raw.([]interface{})
log.Println("[DEBUG] raw rules:", raw)
for _, raw := range rawRules {
rl := raw.(map[string]interface{})
rule := oneandone.LoadBalancerRule{
Protocol: rl["protocol"].(string),
}
if rl["port_balancer"] != nil {
rule.PortBalancer = uint16(rl["port_balancer"].(int))
}
if rl["port_server"] != nil {
rule.PortServer = uint16(rl["port_server"].(int))
}
if rl["source_ip"] != nil {
rule.Source = rl["source_ip"].(string)
}
rules = append(rules, rule)
}
}
return rules
}
func resourceOneandOneLoadbalancerUpdate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
if d.HasChange("name") || d.HasChange("description") || d.HasChange("method") || d.HasChange("persistence") || d.HasChange("persistence_time") || d.HasChange("health_check_test") || d.HasChange("health_check_interval") || d.HasChange("health_check_path") || d.HasChange("health_check_path_parser") {
lb := oneandone.LoadBalancerRequest{}
if d.HasChange("name") {
_, n := d.GetChange("name")
lb.Name = n.(string)
}
if d.HasChange("description") {
_, n := d.GetChange("description")
lb.Description = n.(string)
}
if d.HasChange("method") {
_, n := d.GetChange("method")
lb.Method = (n.(string))
}
if d.HasChange("persistence") {
_, n := d.GetChange("persistence")
lb.Persistence = oneandone.Bool2Pointer(n.(bool))
}
if d.HasChange("persistence_time") {
_, n := d.GetChange("persistence_time")
lb.PersistenceTime = oneandone.Int2Pointer(n.(int))
}
if d.HasChange("health_check_test") {
_, n := d.GetChange("health_check_test")
lb.HealthCheckTest = n.(string)
}
if d.HasChange("health_check_path") {
_, n := d.GetChange("health_check_path")
lb.HealthCheckPath = n.(string)
}
if d.HasChange("health_check_path_parser") {
_, n := d.GetChange("health_check_path_parser")
lb.HealthCheckPathParser = n.(string)
}
ss, err := config.API.UpdateLoadBalancer(d.Id(), &lb)
if err != nil {
return err
}
err = config.API.WaitForState(ss, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
}
if d.HasChange("rules") {
oldR, newR := d.GetChange("rules")
oldValues := oldR.([]interface{})
newValues := newR.([]interface{})
if len(oldValues) > len(newValues) {
diff := difference(oldValues, newValues)
for _, old := range diff {
o := old.(map[string]interface{})
if o["id"] != nil {
old_id := o["id"].(string)
fw, err := config.API.DeleteLoadBalancerRule(d.Id(), old_id)
if err != nil {
return err
}
err = config.API.WaitForState(fw, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
}
}
} else {
var rules []oneandone.LoadBalancerRule
log.Println("[DEBUG] new values:", newValues)
for _, raw := range newValues {
rl := raw.(map[string]interface{})
log.Println("[DEBUG] rl:", rl)
if rl["id"].(string) == "" {
rule := oneandone.LoadBalancerRule{
Protocol: rl["protocol"].(string),
}
rule.PortServer = uint16(rl["port_server"].(int))
rule.PortBalancer = uint16(rl["port_balancer"].(int))
rule.Source = rl["source_ip"].(string)
log.Println("[DEBUG] adding to list", rl["protocol"], rl["source_ip"], rl["port_balancer"], rl["port_server"])
log.Println("[DEBUG] adding to list", rule)
rules = append(rules, rule)
}
}
log.Println("[DEBUG] new rules:", rules)
if len(rules) > 0 {
fw, err := config.API.AddLoadBalancerRules(d.Id(), rules)
if err != nil {
return err
}
err = config.API.WaitForState(fw, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
}
}
}
return resourceOneandOneLoadbalancerRead(d, meta)
}
func resourceOneandOneLoadbalancerRead(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
ss, err := config.API.GetLoadBalancer(d.Id())
if err != nil {
if strings.Contains(err.Error(), "404") {
d.SetId("")
return nil
}
return err
}
d.Set("name", ss.Name)
d.Set("description", ss.Description)
d.Set("datacenter", ss.Datacenter.CountryCode)
d.Set("method", ss.Method)
d.Set("persistence", ss.Persistence)
d.Set("persistence_time", ss.PersistenceTime)
d.Set("health_check_test", ss.HealthCheckTest)
d.Set("health_check_interval", ss.HealthCheckInterval)
d.Set("rules", getLoadbalancerRules(ss.Rules))
return nil
}
func getLoadbalancerRules(rules []oneandone.LoadBalancerRule) []map[string]interface{} {
raw := make([]map[string]interface{}, 0, len(rules))
for _, rule := range rules {
toadd := map[string]interface{}{
"id": rule.Id,
"port_balancer": rule.PortBalancer,
"port_server": rule.PortServer,
"protocol": rule.Protocol,
"source_ip": rule.Source,
}
raw = append(raw, toadd)
}
return raw
}
func resourceOneandOneLoadbalancerDelete(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
lb, err := config.API.DeleteLoadBalancer(d.Id())
if err != nil {
return err
}
err = config.API.WaitUntilDeleted(lb)
if err != nil {
return err
}
return nil
}
func validateMethod(v interface{}, k string) (ws []string, errors []error) {
value := v.(string)
if value != "ROUND_ROBIN" && value != "LEAST_CONNECTIONS" {
errors = append(errors, fmt.Errorf("%q value should be either 'ROUND_ROBIN' or 'LEAST_CONNECTIONS', not %q", k, value))
}
return
}


@@ -0,0 +1,156 @@
package oneandone
import (
"fmt"
"testing"
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"os"
"time"
)
func TestAccOneandoneLoadbalancer_Basic(t *testing.T) {
var lb oneandone.LoadBalancer
name := "test_loadbalancer"
name_updated := "test_loadbalancer_renamed"
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Providers: testAccProviders,
CheckDestroy: testAccCheckDOneandoneLoadbalancerDestroyCheck,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandoneLoadbalancer_basic, name),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandoneLoadbalancerExists("oneandone_loadbalancer.lb", &lb),
testAccCheckOneandoneLoadbalancerAttributes("oneandone_loadbalancer.lb", name),
resource.TestCheckResourceAttr("oneandone_loadbalancer.lb", "name", name),
),
},
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandoneLoadbalancer_update, name_updated),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandoneLoadbalancerExists("oneandone_loadbalancer.lb", &lb),
testAccCheckOneandoneLoadbalancerAttributes("oneandone_loadbalancer.lb", name_updated),
resource.TestCheckResourceAttr("oneandone_loadbalancer.lb", "name", name_updated),
),
},
},
})
}
func testAccCheckDOneandoneLoadbalancerDestroyCheck(s *terraform.State) error {
for _, rs := range s.RootModule().Resources {
if rs.Type != "oneandone_loadbalancer" {
continue
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
_, err := api.GetLoadBalancer(rs.Primary.ID)
if err == nil {
return fmt.Errorf("load balancer still exists: %s", rs.Primary.ID)
}
}
return nil
}
func testAccCheckOneandoneLoadbalancerAttributes(n string, expectedName string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("not found: %s", n)
}
if rs.Primary.Attributes["name"] != expectedName {
return fmt.Errorf("bad name: expected %s, found %s", expectedName, rs.Primary.Attributes["name"])
}
return nil
}
}
func testAccCheckOneandoneLoadbalancerExists(n string, fw_p *oneandone.LoadBalancer) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No Record ID is set")
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
found_fw, err := api.GetLoadBalancer(rs.Primary.ID)
if err != nil {
return fmt.Errorf("error occurred while fetching load balancer %s: %s", rs.Primary.ID, err.Error())
}
if found_fw.Id != rs.Primary.ID {
return fmt.Errorf("Record not found")
}
*fw_p = *found_fw
return nil
}
}
const testAccCheckOneandoneLoadbalancer_basic = `
resource "oneandone_loadbalancer" "lb" {
name = "%s"
method = "ROUND_ROBIN"
persistence = true
persistence_time = 60
health_check_test = "TCP"
health_check_interval = 300
datacenter = "US"
rules = [
{
protocol = "TCP"
port_balancer = 8080
port_server = 8089
source_ip = "0.0.0.0"
},
{
protocol = "TCP"
port_balancer = 9090
port_server = 9099
source_ip = "0.0.0.0"
}
]
}`
const testAccCheckOneandoneLoadbalancer_update = `
resource "oneandone_loadbalancer" "lb" {
name = "%s"
method = "ROUND_ROBIN"
persistence = true
persistence_time = 60
health_check_test = "TCP"
health_check_interval = 300
datacenter = "US"
rules = [
{
protocol = "TCP"
port_balancer = 8080
port_server = 8089
source_ip = "0.0.0.0"
}
]
}`


@@ -0,0 +1,706 @@
package oneandone
import (
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/schema"
"strings"
)
func resourceOneandOneMonitoringPolicy() *schema.Resource {
return &schema.Resource{
Create: resourceOneandOneMonitoringPolicyCreate,
Read: resourceOneandOneMonitoringPolicyRead,
Update: resourceOneandOneMonitoringPolicyUpdate,
Delete: resourceOneandOneMonitoringPolicyDelete,
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
},
"description": {
Type: schema.TypeString,
Optional: true,
},
"email": {
Type: schema.TypeString,
Optional: true,
},
"agent": {
Type: schema.TypeBool,
Required: true,
},
"thresholds": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"cpu": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"warning": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"value": {
Type: schema.TypeInt,
Required: true,
},
"alert": {
Type: schema.TypeBool,
Required: true,
},
},
},
Required: true,
},
"critical": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"value": {
Type: schema.TypeInt,
Required: true,
},
"alert": {
Type: schema.TypeBool,
Required: true,
},
},
},
Required: true,
},
},
},
Required: true,
},
"ram": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"warning": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"value": {
Type: schema.TypeInt,
Required: true,
},
"alert": {
Type: schema.TypeBool,
Required: true,
},
},
},
Required: true,
},
"critical": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"value": {
Type: schema.TypeInt,
Required: true,
},
"alert": {
Type: schema.TypeBool,
Required: true,
},
},
},
Required: true,
},
},
},
Required: true,
},
"disk": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"warning": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"value": {
Type: schema.TypeInt,
Required: true,
},
"alert": {
Type: schema.TypeBool,
Required: true,
},
},
},
Required: true,
},
"critical": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"value": {
Type: schema.TypeInt,
Required: true,
},
"alert": {
Type: schema.TypeBool,
Required: true,
},
},
},
Required: true,
},
},
},
Required: true,
},
"transfer": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"warning": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"value": {
Type: schema.TypeInt,
Required: true,
},
"alert": {
Type: schema.TypeBool,
Required: true,
},
},
},
Required: true,
},
"critical": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"value": {
Type: schema.TypeInt,
Required: true,
},
"alert": {
Type: schema.TypeBool,
Required: true,
},
},
},
Required: true,
},
},
},
Required: true,
},
"internal_ping": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"warning": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"value": {
Type: schema.TypeInt,
Required: true,
},
"alert": {
Type: schema.TypeBool,
Required: true,
},
},
},
Required: true,
},
"critical": {
Type: schema.TypeSet,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"value": {
Type: schema.TypeInt,
Required: true,
},
"alert": {
Type: schema.TypeBool,
Required: true,
},
},
},
Required: true,
},
},
},
Required: true,
},
},
},
Required: true,
},
"ports": {
Type: schema.TypeList,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"email_notification": {
Type: schema.TypeBool,
Required: true,
},
"port": {
Type: schema.TypeInt,
Required: true,
},
"protocol": {
Type: schema.TypeString,
Optional: true,
},
"alert_if": {
Type: schema.TypeString,
Optional: true,
},
"id": {
Type: schema.TypeString,
Computed: true,
},
},
},
Optional: true,
},
"processes": {
Type: schema.TypeList,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"email_notification": {
Type: schema.TypeBool,
Required: true,
},
"process": {
Type: schema.TypeString,
Required: true,
},
"alert_if": {
Type: schema.TypeString,
Optional: true,
},
"id": {
Type: schema.TypeString,
Computed: true,
},
},
},
Optional: true,
},
},
}
}
func resourceOneandOneMonitoringPolicyCreate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
mp_request := oneandone.MonitoringPolicy{
Name: d.Get("name").(string),
Agent: d.Get("agent").(bool),
Thresholds: getThresholds(d.Get("thresholds")),
}
if raw, ok := d.GetOk("ports"); ok {
mp_request.Ports = getPorts(raw)
}
if raw, ok := d.GetOk("processes"); ok {
mp_request.Processes = getProcesses(raw)
}
mp_id, mp, err := config.API.CreateMonitoringPolicy(&mp_request)
if err != nil {
return err
}
err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
d.SetId(mp_id)
return resourceOneandOneMonitoringPolicyRead(d, meta)
}
func resourceOneandOneMonitoringPolicyUpdate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
req := oneandone.MonitoringPolicy{}
if d.HasChange("name") {
_, n := d.GetChange("name")
req.Name = n.(string)
}
if d.HasChange("description") {
_, n := d.GetChange("description")
req.Description = n.(string)
}
if d.HasChange("email") {
_, n := d.GetChange("email")
req.Email = n.(string)
}
if d.HasChange("agent") {
_, n := d.GetChange("agent")
req.Agent = n.(bool)
}
if d.HasChange("thresholds") {
_, n := d.GetChange("thresholds")
req.Thresholds = getThresholds(n)
}
mp, err := config.API.UpdateMonitoringPolicy(d.Id(), &req)
if err != nil {
return err
}
err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
if d.HasChange("ports") {
o, n := d.GetChange("ports")
oldValues := o.([]interface{})
newValues := n.([]interface{})
if len(newValues) > len(oldValues) {
ports := getPorts(newValues)
newports := []oneandone.MonitoringPort{}
for _, p := range ports {
if p.Id == "" {
newports = append(newports, p)
}
}
mp, err := config.API.AddMonitoringPolicyPorts(d.Id(), newports)
if err != nil {
return err
}
err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
} else if len(oldValues) > len(newValues) {
diff := difference(oldValues, newValues)
ports := getPorts(diff)
for _, port := range ports {
if port.Id == "" {
continue
}
mp, err := config.API.DeleteMonitoringPolicyPort(d.Id(), port.Id)
if err != nil {
return err
}
err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
}
} else if len(oldValues) == len(newValues) {
ports := getPorts(newValues)
for _, port := range ports {
mp, err := config.API.ModifyMonitoringPolicyPort(d.Id(), port.Id, &port)
if err != nil {
return err
}
err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
}
}
}
if d.HasChange("processes") {
o, n := d.GetChange("processes")
oldValues := o.([]interface{})
newValues := n.([]interface{})
if len(newValues) > len(oldValues) {
processes := getProcesses(newValues)
newprocesses := []oneandone.MonitoringProcess{}
for _, p := range processes {
if p.Id == "" {
newprocesses = append(newprocesses, p)
}
}
mp, err := config.API.AddMonitoringPolicyProcesses(d.Id(), newprocesses)
if err != nil {
return err
}
err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
} else if len(oldValues) > len(newValues) {
diff := difference(oldValues, newValues)
processes := getProcesses(diff)
for _, process := range processes {
if process.Id == "" {
continue
}
mp, err := config.API.DeleteMonitoringPolicyProcess(d.Id(), process.Id)
if err != nil {
return err
}
err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
}
} else if len(oldValues) == len(newValues) {
processes := getProcesses(newValues)
for _, process := range processes {
mp, err := config.API.ModifyMonitoringPolicyProcess(d.Id(), process.Id, &process)
if err != nil {
return err
}
err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
}
}
}
return resourceOneandOneMonitoringPolicyRead(d, meta)
}
func resourceOneandOneMonitoringPolicyRead(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
mp, err := config.API.GetMonitoringPolicy(d.Id())
if err != nil {
if strings.Contains(err.Error(), "404") {
d.SetId("")
return nil
}
return err
}
if len(mp.Ports) > 0 {
pports := d.Get("ports").([]interface{})
for i, raw_ports := range pports {
port := raw_ports.(map[string]interface{})
port["id"] = mp.Ports[i].Id
}
d.Set("ports", pports)
}
if len(mp.Processes) > 0 {
pprocesses := d.Get("processes").([]interface{})
for i, raw_processes := range pprocesses {
process := raw_processes.(map[string]interface{})
process["id"] = mp.Processes[i].Id
}
d.Set("processes", pprocesses)
}
return nil
}
func resourceOneandOneMonitoringPolicyDelete(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
mp, err := config.API.DeleteMonitoringPolicy(d.Id())
if err != nil {
return err
}
err = config.API.WaitUntilDeleted(mp)
if err != nil {
return err
}
return nil
}
func getThresholds(d interface{}) *oneandone.MonitoringThreshold {
raw_thresholds := d.(*schema.Set).List()
toReturn := &oneandone.MonitoringThreshold{}
for _, thresholds := range raw_thresholds {
th_set := thresholds.(map[string]interface{})
//CPU
cpu_raw := th_set["cpu"].(*schema.Set)
toReturn.Cpu = &oneandone.MonitoringLevel{}
for _, c := range cpu_raw.List() {
int_k := c.(map[string]interface{})
for _, w := range int_k["warning"].(*schema.Set).List() {
toReturn.Cpu.Warning = &oneandone.MonitoringValue{
Value: w.(map[string]interface{})["value"].(int),
Alert: w.(map[string]interface{})["alert"].(bool),
}
}
for _, c := range int_k["critical"].(*schema.Set).List() {
toReturn.Cpu.Critical = &oneandone.MonitoringValue{
Value: c.(map[string]interface{})["value"].(int),
Alert: c.(map[string]interface{})["alert"].(bool),
}
}
}
//RAM
ram_raw := th_set["ram"].(*schema.Set)
toReturn.Ram = &oneandone.MonitoringLevel{}
for _, c := range ram_raw.List() {
int_k := c.(map[string]interface{})
for _, w := range int_k["warning"].(*schema.Set).List() {
toReturn.Ram.Warning = &oneandone.MonitoringValue{
Value: w.(map[string]interface{})["value"].(int),
Alert: w.(map[string]interface{})["alert"].(bool),
}
}
for _, c := range int_k["critical"].(*schema.Set).List() {
toReturn.Ram.Critical = &oneandone.MonitoringValue{
Value: c.(map[string]interface{})["value"].(int),
Alert: c.(map[string]interface{})["alert"].(bool),
}
}
}
//DISK
disk_raw := th_set["disk"].(*schema.Set)
toReturn.Disk = &oneandone.MonitoringLevel{}
for _, c := range disk_raw.List() {
int_k := c.(map[string]interface{})
for _, w := range int_k["warning"].(*schema.Set).List() {
toReturn.Disk.Warning = &oneandone.MonitoringValue{
Value: w.(map[string]interface{})["value"].(int),
Alert: w.(map[string]interface{})["alert"].(bool),
}
}
for _, c := range int_k["critical"].(*schema.Set).List() {
toReturn.Disk.Critical = &oneandone.MonitoringValue{
Value: c.(map[string]interface{})["value"].(int),
Alert: c.(map[string]interface{})["alert"].(bool),
}
}
}
//TRANSFER
transfer_raw := th_set["transfer"].(*schema.Set)
toReturn.Transfer = &oneandone.MonitoringLevel{}
for _, c := range transfer_raw.List() {
int_k := c.(map[string]interface{})
for _, w := range int_k["warning"].(*schema.Set).List() {
toReturn.Transfer.Warning = &oneandone.MonitoringValue{
Value: w.(map[string]interface{})["value"].(int),
Alert: w.(map[string]interface{})["alert"].(bool),
}
}
for _, c := range int_k["critical"].(*schema.Set).List() {
toReturn.Transfer.Critical = &oneandone.MonitoringValue{
Value: c.(map[string]interface{})["value"].(int),
Alert: c.(map[string]interface{})["alert"].(bool),
}
}
}
//internal ping
ping_raw := th_set["internal_ping"].(*schema.Set)
toReturn.InternalPing = &oneandone.MonitoringLevel{}
for _, c := range ping_raw.List() {
int_k := c.(map[string]interface{})
for _, w := range int_k["warning"].(*schema.Set).List() {
toReturn.InternalPing.Warning = &oneandone.MonitoringValue{
Value: w.(map[string]interface{})["value"].(int),
Alert: w.(map[string]interface{})["alert"].(bool),
}
}
for _, c := range int_k["critical"].(*schema.Set).List() {
toReturn.InternalPing.Critical = &oneandone.MonitoringValue{
Value: c.(map[string]interface{})["value"].(int),
Alert: c.(map[string]interface{})["alert"].(bool),
}
}
}
}
return toReturn
}
func getProcesses(d interface{}) []oneandone.MonitoringProcess {
toReturn := []oneandone.MonitoringProcess{}
for _, raw := range d.([]interface{}) {
process := raw.(map[string]interface{})
m_process := oneandone.MonitoringProcess{
EmailNotification: process["email_notification"].(bool),
}
if process["id"] != nil {
m_process.Id = process["id"].(string)
}
if process["process"] != nil {
m_process.Process = process["process"].(string)
}
if process["alert_if"] != nil {
m_process.AlertIf = process["alert_if"].(string)
}
toReturn = append(toReturn, m_process)
}
return toReturn
}
func getPorts(d interface{}) []oneandone.MonitoringPort {
toReturn := []oneandone.MonitoringPort{}
for _, raw := range d.([]interface{}) {
port := raw.(map[string]interface{})
m_port := oneandone.MonitoringPort{
EmailNotification: port["email_notification"].(bool),
Port: port["port"].(int),
}
if port["id"] != nil {
m_port.Id = port["id"].(string)
}
if port["protocol"] != nil {
m_port.Protocol = port["protocol"].(string)
}
if port["alert_if"] != nil {
m_port.AlertIf = port["alert_if"].(string)
}
toReturn = append(toReturn, m_port)
}
return toReturn
}


@@ -0,0 +1,212 @@
package oneandone
import (
"fmt"
"testing"
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"os"
"time"
)
func TestAccOneandoneMonitoringPolicy_Basic(t *testing.T) {
var mp oneandone.MonitoringPolicy
name := "test"
name_updated := "test1"
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Providers: testAccProviders,
CheckDestroy: testAccCheckDOneandoneMonitoringPolicyDestroyCheck,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandoneMonitoringPolicy_basic, name),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandoneMonitoringPolicyExists("oneandone_monitoring_policy.mp", &mp),
testAccCheckOneandoneMonitoringPolicyAttributes("oneandone_monitoring_policy.mp", name),
resource.TestCheckResourceAttr("oneandone_monitoring_policy.mp", "name", name),
),
},
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandoneMonitoringPolicy_basic, name_updated),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandoneMonitoringPolicyExists("oneandone_monitoring_policy.mp", &mp),
testAccCheckOneandoneMonitoringPolicyAttributes("oneandone_monitoring_policy.mp", name_updated),
resource.TestCheckResourceAttr("oneandone_monitoring_policy.mp", "name", name_updated),
),
},
},
})
}
func testAccCheckDOneandoneMonitoringPolicyDestroyCheck(s *terraform.State) error {
for _, rs := range s.RootModule().Resources {
if rs.Type != "oneandone_monitoring_policy" {
continue
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
_, err := api.GetMonitoringPolicy(rs.Primary.ID)
if err == nil {
return fmt.Errorf("monitoring policy still exists: %s", rs.Primary.ID)
}
}
return nil
}
func testAccCheckOneandoneMonitoringPolicyAttributes(n string, expectedName string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("not found: %s", n)
}
if rs.Primary.Attributes["name"] != expectedName {
return fmt.Errorf("bad name: expected %s, found %s", expectedName, rs.Primary.Attributes["name"])
}
return nil
}
}
func testAccCheckOneandoneMonitoringPolicyExists(n string, fw_p *oneandone.MonitoringPolicy) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No Record ID is set")
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
found_fw, err := api.GetMonitoringPolicy(rs.Primary.ID)
if err != nil {
return fmt.Errorf("error occurred while fetching monitoring policy %s: %s", rs.Primary.ID, err.Error())
}
if found_fw.Id != rs.Primary.ID {
return fmt.Errorf("Record not found")
}
*fw_p = *found_fw
return nil
}
}
const testAccCheckOneandoneMonitoringPolicy_basic = `
resource "oneandone_monitoring_policy" "mp" {
name = "%s"
agent = true
email = "email@address.com"
thresholds = {
cpu = {
warning = {
value = 50,
alert = false
}
critical = {
value = 66,
alert = false
}
}
ram = {
warning = {
value = 70,
alert = true
}
critical = {
value = 80,
alert = true
}
},
disk = {
warning = {
value = 84,
alert = true
}
critical = {
value = 94,
alert = true
}
},
transfer = {
warning = {
value = 1000,
alert = true
}
critical = {
value = 2000,
alert = true
}
},
internal_ping = {
warning = {
value = 3000,
alert = true
}
critical = {
value = 4000,
alert = true
}
}
}
ports = [
{
email_notification = true
port = 443
protocol = "TCP"
alert_if = "NOT_RESPONDING"
},
{
email_notification = false
port = 80
protocol = "TCP"
alert_if = "NOT_RESPONDING"
},
{
email_notification = true
port = 21
protocol = "TCP"
alert_if = "NOT_RESPONDING"
}
]
processes = [
{
email_notification = false
process = "httpdeamon"
alert_if = "RUNNING"
},
{
process = "iexplorer",
alert_if = "NOT_RUNNING"
email_notification = true
}]
}`


@@ -0,0 +1,291 @@
package oneandone
import (
"fmt"
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/schema"
"strings"
)
func resourceOneandOnePrivateNetwork() *schema.Resource {
return &schema.Resource{
Create: resourceOneandOnePrivateNetworkCreate,
Read: resourceOneandOnePrivateNetworkRead,
Update: resourceOneandOnePrivateNetworkUpdate,
Delete: resourceOneandOnePrivateNetworkDelete,
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
},
"description": {
Type: schema.TypeString,
Optional: true,
},
"datacenter": {
Type: schema.TypeString,
Optional: true,
},
"network_address": {
Type: schema.TypeString,
Optional: true,
},
"subnet_mask": {
Type: schema.TypeString,
Optional: true,
},
"server_ids": {
Type: schema.TypeSet,
Elem: &schema.Schema{Type: schema.TypeString},
Optional: true,
},
},
}
}
func resourceOneandOnePrivateNetworkCreate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
req := oneandone.PrivateNetworkRequest{
Name: d.Get("name").(string),
}
if raw, ok := d.GetOk("description"); ok {
req.Description = raw.(string)
}
if raw, ok := d.GetOk("network_address"); ok {
req.NetworkAddress = raw.(string)
}
if raw, ok := d.GetOk("subnet_mask"); ok {
req.SubnetMask = raw.(string)
}
if raw, ok := d.GetOk("datacenter"); ok {
dcs, err := config.API.ListDatacenters()
if err != nil {
return fmt.Errorf("an error occurred while fetching the list of datacenters: %s", err)
}
datacenter := raw.(string)
for _, dc := range dcs {
if strings.EqualFold(dc.CountryCode, datacenter) {
req.DatacenterId = dc.Id
break
}
}
}
prn_id, prn, err := config.API.CreatePrivateNetwork(&req)
if err != nil {
return err
}
err = config.API.WaitForState(prn, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
d.SetId(prn_id)
var ids []string
if raw, ok := d.GetOk("server_ids"); ok {
rawIps := raw.(*schema.Set).List()
for _, raw := range rawIps {
ids = append(ids, raw.(string))
server, err := config.API.ShutdownServer(raw.(string), false)
if err != nil {
return err
}
err = config.API.WaitForState(server, "POWERED_OFF", 10, config.Retries)
if err != nil {
return err
}
}
}
prn, err = config.API.AttachPrivateNetworkServers(d.Id(), ids)
if err != nil {
return err
}
err = config.API.WaitForState(prn, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
for _, id := range ids {
server, err := config.API.StartServer(id)
if err != nil {
return err
}
err = config.API.WaitForState(server, "POWERED_ON", 10, config.Retries)
if err != nil {
return err
}
}
return resourceOneandOnePrivateNetworkRead(d, meta)
}
func resourceOneandOnePrivateNetworkUpdate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
if d.HasChange("name") || d.HasChange("description") || d.HasChange("network_address") || d.HasChange("subnet_mask") {
pnset := oneandone.PrivateNetworkRequest{}
pnset.Name = d.Get("name").(string)
pnset.Description = d.Get("description").(string)
pnset.NetworkAddress = d.Get("network_address").(string)
pnset.SubnetMask = d.Get("subnet_mask").(string)
prn, err := config.API.UpdatePrivateNetwork(d.Id(), &pnset)
if err != nil {
return err
}
err = config.API.WaitForState(prn, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
}
if d.HasChange("server_ids") {
o, n := d.GetChange("server_ids")
newValues := n.(*schema.Set).List()
oldValues := o.(*schema.Set).List()
var ids []string
for _, v := range oldValues {
ids = append(ids, v.(string))
}
for _, id := range ids {
server, err := config.API.ShutdownServer(id, false)
if err != nil {
return err
}
err = config.API.WaitForState(server, "POWERED_OFF", 10, config.Retries)
if err != nil {
return err
}
_, err = config.API.RemoveServerPrivateNetwork(id, d.Id())
if err != nil {
return err
}
prn, _ := config.API.GetPrivateNetwork(d.Id())
err = config.API.WaitForState(prn, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
}
var newids []string
for _, newV := range newValues {
newids = append(newids, newV.(string))
}
pn, err := config.API.AttachPrivateNetworkServers(d.Id(), newids)
if err != nil {
return err
}
err = config.API.WaitForState(pn, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
for _, id := range newids {
server, err := config.API.StartServer(id)
if err != nil {
return err
}
err = config.API.WaitForState(server, "POWERED_ON", 10, config.Retries)
if err != nil {
return err
}
}
}
return resourceOneandOnePrivateNetworkRead(d, meta)
}
func resourceOneandOnePrivateNetworkRead(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
pn, err := config.API.GetPrivateNetwork(d.Id())
if err != nil {
if strings.Contains(err.Error(), "404") {
d.SetId("")
return nil
}
return err
}
d.Set("name", pn.Name)
d.Set("description", pn.Description)
d.Set("network_address", pn.NetworkAddress)
d.Set("subnet_mask", pn.SubnetMask)
d.Set("datacenter", pn.Datacenter.CountryCode)
var toAdd []string
for _, s := range pn.Servers {
toAdd = append(toAdd, s.Id)
}
d.Set("server_ids", toAdd)
return nil
}
func resourceOneandOnePrivateNetworkDelete(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
pn, err := config.API.GetPrivateNetwork(d.Id())
if err != nil {
return err
}
for _, server := range pn.Servers {
srv, err := config.API.ShutdownServer(server.Id, false)
if err != nil {
return err
}
err = config.API.WaitForState(srv, "POWERED_OFF", 10, config.Retries)
if err != nil {
return err
}
}
servers := pn.Servers
pn, err = config.API.DeletePrivateNetwork(d.Id())
if err != nil {
return err
}
err = config.API.WaitUntilDeleted(pn)
if err != nil {
return err
}
// Power the servers that were shut down back on now that the network is deleted.
for _, server := range servers {
srv, err := config.API.StartServer(server.Id)
if err != nil {
return err
}
err = config.API.WaitForState(srv, "POWERED_ON", 10, config.Retries)
if err != nil {
return err
}
}
return nil
}


@@ -0,0 +1,160 @@
package oneandone
import (
"fmt"
"testing"
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"os"
"time"
)
func TestAccOneandonePrivateNetwork_Basic(t *testing.T) {
var net oneandone.PrivateNetwork
name := "test"
name_updated := "test1"
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Providers: testAccProviders,
CheckDestroy: testAccCheckOneandonePrivateNetworkDestroyCheck,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandonePrivateNetwork_basic, name),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandonePrivateNetworkExists("oneandone_private_network.pn", &net),
testAccCheckOneandonePrivateNetworkAttributes("oneandone_private_network.pn", name),
resource.TestCheckResourceAttr("oneandone_private_network.pn", "name", name),
),
},
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandonePrivateNetwork_basic, name_updated),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandonePrivateNetworkExists("oneandone_private_network.pn", &net),
testAccCheckOneandonePrivateNetworkAttributes("oneandone_private_network.pn", name_updated),
resource.TestCheckResourceAttr("oneandone_private_network.pn", "name", name_updated),
),
},
},
})
}
func testAccCheckOneandonePrivateNetworkDestroyCheck(s *terraform.State) error {
for _, rs := range s.RootModule().Resources {
if rs.Type != "oneandone_private_network" {
continue
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
_, err := api.GetPrivateNetwork(rs.Primary.ID)
if err == nil {
return fmt.Errorf("private network still exists: %s", rs.Primary.ID)
}
}
return nil
}
func testAccCheckOneandonePrivateNetworkAttributes(n string, expectedName string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("not found: %s", n)
}
if rs.Primary.Attributes["name"] != expectedName {
return fmt.Errorf("bad name: expected %s, found %s", expectedName, rs.Primary.Attributes["name"])
}
return nil
}
}
func testAccCheckOneandonePrivateNetworkExists(n string, server *oneandone.PrivateNetwork) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No Record ID is set")
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
found_server, err := api.GetPrivateNetwork(rs.Primary.ID)
if err != nil {
return fmt.Errorf("error occurred while fetching private network %s: %s", rs.Primary.ID, err.Error())
}
if found_server.Id != rs.Primary.ID {
return fmt.Errorf("Record not found")
}
*server = *found_server
return nil
}
}
const testAccCheckOneandonePrivateNetwork_basic = `
resource "oneandone_server" "server1" {
name = "server_private_net_01"
description = "ttt"
image = "CoreOS_Stable_64std"
datacenter = "US"
vcores = 1
cores_per_processor = 1
ram = 2
password = "Kv40kd8PQb"
hdds = [
{
disk_size = 60
is_main = true
}
]
}
resource "oneandone_server" "server2" {
name = "server_private_net_02"
description = "ttt"
image = "CoreOS_Stable_64std"
datacenter = "US"
vcores = 1
cores_per_processor = 1
ram = 2
password = "${oneandone_server.server1.password}"
hdds = [
{
disk_size = 60
is_main = true
}
]
}
resource "oneandone_private_network" "pn" {
name = "%s",
description = "new private net"
datacenter = "US"
network_address = "192.168.7.0"
subnet_mask = "255.255.255.0"
server_ids = [
"${oneandone_server.server1.id}",
"${oneandone_server.server2.id}"
]
}
`


@@ -0,0 +1,133 @@
package oneandone
import (
"fmt"
"github.com/hashicorp/terraform/helper/schema"
"strings"
)
func resourceOneandOnePublicIp() *schema.Resource {
return &schema.Resource{
Create: resourceOneandOnePublicIpCreate,
Read: resourceOneandOnePublicIpRead,
Update: resourceOneandOnePublicIpUpdate,
Delete: resourceOneandOnePublicIpDelete,
Schema: map[string]*schema.Schema{
"ip_type": { //IPV4 or IPV6
Type: schema.TypeString,
Required: true,
},
"reverse_dns": {
Type: schema.TypeString,
Optional: true,
},
"datacenter": {
Type: schema.TypeString,
Optional: true,
},
"ip_address": {
Type: schema.TypeString,
Computed: true,
},
},
}
}
func resourceOneandOnePublicIpCreate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
var reverse_dns string
var datacenter_id string
if raw, ok := d.GetOk("reverse_dns"); ok {
reverse_dns = raw.(string)
}
if raw, ok := d.GetOk("datacenter"); ok {
dcs, err := config.API.ListDatacenters()
if err != nil {
return fmt.Errorf("an error occurred while fetching the list of datacenters: %s", err)
}
datacenter := raw.(string)
for _, dc := range dcs {
if strings.EqualFold(dc.CountryCode, datacenter) {
datacenter_id = dc.Id
break
}
}
}
ip_id, ip, err := config.API.CreatePublicIp(d.Get("ip_type").(string), reverse_dns, datacenter_id)
if err != nil {
return err
}
err = config.API.WaitForState(ip, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
d.SetId(ip_id)
return resourceOneandOnePublicIpRead(d, meta)
}
func resourceOneandOnePublicIpRead(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
ip, err := config.API.GetPublicIp(d.Id())
if err != nil {
if strings.Contains(err.Error(), "404") {
d.SetId("")
return nil
}
return err
}
d.Set("ip_address", ip.IpAddress)
d.Set("reverse_dns", ip.ReverseDns)
d.Set("datacenter", ip.Datacenter.CountryCode)
d.Set("ip_type", ip.Type)
return nil
}
func resourceOneandOnePublicIpUpdate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
if d.HasChange("reverse_dns") {
_, n := d.GetChange("reverse_dns")
ip, err := config.API.UpdatePublicIp(d.Id(), n.(string))
if err != nil {
return err
}
err = config.API.WaitForState(ip, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
}
return resourceOneandOnePublicIpRead(d, meta)
}
func resourceOneandOnePublicIpDelete(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
ip, err := config.API.DeletePublicIp(d.Id())
if err != nil {
return err
}
err = config.API.WaitUntilDeleted(ip)
if err != nil {
return err
}
return nil
}

View File

@ -0,0 +1,119 @@
package oneandone
import (
"fmt"
"testing"
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"os"
"time"
)
func TestAccOneandonePublicIp_Basic(t *testing.T) {
var public_ip oneandone.PublicIp
reverse_dns := "example.de"
reverse_dns_updated := "example.ba"
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Providers: testAccProviders,
CheckDestroy: testAccCheckDOneandonePublicIpDestroyCheck,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandonePublicIp_basic, reverse_dns),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandonePublicIpExists("oneandone_public_ip.ip", &public_ip),
testAccCheckOneandonePublicIpAttributes("oneandone_public_ip.ip", reverse_dns),
resource.TestCheckResourceAttr("oneandone_public_ip.ip", "reverse_dns", reverse_dns),
),
},
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandonePublicIp_basic, reverse_dns_updated),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandonePublicIpExists("oneandone_public_ip.ip", &public_ip),
testAccCheckOneandonePublicIpAttributes("oneandone_public_ip.ip", reverse_dns_updated),
resource.TestCheckResourceAttr("oneandone_public_ip.ip", "reverse_dns", reverse_dns_updated),
),
},
},
})
}
func testAccCheckDOneandonePublicIpDestroyCheck(s *terraform.State) error {
for _, rs := range s.RootModule().Resources {
if rs.Type != "oneandone_public_ip" {
continue
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
_, err := api.GetPublicIp(rs.Primary.ID)
if err == nil {
return fmt.Errorf("Public IP still exists %s %s", rs.Primary.ID, err.Error())
}
}
return nil
}
func testAccCheckOneandonePublicIpAttributes(n string, reverse_dns string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.Attributes["reverse_dns"] != reverse_dns {
return fmt.Errorf("Bad name: expected %s : found %s ", reverse_dns, rs.Primary.Attributes["name"])
}
return nil
}
}
func testAccCheckOneandonePublicIpExists(n string, public_ip *oneandone.PublicIp) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No Record ID is set")
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
found_public_ip, err := api.GetPublicIp(rs.Primary.ID)
if err != nil {
return fmt.Errorf("Error occured while fetching public IP: %s", rs.Primary.ID)
}
if found_public_ip.Id != rs.Primary.ID {
return fmt.Errorf("Record not found")
}
*public_ip = *found_public_ip
return nil
}
}
const testAccCheckOneandonePublicIp_basic = `
resource "oneandone_public_ip" "ip" {
"ip_type" = "IPV4"
"reverse_dns" = "%s"
"datacenter" = "GB"
}`

View File

@ -0,0 +1,562 @@
package oneandone
import (
"crypto/x509"
"encoding/pem"
"fmt"
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/schema"
"golang.org/x/crypto/ssh"
"io/ioutil"
"log"
"strings"
"errors"
)
func resourceOneandOneServer() *schema.Resource {
return &schema.Resource{
Create: resourceOneandOneServerCreate,
Read: resourceOneandOneServerRead,
Update: resourceOneandOneServerUpdate,
Delete: resourceOneandOneServerDelete,
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
},
"description": {
Type: schema.TypeString,
Optional: true,
},
"image": {
Type: schema.TypeString,
Required: true,
},
"vcores": {
Type: schema.TypeInt,
Required: true,
},
"cores_per_processor": {
Type: schema.TypeInt,
Required: true,
},
"ram": {
Type: schema.TypeFloat,
Required: true,
},
"ssh_key_path": {
Type: schema.TypeString,
Optional: true,
},
"password": {
Type: schema.TypeString,
Optional: true,
Sensitive: true,
},
"datacenter": {
Type: schema.TypeString,
Optional: true,
},
"ip": {
Type: schema.TypeString,
Optional: true,
},
"ips": {
Type: schema.TypeList,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"id": {
Type: schema.TypeString,
Computed: true,
},
"ip": {
Type: schema.TypeString,
Computed: true,
},
"firewall_policy_id": {
Type: schema.TypeString,
Optional: true,
},
},
},
Computed: true,
},
"hdds": {
Type: schema.TypeList,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"id": {
Type: schema.TypeString,
Computed: true,
},
"disk_size": {
Type: schema.TypeInt,
Required: true,
},
"is_main": {
Type: schema.TypeBool,
Optional: true,
},
},
},
Required: true,
},
"firewall_policy_id": {
Type: schema.TypeString,
Optional: true,
},
"monitoring_policy_id": {
Type: schema.TypeString,
Optional: true,
},
"loadbalancer_id": {
Type: schema.TypeString,
Optional: true,
},
},
}
}
func resourceOneandOneServerCreate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
saps, err := config.API.ListServerAppliances()
if err != nil {
return err
}
var sa oneandone.ServerAppliance
for _, a := range saps {
if a.Type == "IMAGE" && strings.Contains(strings.ToLower(a.Name), strings.ToLower(d.Get("image").(string))) {
sa = a
break
}
}
if sa.Id == "" {
return fmt.Errorf("No server appliance found matching image '%s'", d.Get("image").(string))
}
var hdds []oneandone.Hdd
if raw, ok := d.GetOk("hdds"); ok {
rawhdds := raw.([]interface{})
var istheremain bool
for _, raw := range rawhdds {
hd := raw.(map[string]interface{})
hdd := oneandone.Hdd{
Size: hd["disk_size"].(int),
IsMain: hd["is_main"].(bool),
}
if hdd.IsMain {
if hdd.Size < sa.MinHddSize {
return fmt.Errorf("Minimum required disk size is %d", sa.MinHddSize)
}
istheremain = true
}
hdds = append(hdds, hdd)
}
if !istheremain {
return fmt.Errorf("At least one HDD has to be %s", "`is_main`")
}
}
req := oneandone.ServerRequest{
Name: d.Get("name").(string),
Description: d.Get("description").(string),
ApplianceId: sa.Id,
PowerOn: true,
Hardware: oneandone.Hardware{
Vcores: d.Get("vcores").(int),
CoresPerProcessor: d.Get("cores_per_processor").(int),
Ram: float32(d.Get("ram").(float64)),
Hdds: hdds,
},
}
if raw, ok := d.GetOk("ip"); ok {
new_ip := raw.(string)
ips, err := config.API.ListPublicIps()
if err != nil {
return err
}
for _, ip := range ips {
if ip.IpAddress == new_ip {
req.IpId = ip.Id
break
}
}
log.Println("[DEBUG] req.IP", req.IpId)
}
if raw, ok := d.GetOk("datacenter"); ok {
dcs, err := config.API.ListDatacenters()
if err != nil {
return fmt.Errorf("An error occured while fetching list of datacenters %s", err)
}
dcName := raw.(string)
for _, dc := range dcs {
if strings.EqualFold(dc.CountryCode, dcName) {
req.DatacenterId = dc.Id
break
}
}
}
if fwp_id, ok := d.GetOk("firewall_policy_id"); ok {
req.FirewallPolicyId = fwp_id.(string)
}
if mp_id, ok := d.GetOk("monitoring_policy_id"); ok {
req.MonitoringPolicyId = mp_id.(string)
}
if mp_id, ok := d.GetOk("loadbalancer_id"); ok {
req.LoadBalancerId = mp_id.(string)
}
var privateKey string
if raw, ok := d.GetOk("ssh_key_path"); ok {
rawpath := raw.(string)
priv, publicKey, err := getSshKey(rawpath)
privateKey = priv
if err != nil {
return err
}
req.SSHKey = publicKey
}
var password string
if raw, ok := d.GetOk("password"); ok {
req.Password = raw.(string)
password = req.Password
}
server_id, server, err := config.API.CreateServer(&req)
if err != nil {
return err
}
err = config.API.WaitForState(server, "POWERED_ON", 10, config.Retries)
if err != nil {
return err
}
d.SetId(server_id)
server, err = config.API.GetServer(d.Id())
if err != nil {
return err
}
if password == "" {
password = server.FirstPassword
}
d.SetConnInfo(map[string]string{
"type": "ssh",
"host": server.Ips[0].Ip,
"password": password,
"private_key": privateKey,
})
return resourceOneandOneServerRead(d, meta)
}
func resourceOneandOneServerRead(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
server, err := config.API.GetServer(d.Id())
if err != nil {
if strings.Contains(err.Error(), "404") {
d.SetId("")
return nil
}
return err
}
d.Set("name", server.Name)
d.Set("datacenter", server.Datacenter.CountryCode)
d.Set("hdds", readHdds(server.Hardware))
d.Set("ips", readIps(server.Ips))
if len(server.FirstPassword) > 0 {
d.Set("password", server.FirstPassword)
}
return nil
}
func resourceOneandOneServerUpdate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
if d.HasChange("name") || d.HasChange("description") {
_, name := d.GetChange("name")
_, description := d.GetChange("description")
server, err := config.API.RenameServer(d.Id(), name.(string), description.(string))
if err != nil {
return err
}
err = config.API.WaitForState(server, "POWERED_ON", 10, config.Retries)
if err != nil {
return err
}
}
if d.HasChange("hdds") {
oldV, newV := d.GetChange("hdds")
newValues := newV.([]interface{})
oldValues := oldV.([]interface{})
if len(oldValues) > len(newValues) {
diff := difference(oldValues, newValues)
for _, old := range diff {
o := old.(map[string]interface{})
old_id := o["id"].(string)
server, err := config.API.DeleteServerHdd(d.Id(), old_id)
if err != nil {
return err
}
err = config.API.WaitForState(server, "POWERED_ON", 10, config.Retries)
if err != nil {
return err
}
}
} else {
for _, newHdd := range newValues {
n := newHdd.(map[string]interface{})
if n["id"].(string) == "" {
hdds := oneandone.ServerHdds{
Hdds: []oneandone.Hdd{
{
Size: n["disk_size"].(int),
IsMain: n["is_main"].(bool),
},
},
}
server, err := config.API.AddServerHdds(d.Id(), &hdds)
if err != nil {
return err
}
err = config.API.WaitForState(server, "POWERED_ON", 10, config.Retries)
if err != nil {
return err
}
} else {
id := n["id"].(string)
isMain := n["is_main"].(bool)
if id != "" && !isMain {
log.Println("[DEBUG] Resizing existing HDD")
config.API.ResizeServerHdd(d.Id(), id, n["disk_size"].(int))
}
}
}
}
}
if d.HasChange("monitoring_policy_id") {
o, n := d.GetChange("monitoring_policy_id")
if n == nil {
mp, err := config.API.RemoveMonitoringPolicyServer(o.(string), d.Id())
if err != nil {
return err
}
err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
} else {
mp, err := config.API.AttachMonitoringPolicyServers(n.(string), []string{d.Id()})
if err != nil {
return err
}
err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
}
}
if d.HasChange("loadbalancer_id") {
o, n := d.GetChange("loadbalancer_id")
server, err := config.API.GetServer(d.Id())
if err != nil {
return err
}
if n == nil || n.(string) == "" {
log.Println("[DEBUG] Removing")
log.Println("[DEBUG] IPS:", server.Ips)
for _, ip := range server.Ips {
mp, err := config.API.DeleteLoadBalancerServerIp(o.(string), ip.Id)
if err != nil {
return err
}
err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
}
} else {
log.Println("[DEBUG] Adding")
ip_ids := []string{}
for _, ip := range server.Ips {
ip_ids = append(ip_ids, ip.Id)
}
mp, err := config.API.AddLoadBalancerServerIps(n.(string), ip_ids)
if err != nil {
return err
}
err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
}
}
if d.HasChange("firewall_policy_id") {
server, err := config.API.GetServer(d.Id())
if err != nil {
return err
}
o, n := d.GetChange("firewall_policy_id")
if n == nil {
for _, ip := range server.Ips {
mp, err := config.API.DeleteFirewallPolicyServerIp(o.(string), ip.Id)
if err != nil {
return err
}
err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
}
} else {
ip_ids := []string{}
for _, ip := range server.Ips {
ip_ids = append(ip_ids, ip.Id)
}
mp, err := config.API.AddFirewallPolicyServerIps(n.(string), ip_ids)
if err != nil {
return err
}
err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries)
if err != nil {
return err
}
}
}
return resourceOneandOneServerRead(d, meta)
}
func resourceOneandOneServerDelete(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
_, ok := d.GetOk("ip")
server, err := config.API.DeleteServer(d.Id(), ok)
if err != nil {
return err
}
err = config.API.WaitUntilDeleted(server)
if err != nil {
log.Println("[DEBUG] ************ ERROR While waiting ************")
return err
}
return nil
}
func readHdds(hardware *oneandone.Hardware) []map[string]interface{} {
hdds := make([]map[string]interface{}, 0, len(hardware.Hdds))
for _, hd := range hardware.Hdds {
hdds = append(hdds, map[string]interface{}{
"id": hd.Id,
"disk_size": hd.Size,
"is_main": hd.IsMain,
})
}
return hdds
}
func readIps(ips []oneandone.ServerIp) []map[string]interface{} {
raw := make([]map[string]interface{}, 0, len(ips))
for _, ip := range ips {
toadd := map[string]interface{}{
"ip": ip.Ip,
"id": ip.Id,
}
if ip.Firewall != nil {
toadd["firewall_policy_id"] = ip.Firewall.Id
}
raw = append(raw, toadd)
}
return raw
}
func getSshKey(path string) (privatekey string, publickey string, err error) {
pemBytes, err := ioutil.ReadFile(path)
if err != nil {
return "", "", err
}
block, _ := pem.Decode(pemBytes)
if block == nil {
return "", "", errors.New("File " + path + " contains nothing")
}
priv, err := x509.ParsePKCS1PrivateKey(block.Bytes)
if err != nil {
return "", "", err
}
priv_blk := pem.Block{
Type: "RSA PRIVATE KEY",
Headers: nil,
Bytes: x509.MarshalPKCS1PrivateKey(priv),
}
pub, err := ssh.NewPublicKey(&priv.PublicKey)
if err != nil {
return "", "", err
}
publickey = string(ssh.MarshalAuthorizedKey(pub))
privatekey = string(pem.EncodeToMemory(&priv_blk))
return privatekey, publickey, nil
}

View File

@ -0,0 +1,130 @@
package oneandone
import (
"fmt"
"testing"
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"os"
"time"
)
func TestAccOneandoneServer_Basic(t *testing.T) {
var server oneandone.Server
name := "test_server"
name_updated := "test_server_renamed"
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Providers: testAccProviders,
CheckDestroy: testAccCheckDOneandoneServerDestroyCheck,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandoneServer_basic, name, name),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandoneServerExists("oneandone_server.server", &server),
testAccCheckOneandoneServerAttributes("oneandone_server.server", name),
resource.TestCheckResourceAttr("oneandone_server.server", "name", name),
),
},
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandoneServer_basic, name_updated, name_updated),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandoneServerExists("oneandone_server.server", &server),
testAccCheckOneandoneServerAttributes("oneandone_server.server", name_updated),
resource.TestCheckResourceAttr("oneandone_server.server", "name", name_updated),
),
},
},
})
}
func testAccCheckDOneandoneServerDestroyCheck(s *terraform.State) error {
for _, rs := range s.RootModule().Resources {
if rs.Type != "oneandone_server" {
continue
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
_, err := api.GetServer(rs.Primary.ID)
if err == nil {
return fmt.Errorf("Server still exists %s %s", rs.Primary.ID, err.Error())
}
}
return nil
}
func testAccCheckOneandoneServerAttributes(n string, expectedName string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.Attributes["name"] != expectedName {
return fmt.Errorf("Bad name: expected %s, found %s", expectedName, rs.Primary.Attributes["name"])
}
return nil
}
}
func testAccCheckOneandoneServerExists(n string, server *oneandone.Server) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No Record ID is set")
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
found_server, err := api.GetServer(rs.Primary.ID)
if err != nil {
return fmt.Errorf("Error occured while fetching Server: %s", rs.Primary.ID)
}
if found_server.Id != rs.Primary.ID {
return fmt.Errorf("Record not found")
}
*server = *found_server
return nil
}
}
const testAccCheckOneandoneServer_basic = `
resource "oneandone_server" "server" {
name = "%s"
description = "%s"
image = "ubuntu"
datacenter = "GB"
vcores = 1
cores_per_processor = 1
ram = 2
password = "Kv40kd8PQb"
hdds = [
{
disk_size = 20
is_main = true
}
]
}`

View File

@ -0,0 +1,217 @@
package oneandone
import (
"crypto/md5"
"encoding/base64"
"fmt"
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/schema"
"io"
"os"
fp "path/filepath"
"strings"
)
func resourceOneandOneVPN() *schema.Resource {
return &schema.Resource{
Create: resourceOneandOneVPNCreate,
Read: resourceOneandOneVPNRead,
Update: resourceOneandOneVPNUpdate,
Delete: resourceOneandOneVPNDelete,
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
},
"description": {
Type: schema.TypeString,
Optional: true,
},
"download_path": {
Type: schema.TypeString,
Computed: true,
},
"datacenter": {
Type: schema.TypeString,
Optional: true,
},
"file_name": {
Type: schema.TypeString,
Computed: true,
},
},
}
}
func resourceOneandOneVPNCreate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
var datacenter string
if raw, ok := d.GetOk("datacenter"); ok {
dcs, err := config.API.ListDatacenters()
if err != nil {
return fmt.Errorf("An error occured while fetching list of datacenters %s", err)
}
dcName := raw.(string)
for _, dc := range dcs {
if strings.EqualFold(dc.CountryCode, dcName) {
datacenter = dc.Id
break
}
}
}
var description string
if raw, ok := d.GetOk("description"); ok {
description = raw.(string)
}
vpn_id, vpn, err := config.API.CreateVPN(d.Get("name").(string), description, datacenter)
if err != nil {
return err
}
err = config.API.WaitForState(vpn, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
d.SetId(vpn_id)
return resourceOneandOneVPNRead(d, meta)
}
func resourceOneandOneVPNUpdate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
if d.HasChange("name") || d.HasChange("description") {
vpn, err := config.API.ModifyVPN(d.Id(), d.Get("name").(string), d.Get("description").(string))
if err != nil {
return err
}
err = config.API.WaitForState(vpn, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
}
return resourceOneandOneVPNRead(d, meta)
}
func resourceOneandOneVPNRead(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
vpn, err := config.API.GetVPN(d.Id())
if err != nil {
if strings.Contains(err.Error(), "404") {
d.SetId("")
return nil
}
return err
}
base64_str, err := config.API.GetVPNConfigFile(d.Id())
if err != nil {
if strings.Contains(err.Error(), "404") {
d.SetId("")
return nil
}
return err
}
var download_path string
if raw, ok := d.GetOk("download_path"); ok {
download_path = raw.(string)
}
path, fileName, err := writeConfig(vpn, download_path, base64_str)
if err != nil {
return err
}
d.Set("name", vpn.Name)
d.Set("description", vpn.Description)
d.Set("download_path", path)
d.Set("file_name", fileName)
d.Set("datacenter", vpn.Datacenter.CountryCode)
return nil
}
func writeConfig(vpn *oneandone.VPN, path, base64config string) (string, string, error) {
data, err := base64.StdEncoding.DecodeString(base64config)
if err != nil {
return "", "", err
}
var fileName string
if vpn.CloudPanelId != "" {
fileName = vpn.CloudPanelId + ".zip"
} else {
fileName = "vpn_" + fmt.Sprintf("%x", md5.Sum(data)) + ".zip"
}
if path == "" {
path, err = os.Getwd()
if err != nil {
return "", "", err
}
}
if !fp.IsAbs(path) {
path, err = fp.Abs(path)
if err != nil {
return "", "", err
}
}
_, err = os.Stat(path)
if err != nil {
if os.IsNotExist(err) {
// create any missing directories; directories need the execute bit, so 0755 rather than 0666
if err = os.MkdirAll(path, 0755); err != nil {
return "", "", err
}
} else {
return "", "", err
}
}
fpath := fp.Join(path, fileName)
f, err := os.OpenFile(fpath, os.O_CREATE|os.O_WRONLY, 0666)
if err != nil {
return "", "", err
}
defer f.Close()
n, err := f.Write(data)
if err == nil && n < len(data) {
err = io.ErrShortWrite
}
if err != nil {
return "", "", err
}
return path, fileName, nil
}
func resourceOneandOneVPNDelete(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
vpn, err := config.API.DeleteVPN(d.Id())
if err != nil {
return err
}
err = config.API.WaitUntilDeleted(vpn)
if err != nil {
return err
}
fullPath := fp.Join(d.Get("download_path").(string), d.Get("file_name").(string))
if _, err := os.Stat(fullPath); !os.IsNotExist(err) {
os.Remove(fullPath)
}
return nil
}

View File

@ -0,0 +1,119 @@
package oneandone
import (
"fmt"
"testing"
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"os"
"time"
)
func TestAccOneandoneVpn_Basic(t *testing.T) {
var server oneandone.VPN
name := "test"
name_updated := "test1"
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Providers: testAccProviders,
CheckDestroy: testAccCheckDOneandoneVPNDestroyCheck,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandoneVPN_basic, name),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandoneVPNExists("oneandone_vpn.vpn", &server),
testAccCheckOneandoneVPNAttributes("oneandone_vpn.vpn", name),
resource.TestCheckResourceAttr("oneandone_vpn.vpn", "name", name),
),
},
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandoneVPN_basic, name_updated),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandoneVPNExists("oneandone_vpn.vpn", &server),
testAccCheckOneandoneVPNAttributes("oneandone_vpn.vpn", name_updated),
resource.TestCheckResourceAttr("oneandone_vpn.vpn", "name", name_updated),
),
},
},
})
}
func testAccCheckDOneandoneVPNDestroyCheck(s *terraform.State) error {
for _, rs := range s.RootModule().Resources {
if rs.Type != "oneandone_server" {
continue
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
_, err := api.GetVPN(rs.Primary.ID)
if err == nil {
return fmt.Errorf("VPN still exists %s %s", rs.Primary.ID, err.Error())
}
}
return nil
}
func testAccCheckOneandoneVPNAttributes(n string, expectedName string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.Attributes["name"] != expectedName {
return fmt.Errorf("Bad name: expected %s, found %s", expectedName, rs.Primary.Attributes["name"])
}
return nil
}
}
func testAccCheckOneandoneVPNExists(n string, server *oneandone.VPN) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No Record ID is set")
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
found_server, err := api.GetVPN(rs.Primary.ID)
if err != nil {
return fmt.Errorf("Error occured while fetching VPN: %s", rs.Primary.ID)
}
if found_server.Id != rs.Primary.ID {
return fmt.Errorf("Record not found")
}
*server = *found_server
return nil
}
}
const testAccCheckOneandoneVPN_basic = `
resource "oneandone_vpn" "vpn" {
datacenter = "GB"
name = "%s"
description = "ttest descr"
}`

View File

@ -0,0 +1,256 @@
package oneandone
import (
"fmt"
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/schema"
"strings"
)
func resourceOneandOneSharedStorage() *schema.Resource {
return &schema.Resource{
Create: resourceOneandOneSharedStorageCreate,
Read: resourceOneandOneSharedStorageRead,
Update: resourceOneandOneSharedStorageUpdate,
Delete: resourceOneandOneSharedStorageDelete,
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
},
"description": {
Type: schema.TypeString,
Optional: true,
},
"size": {
Type: schema.TypeInt,
Required: true,
},
"datacenter": {
Type: schema.TypeString,
Required: true,
},
"storage_servers": {
Type: schema.TypeList,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"id": {
Type: schema.TypeString,
Required: true,
},
"rights": {
Type: schema.TypeString,
Required: true,
},
},
},
Optional: true,
},
},
}
}
func resourceOneandOneSharedStorageCreate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
req := oneandone.SharedStorageRequest{
Name: d.Get("name").(string),
Size: oneandone.Int2Pointer(d.Get("size").(int)),
}
if raw, ok := d.GetOk("description"); ok {
req.Description = raw.(string)
}
if raw, ok := d.GetOk("datacenter"); ok {
dcs, err := config.API.ListDatacenters()
if err != nil {
return fmt.Errorf("An error occured while fetching list of datacenters %s", err)
}
dcName := raw.(string)
for _, dc := range dcs {
if strings.EqualFold(dc.CountryCode, dcName) {
req.DatacenterId = dc.Id
break
}
}
}
ss_id, ss, err := config.API.CreateSharedStorage(&req)
if err != nil {
return err
}
err = config.API.WaitForState(ss, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
d.SetId(ss_id)
if raw, ok := d.GetOk("storage_servers"); ok {
storage_servers := []oneandone.SharedStorageServer{}
rawRights := raw.([]interface{})
for _, raws_ss := range rawRights {
ss := raws_ss.(map[string]interface{})
storage_server := oneandone.SharedStorageServer{
Id: ss["id"].(string),
Rights: ss["rights"].(string),
}
storage_servers = append(storage_servers, storage_server)
}
ss, err := config.API.AddSharedStorageServers(ss_id, storage_servers)
if err != nil {
return err
}
err = config.API.WaitForState(ss, "ACTIVE", 10, 30)
if err != nil {
return err
}
}
return resourceOneandOneSharedStorageRead(d, meta)
}
func resourceOneandOneSharedStorageUpdate(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
if d.HasChange("name") || d.HasChange("description") || d.HasChange("size") {
ssu := oneandone.SharedStorageRequest{}
if d.HasChange("name") {
_, n := d.GetChange("name")
ssu.Name = n.(string)
}
if d.HasChange("description") {
_, n := d.GetChange("description")
ssu.Description = n.(string)
}
if d.HasChange("size") {
_, n := d.GetChange("size")
ssu.Size = oneandone.Int2Pointer(n.(int))
}
ss, err := config.API.UpdateSharedStorage(d.Id(), &ssu)
if err != nil {
return err
}
err = config.API.WaitForState(ss, "ACTIVE", 10, 30)
if err != nil {
return err
}
}
if d.HasChange("storage_servers") {
o, n := d.GetChange("storage_servers")
oldV := o.([]interface{})
for _, old := range oldV {
ol := old.(map[string]interface{})
ss, err := config.API.DeleteSharedStorageServer(d.Id(), ol["id"].(string))
if err != nil {
return err
}
err = config.API.WaitForState(ss, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
}
newV := n.([]interface{})
ids := []oneandone.SharedStorageServer{}
for _, newValue := range newV {
nn := newValue.(map[string]interface{})
ids = append(ids, oneandone.SharedStorageServer{
Id: nn["id"].(string),
Rights: nn["rights"].(string),
})
}
if len(ids) > 0 {
ss, err := config.API.AddSharedStorageServers(d.Id(), ids)
if err != nil {
return err
}
err = config.API.WaitForState(ss, "ACTIVE", 10, config.Retries)
if err != nil {
return err
}
}
}
return resourceOneandOneSharedStorageRead(d, meta)
}
func resourceOneandOneSharedStorageRead(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
ss, err := config.API.GetSharedStorage(d.Id())
if err != nil {
if strings.Contains(err.Error(), "404") {
d.SetId("")
return nil
}
return err
}
d.Set("name", ss.Name)
d.Set("description", ss.Description)
d.Set("size", ss.Size)
d.Set("datacenter", ss.Datacenter.CountryCode)
d.Set("storage_servers", getStorageServers(ss.Servers))
return nil
}
func getStorageServers(servers []oneandone.SharedStorageServer) []map[string]interface{} {
raw := make([]map[string]interface{}, 0, len(servers))
for _, server := range servers {
toadd := map[string]interface{}{
"id": server.Id,
"rights": server.Rights,
}
raw = append(raw, toadd)
}
return raw
}
func resourceOneandOneSharedStorageDelete(d *schema.ResourceData, meta interface{}) error {
config := meta.(*Config)
ss, err := config.API.DeleteSharedStorage(d.Id())
if err != nil {
return err
}
err = config.API.WaitUntilDeleted(ss)
if err != nil {
return err
}
return nil
}

View File

@ -0,0 +1,120 @@
package oneandone
import (
"fmt"
"testing"
"github.com/1and1/oneandone-cloudserver-sdk-go"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
"os"
"time"
)
func TestAccOneandoneSharedStorage_Basic(t *testing.T) {
var storage oneandone.SharedStorage
name := "test_storage"
name_updated := "test1"
resource.Test(t, resource.TestCase{
PreCheck: func() {
testAccPreCheck(t)
},
Providers: testAccProviders,
CheckDestroy: testAccCheckDOneandoneSharedStorageDestroyCheck,
Steps: []resource.TestStep{
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandoneSharedStorage_basic, name),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandoneSharedStorageExists("oneandone_shared_storage.storage", &storage),
testAccCheckOneandoneSharedStorageAttributes("oneandone_shared_storage.storage", name),
resource.TestCheckResourceAttr("oneandone_shared_storage.storage", "name", name),
),
},
resource.TestStep{
Config: fmt.Sprintf(testAccCheckOneandoneSharedStorage_basic, name_updated),
Check: resource.ComposeTestCheckFunc(
func(*terraform.State) error {
time.Sleep(10 * time.Second)
return nil
},
testAccCheckOneandoneSharedStorageExists("oneandone_shared_storage.storage", &storage),
testAccCheckOneandoneSharedStorageAttributes("oneandone_shared_storage.storage", name_updated),
resource.TestCheckResourceAttr("oneandone_shared_storage.storage", "name", name_updated),
),
},
},
})
}
func testAccCheckDOneandoneSharedStorageDestroyCheck(s *terraform.State) error {
for _, rs := range s.RootModule().Resources {
if rs.Type != "oneandone_shared_storage" {
continue
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
_, err := api.GetSharedStorage(rs.Primary.ID)
if err == nil {
return fmt.Errorf("Shared storage still exists: %s", rs.Primary.ID)
}
}
return nil
}
func testAccCheckOneandoneSharedStorageAttributes(n string, expectedName string) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.Attributes["name"] != expectedName {
return fmt.Errorf("Bad name: expected %s, found %s", expectedName, rs.Primary.Attributes["name"])
}
return nil
}
}
func testAccCheckOneandoneSharedStorageExists(n string, storage *oneandone.SharedStorage) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[n]
if !ok {
return fmt.Errorf("Not found: %s", n)
}
if rs.Primary.ID == "" {
return fmt.Errorf("No Record ID is set")
}
api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
found_storage, err := api.GetSharedStorage(rs.Primary.ID)
if err != nil {
return fmt.Errorf("Error occured while fetching SharedStorage: %s", rs.Primary.ID)
}
if found_storage.Id != rs.Primary.ID {
return fmt.Errorf("Record not found")
}
*storage = *found_storage
return nil
}
}
const testAccCheckOneandoneSharedStorage_basic = `
resource "oneandone_shared_storage" "storage" {
name = "%s"
description = "ttt"
size = 50
datacenter = "GB"
}`

View File

@ -80,6 +80,7 @@ func resourceInstance() *schema.Resource {
"label": {
Type: schema.TypeString,
Optional: true,
Computed: true,
ForceNew: true,
},

View File

@ -136,6 +136,27 @@ func TestAccOPCInstance_storage(t *testing.T) {
})
}
func TestAccOPCInstance_emptyLabel(t *testing.T) {
resName := "opc_compute_instance.test"
rInt := acctest.RandInt()
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccOPCCheckInstanceDestroy,
Steps: []resource.TestStep{
{
Config: testAccInstanceEmptyLabel(rInt),
Check: resource.ComposeTestCheckFunc(
testAccOPCCheckInstanceExists,
resource.TestCheckResourceAttr(resName, "name", fmt.Sprintf("acc-test-instance-%d", rInt)),
resource.TestCheckResourceAttrSet(resName, "label"),
),
},
},
})
}
func testAccOPCCheckInstanceExists(s *terraform.State) error {
client := testAccProvider.Meta().(*compute.Client).Instances()
@ -271,3 +292,17 @@ resource "opc_compute_instance" "test" {
}
}`, rInt, rInt, rInt)
}
func testAccInstanceEmptyLabel(rInt int) string {
return fmt.Sprintf(`
resource "opc_compute_instance" "test" {
name = "acc-test-instance-%d"
shape = "oc3"
image_list = "/oracle/public/oel_6.7_apaas_16.4.5_1610211300"
instance_attributes = <<JSON
{
"foo": "bar"
}
JSON
}`, rInt)
}

@ -11,10 +11,10 @@ sudo rabbitmq-plugins enable rabbitmq_management
sudo wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
sudo chmod +x /usr/local/bin/gimme
gimme 1.6 >> .bashrc
gimme 1.8 >> .bashrc
mkdir ~/go
eval "$(/usr/local/bin/gimme 1.6)"
eval "$(/usr/local/bin/gimme 1.8)"
echo 'export GOPATH=$HOME/go' >> .bashrc
export GOPATH=$HOME/go
@ -24,3 +24,9 @@ source .bashrc
go get -u github.com/kardianos/govendor
go get github.com/hashicorp/terraform
cat <<EOF > ~/rabbitmqrc
export RABBITMQ_ENDPOINT="http://127.0.0.1:15672"
export RABBITMQ_USERNAME="guest"
export RABBITMQ_PASSWORD="guest"
EOF

@ -11,7 +11,7 @@ import (
"github.com/hashicorp/terraform/terraform"
)
func TestAccBinding(t *testing.T) {
func TestAccBinding_basic(t *testing.T) {
var bindingInfo rabbithole.BindingInfo
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@ -28,6 +28,23 @@ func TestAccBinding(t *testing.T) {
})
}
func TestAccBinding_propertiesKey(t *testing.T) {
var bindingInfo rabbithole.BindingInfo
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccBindingCheckDestroy(bindingInfo),
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccBindingConfig_propertiesKey,
Check: testAccBindingCheck(
"rabbitmq_binding.test", &bindingInfo,
),
},
},
})
}
func testAccBindingCheck(rn string, bindingInfo *rabbithole.BindingInfo) resource.TestCheckFunc {
return func(s *terraform.State) error {
rs, ok := s.RootModule().Resources[rn]
@ -119,3 +136,47 @@ resource "rabbitmq_binding" "test" {
routing_key = "#"
properties_key = "%23"
}`
const testAccBindingConfig_propertiesKey = `
resource "rabbitmq_vhost" "test" {
name = "test"
}
resource "rabbitmq_permissions" "guest" {
user = "guest"
vhost = "${rabbitmq_vhost.test.name}"
permissions {
configure = ".*"
write = ".*"
read = ".*"
}
}
resource "rabbitmq_exchange" "test" {
name = "Test"
vhost = "${rabbitmq_permissions.guest.vhost}"
settings {
type = "topic"
durable = true
auto_delete = false
}
}
resource "rabbitmq_queue" "test" {
name = "Test.Queue"
vhost = "${rabbitmq_permissions.guest.vhost}"
settings {
durable = true
auto_delete = false
}
}
resource "rabbitmq_binding" "test" {
source = "${rabbitmq_exchange.test.name}"
vhost = "${rabbitmq_vhost.test.name}"
destination = "${rabbitmq_queue.test.name}"
destination_type = "queue"
routing_key = "ANYTHING.#"
properties_key = "ANYTHING.%23"
}
`

@ -76,14 +76,12 @@ func resourceRancherStack() *schema.Resource {
Optional: true,
},
"rendered_docker_compose": {
Type: schema.TypeString,
Computed: true,
DiffSuppressFunc: suppressComposeDiff,
Type: schema.TypeString,
Computed: true,
},
"rendered_rancher_compose": {
Type: schema.TypeString,
Computed: true,
DiffSuppressFunc: suppressComposeDiff,
Type: schema.TypeString,
Computed: true,
},
},
}

@ -140,7 +140,7 @@ func renderPartsToWriter(parts cloudInitParts, writer io.Writer) error {
}
writer.Write([]byte(fmt.Sprintf("Content-Type: multipart/mixed; boundary=\"%s\"\n", mimeWriter.Boundary())))
writer.Write([]byte("MIME-Version: 1.0\r\n"))
writer.Write([]byte("MIME-Version: 1.0\r\n\r\n"))
for _, part := range parts {
header := textproto.MIMEHeader{}

@ -22,7 +22,7 @@ func TestRender(t *testing.T) {
content = "baz"
}
}`,
"Content-Type: multipart/mixed; boundary=\"MIMEBOUNDARY\"\nMIME-Version: 1.0\r\n--MIMEBOUNDARY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDARY--\r\n",
"Content-Type: multipart/mixed; boundary=\"MIMEBOUNDARY\"\nMIME-Version: 1.0\r\n\r\n--MIMEBOUNDARY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDARY--\r\n",
},
{
`data "template_cloudinit_config" "foo" {
@ -35,7 +35,7 @@ func TestRender(t *testing.T) {
filename = "foobar.sh"
}
}`,
"Content-Type: multipart/mixed; boundary=\"MIMEBOUNDARY\"\nMIME-Version: 1.0\r\n--MIMEBOUNDARY\r\nContent-Disposition: attachment; filename=\"foobar.sh\"\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDARY--\r\n",
"Content-Type: multipart/mixed; boundary=\"MIMEBOUNDARY\"\nMIME-Version: 1.0\r\n\r\n--MIMEBOUNDARY\r\nContent-Disposition: attachment; filename=\"foobar.sh\"\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDARY--\r\n",
},
{
`data "template_cloudinit_config" "foo" {
@ -51,7 +51,7 @@ func TestRender(t *testing.T) {
content = "ffbaz"
}
}`,
"Content-Type: multipart/mixed; boundary=\"MIMEBOUNDARY\"\nMIME-Version: 1.0\r\n--MIMEBOUNDARY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDARY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nffbaz\r\n--MIMEBOUNDARY--\r\n",
"Content-Type: multipart/mixed; boundary=\"MIMEBOUNDARY\"\nMIME-Version: 1.0\r\n\r\n--MIMEBOUNDARY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDARY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nffbaz\r\n--MIMEBOUNDARY--\r\n",
},
}

@ -30,6 +30,7 @@ func (c *InitCommand) Run(args []string) int {
cmdFlags.BoolVar(&c.forceInitCopy, "force-copy", false, "suppress prompts about copying state data")
cmdFlags.BoolVar(&c.Meta.stateLock, "lock", true, "lock state")
cmdFlags.DurationVar(&c.Meta.stateLockTimeout, "lock-timeout", 0, "lock timeout")
cmdFlags.BoolVar(&c.reconfigure, "reconfigure", false, "reconfigure")
cmdFlags.Usage = func() { c.Ui.Error(c.Help()) }
if err := cmdFlags.Parse(args); err != nil {
@ -223,6 +224,10 @@ Options:
times. The backend type must be in the configuration
itself.
-force-copy Suppress prompts about copying state data. This is
equivalent to providing a "yes" to all confirmation
prompts.
-get=true Download any modules for this configuration.
-input=true Ask for input if necessary. If false, will error if
@ -234,9 +239,7 @@ Options:
-no-color If specified, output won't contain any color.
-force-copy Suppress prompts about copying state data. This is
equivalent to providing a "yes" to all confirmation
prompts.
-reconfigure Reconfigure the backend, ignoring any saved configuration.
`
return strings.TrimSpace(helpText)
}

@ -47,6 +47,7 @@ import (
nomadprovider "github.com/hashicorp/terraform/builtin/providers/nomad"
ns1provider "github.com/hashicorp/terraform/builtin/providers/ns1"
nullprovider "github.com/hashicorp/terraform/builtin/providers/null"
oneandoneprovider "github.com/hashicorp/terraform/builtin/providers/oneandone"
opcprovider "github.com/hashicorp/terraform/builtin/providers/opc"
openstackprovider "github.com/hashicorp/terraform/builtin/providers/openstack"
opsgenieprovider "github.com/hashicorp/terraform/builtin/providers/opsgenie"
@ -125,6 +126,7 @@ var InternalProviders = map[string]plugin.ProviderFunc{
"nomad": nomadprovider.Provider,
"ns1": ns1provider.Provider,
"null": nullprovider.Provider,
"oneandone": oneandoneprovider.Provider,
"opc": opcprovider.Provider,
"openstack": openstackprovider.Provider,
"opsgenie": opsgenieprovider.Provider,

@ -95,6 +95,8 @@ type Meta struct {
//
// forceInitCopy suppresses confirmation for copying state data during
// init.
//
// reconfigure forces init to ignore any stored configuration.
statePath string
stateOutPath string
backupPath string
@ -104,6 +106,7 @@ type Meta struct {
stateLock bool
stateLockTimeout time.Duration
forceInitCopy bool
reconfigure bool
}
// initStatePaths is used to initialize the default values for

@ -352,6 +352,13 @@ func (m *Meta) backendFromConfig(opts *BackendOpts) (backend.Backend, error) {
s = terraform.NewState()
}
// if we want to force reconfiguration of the backend, we set the backend
// state to nil on this copy. This will direct us through the correct
// configuration path in the switch statement below.
if m.reconfigure {
s.Backend = nil
}
// Upon return, we want to set the state we're using in-memory so that
// we can access it for commands.
m.backendState = nil

@ -983,6 +983,59 @@ func TestMetaBackend_configuredChange(t *testing.T) {
}
}
// Reconfiguring with an already configured backend.
// This should ignore the existing backend config, and configure the new
// backend as if this were the first time.
func TestMetaBackend_reconfigureChange(t *testing.T) {
// Create a temporary working directory that is empty
td := tempDir(t)
copy.CopyDir(testFixturePath("backend-change-single-to-single"), td)
defer os.RemoveAll(td)
defer testChdir(t, td)()
// Register the single-state backend
backendinit.Set("local-single", backendlocal.TestNewLocalSingle)
defer backendinit.Set("local-single", nil)
// Setup the meta
m := testMetaBackend(t, nil)
// this should not ask for input
m.input = false
// cli flag -reconfigure
m.reconfigure = true
// Get the backend
b, err := m.Backend(&BackendOpts{Init: true})
if err != nil {
t.Fatalf("bad: %s", err)
}
// Check the state
s, err := b.State(backend.DefaultStateName)
if err != nil {
t.Fatalf("bad: %s", err)
}
if err := s.RefreshState(); err != nil {
t.Fatalf("bad: %s", err)
}
newState := s.State()
if newState != nil && !newState.Empty() {
t.Fatal("state should be nil/empty after forced reconfiguration")
}
// verify that the old state is still there
s = (&state.LocalState{Path: "local-state.tfstate"})
if err := s.RefreshState(); err != nil {
t.Fatal(err)
}
oldState := s.State()
if oldState == nil || oldState.Empty() {
t.Fatal("original state should be untouched")
}
}
// Changing a configured backend, copying state
func TestMetaBackend_configuredChangeCopy(t *testing.T) {
// Create a temporary working directory that is empty

@ -63,12 +63,14 @@ func Funcs() map[string]ast.Function {
"cidrnetmask": interpolationFuncCidrNetmask(),
"cidrsubnet": interpolationFuncCidrSubnet(),
"coalesce": interpolationFuncCoalesce(),
"coalescelist": interpolationFuncCoalesceList(),
"compact": interpolationFuncCompact(),
"concat": interpolationFuncConcat(),
"dirname": interpolationFuncDirname(),
"distinct": interpolationFuncDistinct(),
"element": interpolationFuncElement(),
"file": interpolationFuncFile(),
"matchkeys": interpolationFuncMatchKeys(),
"floor": interpolationFuncFloor(),
"format": interpolationFuncFormat(),
"formatlist": interpolationFuncFormatList(),
@ -323,6 +325,30 @@ func interpolationFuncCoalesce() ast.Function {
}
}
// interpolationFuncCoalesceList implements the "coalescelist" function that
// returns the first non-empty list from the provided input lists
func interpolationFuncCoalesceList() ast.Function {
return ast.Function{
ArgTypes: []ast.Type{ast.TypeList},
ReturnType: ast.TypeList,
Variadic: true,
VariadicType: ast.TypeList,
Callback: func(args []interface{}) (interface{}, error) {
if len(args) < 2 {
return nil, fmt.Errorf("must provide at least two arguments")
}
for _, arg := range args {
argument := arg.([]ast.Variable)
if len(argument) > 0 {
return argument, nil
}
}
return make([]ast.Variable, 0), nil
},
}
}
// interpolationFuncConcat implements the "concat" function that concatenates
// multiple lists.
func interpolationFuncConcat() ast.Function {
@ -668,6 +694,57 @@ func appendIfMissing(slice []string, element string) []string {
return append(slice, element)
}
// for two lists `keys` and `values` of equal length, returns all elements
// from `values` where the corresponding element from `keys` is in `searchset`.
func interpolationFuncMatchKeys() ast.Function {
return ast.Function{
ArgTypes: []ast.Type{ast.TypeList, ast.TypeList, ast.TypeList},
ReturnType: ast.TypeList,
Callback: func(args []interface{}) (interface{}, error) {
output := make([]ast.Variable, 0)
values, _ := args[0].([]ast.Variable)
keys, _ := args[1].([]ast.Variable)
searchset, _ := args[2].([]ast.Variable)
if len(keys) != len(values) {
return nil, fmt.Errorf("length of keys and values should be equal")
}
for i, key := range keys {
for _, search := range searchset {
if res, err := compareSimpleVariables(key, search); err != nil {
return nil, err
} else if res {
output = append(output, values[i])
break
}
}
}
// if searchset is empty, then output is an empty list as well.
// if we haven't matched any key, then output is an empty list.
return output, nil
},
}
}
// compareSimpleVariables compares two variables of the same simple type;
// complex types such as TypeList or TypeMap are not supported.
func compareSimpleVariables(a, b ast.Variable) (bool, error) {
if a.Type != b.Type {
return false, fmt.Errorf(
"won't compare items of different types %s and %s",
a.Type.Printable(), b.Type.Printable())
}
switch a.Type {
case ast.TypeString:
return a.Value.(string) == b.Value.(string), nil
default:
return false, fmt.Errorf(
"can't compare items of type %s",
a.Type.Printable())
}
}
// interpolationFuncJoin implements the "join" function that allows
// multi-variable values to be joined by some character.
func interpolationFuncJoin() ast.Function {

@ -684,6 +684,33 @@ func TestInterpolateFuncCoalesce(t *testing.T) {
})
}
func TestInterpolateFuncCoalesceList(t *testing.T) {
testFunction(t, testFunctionConfig{
Cases: []testFunctionCase{
{
`${coalescelist(list("first"), list("second"), list("third"))}`,
[]interface{}{"first"},
false,
},
{
`${coalescelist(list(), list("second"), list("third"))}`,
[]interface{}{"second"},
false,
},
{
`${coalescelist(list(), list(), list())}`,
[]interface{}{},
false,
},
{
`${coalescelist(list("foo"))}`,
nil,
true,
},
},
})
}
func TestInterpolateFuncConcat(t *testing.T) {
testFunction(t, testFunctionConfig{
Cases: []testFunctionCase{
@ -964,6 +991,74 @@ func TestInterpolateFuncDistinct(t *testing.T) {
})
}
func TestInterpolateFuncMatchKeys(t *testing.T) {
testFunction(t, testFunctionConfig{
Cases: []testFunctionCase{
// normal usage
{
`${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list("ref2"))}`,
[]interface{}{"b"},
false,
},
// normal usage 2, check the order
{
`${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list("ref2", "ref1"))}`,
[]interface{}{"a", "b"},
false,
},
// duplicate item in searchset
{
`${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list("ref2", "ref2"))}`,
[]interface{}{"b"},
false,
},
// no matches
{
`${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list("ref4"))}`,
[]interface{}{},
false,
},
// no matches 2
{
`${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list())}`,
[]interface{}{},
false,
},
// zero case
{
`${matchkeys(list(), list(), list("nope"))}`,
[]interface{}{},
false,
},
// complex values
{
`${matchkeys(list(list("a", "a")), list("a"), list("a"))}`,
[]interface{}{[]interface{}{"a", "a"}},
false,
},
// errors
// different types
{
`${matchkeys(list("a"), list(1), list("a"))}`,
nil,
true,
},
// different types
{
`${matchkeys(list("a"), list(list("a"), list("a")), list("a"))}`,
nil,
true,
},
// lists of different length is an error
{
`${matchkeys(list("a"), list("a", "b"), list("a"))}`,
nil,
true,
},
},
})
}
func TestInterpolateFuncFile(t *testing.T) {
tf, err := ioutil.TempFile("", "tf")
if err != nil {

@ -645,6 +645,19 @@ func (m schemaMap) InternalValidate(topSchemaMap schemaMap) error {
}
}
// Computed-only field
if v.Computed && !v.Optional {
if v.ValidateFunc != nil {
return fmt.Errorf("%s: ValidateFunc is for validating user input, "+
"there's nothing to validate on computed-only field", k)
}
if v.DiffSuppressFunc != nil {
return fmt.Errorf("%s: DiffSuppressFunc is for suppressing differences"+
" between config and state representation. "+
"There is no config for computed-only field, nothing to compare.", k)
}
}
if v.ValidateFunc != nil {
switch v.Type {
case TypeList, TypeSet:
@ -744,6 +757,7 @@ func (m schemaMap) diffList(
diff.Attributes[k+".#"] = &terraform.ResourceAttrDiff{
Old: oldStr,
NewComputed: true,
RequiresNew: schema.ForceNew,
}
return nil
}

@ -2777,6 +2777,52 @@ func TestSchemaMap_Diff(t *testing.T) {
},
},
},
{
Name: "List with computed schema and ForceNew",
Schema: map[string]*Schema{
"config": &Schema{
Type: TypeList,
Optional: true,
ForceNew: true,
Elem: &Schema{
Type: TypeString,
},
},
},
State: &terraform.InstanceState{
Attributes: map[string]string{
"config.#": "2",
"config.0": "a",
"config.1": "b",
},
},
Config: map[string]interface{}{
"config": []interface{}{"${var.a}", "${var.b}"},
},
ConfigVariables: map[string]ast.Variable{
"var.a": interfaceToVariableSwallowError(
config.UnknownVariableValue),
"var.b": interfaceToVariableSwallowError(
config.UnknownVariableValue),
},
Diff: &terraform.InstanceDiff{
Attributes: map[string]*terraform.ResourceAttrDiff{
"config.#": &terraform.ResourceAttrDiff{
Old: "2",
New: "",
RequiresNew: true,
NewComputed: true,
},
},
},
Err: false,
},
}
for i, tc := range cases {
@ -3279,16 +3325,46 @@ func TestSchemaMap_InternalValidate(t *testing.T) {
},
true,
},
"computed-only field with validateFunc": {
map[string]*Schema{
"string": &Schema{
Type: TypeString,
Computed: true,
ValidateFunc: func(v interface{}, k string) (ws []string, es []error) {
es = append(es, fmt.Errorf("this is not fine"))
return
},
},
},
true,
},
"computed-only field with diffSuppressFunc": {
map[string]*Schema{
"string": &Schema{
Type: TypeString,
Computed: true,
DiffSuppressFunc: func(k, old, new string, d *ResourceData) bool {
// Always suppress any diff
return false
},
},
},
true,
},
}
for tn, tc := range cases {
err := schemaMap(tc.In).InternalValidate(nil)
if err != nil != tc.Err {
if tc.Err {
t.Fatalf("%q: Expected error did not occur:\n\n%#v", tn, tc.In)
t.Run(tn, func(t *testing.T) {
err := schemaMap(tc.In).InternalValidate(nil)
if err != nil != tc.Err {
if tc.Err {
t.Fatalf("%q: Expected error did not occur:\n\n%#v", tn, tc.In)
}
t.Fatalf("%q: Unexpected error occurred: %s\n\n%#v", tn, err, tc.In)
}
t.Fatalf("%q: Unexpected error occurred:\n\n%#v", tn, tc.In)
}
})
}
}

@ -4430,6 +4430,7 @@ func TestContext2Apply_provisionerDestroyFailContinue(t *testing.T) {
p.ApplyFn = testApplyFn
p.DiffFn = testDiffFn
var l sync.Mutex
var calls []string
pr.ApplyFn = func(rs *InstanceState, c *ResourceConfig) error {
val, ok := c.Config["foo"]
@ -4437,6 +4438,8 @@ func TestContext2Apply_provisionerDestroyFailContinue(t *testing.T) {
t.Fatalf("bad value for foo: %v %#v", val, c)
}
l.Lock()
defer l.Unlock()
calls = append(calls, val.(string))
return fmt.Errorf("provisioner error")
}
@ -4501,6 +4504,7 @@ func TestContext2Apply_provisionerDestroyFailContinueFail(t *testing.T) {
p.ApplyFn = testApplyFn
p.DiffFn = testDiffFn
var l sync.Mutex
var calls []string
pr.ApplyFn = func(rs *InstanceState, c *ResourceConfig) error {
val, ok := c.Config["foo"]
@ -4508,6 +4512,8 @@ func TestContext2Apply_provisionerDestroyFailContinueFail(t *testing.T) {
t.Fatalf("bad value for foo: %v %#v", val, c)
}
l.Lock()
defer l.Unlock()
calls = append(calls, val.(string))
return fmt.Errorf("provisioner error")
}

@ -532,6 +532,9 @@ func TestContext2Plan_moduleProviderInherit(t *testing.T) {
state *InstanceState,
c *ResourceConfig) (*InstanceDiff, error) {
v, _ := c.Get("from")
l.Lock()
defer l.Unlock()
calls = append(calls, v.(string))
return testDiffFn(info, state, c)
}
@ -628,6 +631,9 @@ func TestContext2Plan_moduleProviderDefaults(t *testing.T) {
state *InstanceState,
c *ResourceConfig) (*InstanceDiff, error) {
v, _ := c.Get("from")
l.Lock()
defer l.Unlock()
calls = append(calls, v.(string))
return testDiffFn(info, state, c)
}
@ -677,6 +683,8 @@ func TestContext2Plan_moduleProviderDefaultsVar(t *testing.T) {
buf.WriteString(v.(string) + "\n")
}
l.Lock()
defer l.Unlock()
calls = append(calls, buf.String())
return nil
}

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2016 1&1 Internet SE
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

File diff suppressed because it is too large

@ -0,0 +1,36 @@
package oneandone
import "net/http"
type Datacenter struct {
idField
CountryCode string `json:"country_code,omitempty"`
Location string `json:"location,omitempty"`
}
// GET /datacenters
func (api *API) ListDatacenters(args ...interface{}) ([]Datacenter, error) {
url, err := processQueryParams(createUrl(api, datacenterPathSegment), args...)
if err != nil {
return nil, err
}
result := []Datacenter{}
err = api.Client.Get(url, &result, http.StatusOK)
if err != nil {
return nil, err
}
return result, nil
}
// GET /datacenters/{datacenter_id}
func (api *API) GetDatacenter(dc_id string) (*Datacenter, error) {
result := new(Datacenter)
url := createUrl(api, datacenterPathSegment, dc_id)
err := api.Client.Get(url, result, http.StatusOK)
if err != nil {
return nil, err
}
return result, nil
}

Some files were not shown because too many files have changed in this diff.