diff --git a/CHANGELOG.md b/CHANGELOG.md index 704ec194d..f882347d8 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,10 +1,21 @@ ## 0.9.4 (Unreleased) +BACKWARDS INCOMPATIBILITIES / NOTES: + + * provider/template: Fix invalid MIME formatting in `template_cloudinit_config`. + While the change itself is not breaking, the data source may be referenced + in e.g. `aws_launch_configuration` and similar resources, which are immutable; + the formatting change will therefore trigger recreation [GH-13752] + FEATURES: * **New Provider:** `opc` - Oracle Public Cloud [GH-13468] +* **New Provider:** `oneandone` [GH-13633] +* **New Data Source:** `aws_ami_ids` [GH-13844] +* **New Data Source:** `aws_ebs_snapshot_ids` [GH-13844] * **New Data Source:** `aws_kms_alias` [GH-13669] * **New Data Source:** `aws_kinesis_stream` [GH-13562] +* **New Data Source:** `digitalocean_image` [GH-13787] * **New Data Source:** `google_compute_network` [GH-12442] * **New Data Source:** `google_compute_subnetwork` [GH-12442] * **New Resource:** `local_file` for creating local files (please see the docs for caveats) [GH-12757] @@ -14,46 +25,77 @@ FEATURES: * **New Resource:** `alicloud_ess_schedule` [GH-13731] * **New Resource:** `alicloud_snat_entry` [GH-13731] * **New Resource:** `alicloud_forward_entry` [GH-13731] +* **New Resource:** `aws_cognito_identity_pool` [GH-13783] +* **New Resource:** `aws_network_interface_attachment` [GH-13861] * **New Resource:** `github_branch_protection` [GH-10476] * **New Resource:** `google_bigquery_dataset` [GH-13436] +* **New Interpolation Function:** `coalescelist()` [GH-12537] IMPROVEMENTS: + + * core: Add a `-reconfigure` flag to the `init` command, to configure a backend while ignoring any saved configuration [GH-13825] + * helper/schema: Disallow validation+diff suppression on computed fields [GH-13878] * config: The interpolation function `cidrhost` now accepts a negative host number to count backwards from the end of the range [GH-13765] + * config: 
New interpolation function `matchkeys` for using values from one list to filter corresponding values from another list using a matching set. [GH-13847] * state/remote/swift: Support Openstack request logging [GH-13583] * provider/aws: Add an option to skip getting the supported EC2 platforms [GH-13672] * provider/aws: Add `name_prefix` support to `aws_cloudwatch_log_group` [GH-13273] * provider/aws: Add `bucket_prefix` to `aws_s3_bucket` [GH-13274] + * provider/aws: Add replica_source_db to the aws_db_instance datasource [GH-13842] + * provider/aws: Add IPv6 outputs to aws_subnet datasource [GH-13841] + * provider/aws: Exercise SecondaryPrivateIpAddressCount for network interface [GH-10590] + * provider/aws: Expose execution ARN + invoke URL for APIG deployment [GH-13889] + * provider/aws: Expose invoke ARN from Lambda function (for API Gateway) [GH-13890] + * provider/aws: Add tagging support to the 'aws_lambda_function' resource [GH-13873] + * provider/aws: Validate WAF metric names [GH-13885] + * provider/aws: Allow AWS Subnet to change IPv6 CIDR Block without ForceNew [GH-13909] * provider/azurerm: VM Scale Sets - import support [GH-13464] * provider/azurerm: Allow Azure China region support [GH-13767] * provider/digitalocean: Export droplet prices [GH-13720] + * provider/fastly: Add support for GCS logging [GH-13553] * provider/google: `google_compute_address` and `google_compute_global_address` are now importable [GH-13270] + * provider/google: `google_compute_network` is now importable [GH-13834] + * provider/heroku: Set App buildpacks from config [GH-13910] * provider/vault: `vault_generic_secret` resource can now optionally detect drift if it has appropriate access [GH-11776] BUG FIXES: + * core: Prevent resource.Retry from adding untracked resources after the timeout: [GH-13778] + * core: Allow a schema.TypeList to be ForceNew and computed [GH-13863] + * core: Fix crash when refresh or apply build an invalid graph [GH-13665] * core: Add the close 
provider/provisioner transformers back [GH-13102] * core: Fix a crash condition by improving the flatmap.Expand() logic [GH-13541] * provider/alicloud: Fix create PrePaid instance [GH-13662] * provider/alicloud: Fix allocate public ip error [GH-13268] * provider/alicloud: alicloud_security_group_rule: check ptr before using it [GH-13731] * provider/alicloud: alicloud_instance: fix bug where ecs internet_max_bandwidth_out could not be set to zero [GH-13731] + * provider/aws: Allow force-destroying `aws_route53_zone` which has a trailing dot [GH-12421] + * provider/aws: Allow GovCloud KMS ARNs to pass validation in `kms_key_id` attributes [GH-13699] + * provider/aws: Changing aws_opsworks_instance should ForceNew [GH-13839] * provider/aws: Fix DB Parameter Group Name [GH-13279] + * provider/aws: Fix issue importing some Security Groups and Rules based on rule structure [GH-13630] + * provider/aws: Fix issue for cross-account IAM role with `aws_lambda_permission` [GH-13865] + * provider/aws: Fix WAF IPSet descriptors removal on update [GH-13766] * provider/aws: Increase default number of retries from 11 to 25 [GH-13673] - * provider/aws: Use mutex & retry for WAF change operations [GH-13656] * provider/aws: Remove aws_vpc_dhcp_options if not found [GH-13610] * provider/aws: Remove aws_network_acl_rule if not found [GH-13608] - * provider/aws: Allow GovCloud KMS ARNs to pass validation in `kms_key_id` attributes [GH-13699] + * provider/aws: Use mutex & retry for WAF change operations [GH-13656] + * provider/aws: Adding support for ipv6 to aws_subnets needs migration [GH-13876] * provider/azurerm: azurerm_redis_cache resource missing hostname [GH-13650] * provider/azurerm: Locking around Network Security Group / Subnets [GH-13637] * provider/azurerm: Locking route table on subnet create/delete [GH-13791] * provider/azurerm: VMs - fix a bug where ssh_keys could contain a null entry [GH-13755] + * provider/azurerm: VMs - ignore the case on the `create_option` field during diffs 
[GH-13933] + * provider/azurerm: fix a bug when refreshing the `azurerm_redis_cache` [GH-13899] * provider/fastly: Fix issue with using 0 for `default_ttl` [GH-13648] * provider/fastly: Add ability to associate a healthcheck to a backend [GH-13539] * provider/google: Stop setting the id when project creation fails [GH-13644] + * provider/google: Make ports in resource_compute_forwarding_rule ForceNew [GH-13833] * provider/logentries: Refresh from state when resources not found [GH-13810] * provider/newrelic: newrelic_alert_condition - `condition_scope` must be `application` or `instance` [GH-12972] * provider/opc: fixed issue with unqualifying nats [GH-13826] + * provider/opc: Fix instance label if unset [GH-13846] * provider/openstack: Fix updating Ports [GH-13604] * provider/rabbitmq: Allow users without tags [GH-13798] diff --git a/backend/local/backend.go b/backend/local/backend.go index 063766b1e..7c715d67a 100644 --- a/backend/local/backend.go +++ b/backend/local/backend.go @@ -170,9 +170,30 @@ func (b *Local) DeleteState(name string) error { } func (b *Local) State(name string) (state.State, error) { + statePath, stateOutPath, backupPath := b.StatePaths(name) + // If we have a backend handling state, defer to that. if b.Backend != nil { - return b.Backend.State(name) + s, err := b.Backend.State(name) + if err != nil { + return nil, err + } + + // make sure we always have a backup state, unless it's disabled + if backupPath == "" { + return s, nil + } + + // see if the delegated backend returned a BackupState of its own + if s, ok := s.(*state.BackupState); ok { + return s, nil + } + + s = &state.BackupState{ + Real: s, + Path: backupPath, + } + return s, nil } if s, ok := b.states[name]; ok { @@ -183,8 +204,6 @@ func (b *Local) State(name string) (state.State, error) { return nil, err } - statePath, stateOutPath, backupPath := b.StatePaths(name) - // Otherwise, we need to load the state. 
var s state.State = &state.LocalState{ Path: statePath, diff --git a/backend/local/backend_test.go b/backend/local/backend_test.go index 3b5f1f9bd..a32cbc7d7 100644 --- a/backend/local/backend_test.go +++ b/backend/local/backend_test.go @@ -169,6 +169,11 @@ func TestLocal_addAndRemoveStates(t *testing.T) { // verify it's being called. type testDelegateBackend struct { *Local + + // return a sentinel error on these calls + stateErr bool + statesErr bool + deleteErr bool } var errTestDelegateState = errors.New("State called") @@ -176,22 +181,39 @@ var errTestDelegateStates = errors.New("States called") var errTestDelegateDeleteState = errors.New("Delete called") func (b *testDelegateBackend) State(name string) (state.State, error) { - return nil, errTestDelegateState + if b.stateErr { + return nil, errTestDelegateState + } + s := &state.LocalState{ + Path: "terraform.tfstate", + PathOut: "terraform.tfstate", + } + return s, nil } func (b *testDelegateBackend) States() ([]string, error) { - return nil, errTestDelegateStates + if b.statesErr { + return nil, errTestDelegateStates + } + return []string{"default"}, nil } func (b *testDelegateBackend) DeleteState(name string) error { - return errTestDelegateDeleteState + if b.deleteErr { + return errTestDelegateDeleteState + } + return nil } // verify that the MultiState methods are dispatched to the correct Backend. 
func TestLocal_multiStateBackend(t *testing.T) { // assign a separate backend where we can read the state b := &Local{ - Backend: &testDelegateBackend{}, + Backend: &testDelegateBackend{ + stateErr: true, + statesErr: true, + deleteErr: true, + }, } if _, err := b.State("test"); err != errTestDelegateState { @@ -205,7 +227,43 @@ func TestLocal_multiStateBackend(t *testing.T) { if err := b.DeleteState("test"); err != errTestDelegateDeleteState { t.Fatal("expected errTestDelegateDeleteState, got:", err) } +} +// verify that a remote state backend is always wrapped in a BackupState +func TestLocal_remoteStateBackup(t *testing.T) { + // assign a separate backend to mock a remote state backend + b := &Local{ + Backend: &testDelegateBackend{}, + } + + s, err := b.State("default") + if err != nil { + t.Fatal(err) + } + + bs, ok := s.(*state.BackupState) + if !ok { + t.Fatal("remote state is not backed up") + } + + if bs.Path != DefaultStateFilename+DefaultBackupExtension { + t.Fatal("bad backup location:", bs.Path) + } + + // do the same with a named state, which should use the local env directories + s, err = b.State("test") + if err != nil { + t.Fatal(err) + } + + bs, ok = s.(*state.BackupState) + if !ok { + t.Fatal("remote state is not backed up") + } + + if bs.Path != filepath.Join(DefaultEnvDir, "test", DefaultStateFilename+DefaultBackupExtension) { + t.Fatal("bad backup location:", bs.Path) + } } // change into a tmp dir and return a deferable func to change back and cleanup diff --git a/builtin/bins/provider-localfile/main.go b/builtin/bins/provider-localfile/main.go index 4a98ecfdd..70494016f 100644 --- a/builtin/bins/provider-localfile/main.go +++ b/builtin/bins/provider-localfile/main.go @@ -1,12 +1,12 @@ package main import ( - "github.com/hashicorp/terraform/builtin/providers/localfile" + "github.com/hashicorp/terraform/builtin/providers/local" "github.com/hashicorp/terraform/plugin" ) func main() { plugin.Serve(&plugin.ServeOpts{ - ProviderFunc: 
localfile.Provider, + ProviderFunc: local.Provider, }) } diff --git a/builtin/providers/aws/config.go b/builtin/providers/aws/config.go index 17105d259..78fa93deb 100644 --- a/builtin/providers/aws/config.go +++ b/builtin/providers/aws/config.go @@ -28,6 +28,7 @@ import ( "github.com/aws/aws-sdk-go/service/codecommit" "github.com/aws/aws-sdk-go/service/codedeploy" "github.com/aws/aws-sdk-go/service/codepipeline" + "github.com/aws/aws-sdk-go/service/cognitoidentity" "github.com/aws/aws-sdk-go/service/configservice" "github.com/aws/aws-sdk-go/service/databasemigrationservice" "github.com/aws/aws-sdk-go/service/directoryservice" @@ -111,6 +112,7 @@ type AWSClient struct { cloudwatchconn *cloudwatch.CloudWatch cloudwatchlogsconn *cloudwatchlogs.CloudWatchLogs cloudwatcheventsconn *cloudwatchevents.CloudWatchEvents + cognitoconn *cognitoidentity.CognitoIdentity configconn *configservice.ConfigService dmsconn *databasemigrationservice.DatabaseMigrationService dsconn *directoryservice.DirectoryService @@ -306,6 +308,7 @@ func (c *Config) Client() (interface{}, error) { client.codebuildconn = codebuild.New(sess) client.codedeployconn = codedeploy.New(sess) client.configconn = configservice.New(sess) + client.cognitoconn = cognitoidentity.New(sess) client.dmsconn = databasemigrationservice.New(sess) client.codepipelineconn = codepipeline.New(sess) client.dsconn = directoryservice.New(sess) diff --git a/builtin/providers/aws/data_source_aws_ami_ids.go b/builtin/providers/aws/data_source_aws_ami_ids.go new file mode 100644 index 000000000..bbf4438d5 --- /dev/null +++ b/builtin/providers/aws/data_source_aws_ami_ids.go @@ -0,0 +1,112 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsAmiIds() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsAmiIdsRead, + + Schema: map[string]*schema.Schema{ 
+ "filter": dataSourceFiltersSchema(), + "executable_users": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "name_regex": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validateNameRegex, + }, + "owners": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "tags": dataSourceTagsSchema(), + "ids": &schema.Schema{ + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + }, + } +} + +func dataSourceAwsAmiIdsRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + executableUsers, executableUsersOk := d.GetOk("executable_users") + filters, filtersOk := d.GetOk("filter") + nameRegex, nameRegexOk := d.GetOk("name_regex") + owners, ownersOk := d.GetOk("owners") + + if executableUsersOk == false && filtersOk == false && nameRegexOk == false && ownersOk == false { + return fmt.Errorf("One of executable_users, filters, name_regex, or owners must be assigned") + } + + params := &ec2.DescribeImagesInput{} + + if executableUsersOk { + params.ExecutableUsers = expandStringList(executableUsers.([]interface{})) + } + if filtersOk { + params.Filters = buildAwsDataSourceFilters(filters.(*schema.Set)) + } + if ownersOk { + o := expandStringList(owners.([]interface{})) + + if len(o) > 0 { + params.Owners = o + } + } + + resp, err := conn.DescribeImages(params) + if err != nil { + return err + } + + var filteredImages []*ec2.Image + imageIds := make([]string, 0) + + if nameRegexOk { + r := regexp.MustCompile(nameRegex.(string)) + for _, image := range resp.Images { + // Check for a very rare case where the response would include no + // image name. No name means nothing to attempt a match against, + // therefore we are skipping such image. 
+ if image.Name == nil || *image.Name == "" { + log.Printf("[WARN] Unable to find AMI name to match against "+ + "for image ID %q owned by %q, nothing to do.", + *image.ImageId, *image.OwnerId) + continue + } + if r.MatchString(*image.Name) { + filteredImages = append(filteredImages, image) + } + } + } else { + filteredImages = resp.Images[:] + } + + for _, image := range filteredImages { + imageIds = append(imageIds, *image.ImageId) + } + + d.SetId(fmt.Sprintf("%d", hashcode.String(params.String()))) + d.Set("ids", imageIds) + + return nil +} diff --git a/builtin/providers/aws/data_source_aws_ami_ids_test.go b/builtin/providers/aws/data_source_aws_ami_ids_test.go new file mode 100644 index 000000000..e2a7ac2d8 --- /dev/null +++ b/builtin/providers/aws/data_source_aws_ami_ids_test.go @@ -0,0 +1,58 @@ +package aws + +import ( + "testing" + + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDataSourceAwsAmiIds_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsAmiIdsConfig_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAmiDataSourceID("data.aws_ami_ids.ubuntu"), + ), + }, + }, + }) +} + +func TestAccDataSourceAwsAmiIds_empty(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsAmiIdsConfig_empty, + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAmiDataSourceID("data.aws_ami_ids.empty"), + resource.TestCheckResourceAttr("data.aws_ami_ids.empty", "ids.#", "0"), + ), + }, + }, + }) +} + +const testAccDataSourceAwsAmiIdsConfig_basic = ` +data "aws_ami_ids" "ubuntu" { + owners = ["099720109477"] + + filter { + name = "name" + values = ["ubuntu/images/ubuntu-*-*-amd64-server-*"] + } +} +` + +const testAccDataSourceAwsAmiIdsConfig_empty = ` +data 
"aws_ami_ids" "empty" { + filter { + name = "name" + values = [] + } +} +` diff --git a/builtin/providers/aws/data_source_aws_db_instance.go b/builtin/providers/aws/data_source_aws_db_instance.go index 8adec4127..753319a84 100644 --- a/builtin/providers/aws/data_source_aws_db_instance.go +++ b/builtin/providers/aws/data_source_aws_db_instance.go @@ -188,6 +188,11 @@ func dataSourceAwsDbInstance() *schema.Resource { Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, }, + + "replicate_source_db": { + Type: schema.TypeString, + Computed: true, + }, }, } } @@ -271,6 +276,7 @@ func dataSourceAwsDbInstanceRead(d *schema.ResourceData, meta interface{}) error d.Set("storage_encrypted", dbInstance.StorageEncrypted) d.Set("storage_type", dbInstance.StorageType) d.Set("timezone", dbInstance.Timezone) + d.Set("replicate_source_db", dbInstance.ReadReplicaSourceDBInstanceIdentifier) var vpcSecurityGroups []string for _, v := range dbInstance.VpcSecurityGroups { diff --git a/builtin/providers/aws/data_source_aws_ebs_snapshot_ids.go b/builtin/providers/aws/data_source_aws_ebs_snapshot_ids.go new file mode 100644 index 000000000..57dc20e9c --- /dev/null +++ b/builtin/providers/aws/data_source_aws_ebs_snapshot_ids.go @@ -0,0 +1,78 @@ +package aws + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/hashcode" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceAwsEbsSnapshotIds() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsEbsSnapshotIdsRead, + + Schema: map[string]*schema.Schema{ + "filter": dataSourceFiltersSchema(), + "owners": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "restorable_by_user_ids": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "tags": dataSourceTagsSchema(), + "ids": &schema.Schema{ + Type: schema.TypeSet, + Computed: 
true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + }, + } +} + +func dataSourceAwsEbsSnapshotIdsRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + restorableUsers, restorableUsersOk := d.GetOk("restorable_by_user_ids") + filters, filtersOk := d.GetOk("filter") + owners, ownersOk := d.GetOk("owners") + + if !restorableUsersOk && !filtersOk && !ownersOk { + return fmt.Errorf("One of filters, restorable_by_user_ids, or owners must be assigned") + } + + params := &ec2.DescribeSnapshotsInput{} + + if restorableUsersOk { + params.RestorableByUserIds = expandStringList(restorableUsers.([]interface{})) + } + if filtersOk { + params.Filters = buildAwsDataSourceFilters(filters.(*schema.Set)) + } + if ownersOk { + params.OwnerIds = expandStringList(owners.([]interface{})) + } + + resp, err := conn.DescribeSnapshots(params) + if err != nil { + return err + } + + snapshotIds := make([]string, 0) + + for _, snapshot := range resp.Snapshots { + snapshotIds = append(snapshotIds, *snapshot.SnapshotId) + } + + d.SetId(fmt.Sprintf("%d", hashcode.String(params.String()))) + d.Set("ids", snapshotIds) + + return nil +} diff --git a/builtin/providers/aws/data_source_aws_ebs_snapshot_ids_test.go b/builtin/providers/aws/data_source_aws_ebs_snapshot_ids_test.go new file mode 100644 index 000000000..869152ac4 --- /dev/null +++ b/builtin/providers/aws/data_source_aws_ebs_snapshot_ids_test.go @@ -0,0 +1,59 @@ +package aws + +import ( + "testing" + + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccDataSourceAwsEbsSnapshotIds_basic(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsEbsSnapshotIdsConfig_basic, + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ebs_snapshot_ids.test"), + ), + }, + }, 
+ }) +} + +func TestAccDataSourceAwsEbsSnapshotIds_empty(t *testing.T) { + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsEbsSnapshotIdsConfig_empty, + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsEbsSnapshotDataSourceID("data.aws_ebs_snapshot_ids.empty"), + resource.TestCheckResourceAttr("data.aws_ebs_snapshot_ids.empty", "ids.#", "0"), + ), + }, + }, + }) +} + +const testAccDataSourceAwsEbsSnapshotIdsConfig_basic = ` +resource "aws_ebs_volume" "test" { + availability_zone = "us-west-2a" + size = 40 +} + +resource "aws_ebs_snapshot" "test" { + volume_id = "${aws_ebs_volume.test.id}" +} + +data "aws_ebs_snapshot_ids" "test" { + owners = ["self"] +} +` + +const testAccDataSourceAwsEbsSnapshotIdsConfig_empty = ` +data "aws_ebs_snapshot_ids" "empty" { + owners = ["000000000000"] +} +` diff --git a/builtin/providers/aws/data_source_aws_ebs_snapshot_test.go b/builtin/providers/aws/data_source_aws_ebs_snapshot_test.go index 682e758cc..58a20165a 100644 --- a/builtin/providers/aws/data_source_aws_ebs_snapshot_test.go +++ b/builtin/providers/aws/data_source_aws_ebs_snapshot_test.go @@ -44,7 +44,7 @@ func testAccCheckAwsEbsSnapshotDataSourceID(n string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { - return fmt.Errorf("Can't find Volume data source: %s", n) + return fmt.Errorf("Can't find snapshot data source: %s", n) } if rs.Primary.ID == "" { diff --git a/builtin/providers/aws/data_source_aws_subnet.go b/builtin/providers/aws/data_source_aws_subnet.go index ddd178a30..188a09dd2 100644 --- a/builtin/providers/aws/data_source_aws_subnet.go +++ b/builtin/providers/aws/data_source_aws_subnet.go @@ -14,19 +14,25 @@ func dataSourceAwsSubnet() *schema.Resource { Read: dataSourceAwsSubnetRead, Schema: map[string]*schema.Schema{ - "availability_zone": &schema.Schema{ + 
"availability_zone": { Type: schema.TypeString, Optional: true, Computed: true, }, - "cidr_block": &schema.Schema{ + "cidr_block": { Type: schema.TypeString, Optional: true, Computed: true, }, - "default_for_az": &schema.Schema{ + "ipv6_cidr_block": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + + "default_for_az": { Type: schema.TypeBool, Optional: true, Computed: true, @@ -34,13 +40,13 @@ func dataSourceAwsSubnet() *schema.Resource { "filter": ec2CustomFiltersSchema(), - "id": &schema.Schema{ + "id": { Type: schema.TypeString, Optional: true, Computed: true, }, - "state": &schema.Schema{ + "state": { Type: schema.TypeString, Optional: true, Computed: true, @@ -48,11 +54,26 @@ func dataSourceAwsSubnet() *schema.Resource { "tags": tagsSchemaComputed(), - "vpc_id": &schema.Schema{ + "vpc_id": { Type: schema.TypeString, Optional: true, Computed: true, }, + + "assign_ipv6_address_on_creation": { + Type: schema.TypeBool, + Computed: true, + }, + + "map_public_ip_on_launch": { + Type: schema.TypeBool, + Computed: true, + }, + + "ipv6_cidr_block_association_id": { + Type: schema.TypeString, + Computed: true, + }, }, } } @@ -76,15 +97,22 @@ func dataSourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error { defaultForAzStr = "true" } - req.Filters = buildEC2AttributeFilterList( - map[string]string{ - "availabilityZone": d.Get("availability_zone").(string), - "cidrBlock": d.Get("cidr_block").(string), - "defaultForAz": defaultForAzStr, - "state": d.Get("state").(string), - "vpc-id": d.Get("vpc_id").(string), - }, - ) + filters := map[string]string{ + "availabilityZone": d.Get("availability_zone").(string), + "defaultForAz": defaultForAzStr, + "state": d.Get("state").(string), + "vpc-id": d.Get("vpc_id").(string), + } + + if v, ok := d.GetOk("cidr_block"); ok { + filters["cidrBlock"] = v.(string) + } + + if v, ok := d.GetOk("ipv6_cidr_block"); ok { + filters["ipv6-cidr-block-association.ipv6-cidr-block"] = v.(string) + } + + req.Filters = 
buildEC2AttributeFilterList(filters) req.Filters = append(req.Filters, buildEC2TagFilterList( tagsFromMap(d.Get("tags").(map[string]interface{})), )...) @@ -118,6 +146,15 @@ func dataSourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error { d.Set("default_for_az", subnet.DefaultForAz) d.Set("state", subnet.State) d.Set("tags", tagsToMap(subnet.Tags)) + d.Set("assign_ipv6_address_on_creation", subnet.AssignIpv6AddressOnCreation) + d.Set("map_public_ip_on_launch", subnet.MapPublicIpOnLaunch) + + for _, a := range subnet.Ipv6CidrBlockAssociationSet { + if *a.Ipv6CidrBlockState.State == "associated" { //we can only ever have 1 IPv6 block associated at once + d.Set("ipv6_cidr_block_association_id", a.AssociationId) + d.Set("ipv6_cidr_block", a.Ipv6CidrBlock) + } + } return nil } diff --git a/builtin/providers/aws/data_source_aws_subnet_test.go b/builtin/providers/aws/data_source_aws_subnet_test.go index c6234ac39..3c9c5ed6f 100644 --- a/builtin/providers/aws/data_source_aws_subnet_test.go +++ b/builtin/providers/aws/data_source_aws_subnet_test.go @@ -9,7 +9,7 @@ import ( "github.com/hashicorp/terraform/terraform" ) -func TestAccDataSourceAwsSubnet_basic(t *testing.T) { +func TestAccDataSourceAwsSubnet(t *testing.T) { rInt := acctest.RandIntRange(0, 256) resource.Test(t, resource.TestCase{ @@ -17,7 +17,7 @@ func TestAccDataSourceAwsSubnet_basic(t *testing.T) { Providers: testAccProviders, CheckDestroy: testAccCheckVpcDestroy, Steps: []resource.TestStep{ - resource.TestStep{ + { Config: testAccDataSourceAwsSubnetConfig(rInt), Check: resource.ComposeTestCheckFunc( testAccDataSourceAwsSubnetCheck("data.aws_subnet.by_id", rInt), @@ -31,6 +31,48 @@ func TestAccDataSourceAwsSubnet_basic(t *testing.T) { }) } +func TestAccDataSourceAwsSubnetIpv6ByIpv6Filter(t *testing.T) { + rInt := acctest.RandIntRange(0, 256) + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: 
testAccDataSourceAwsSubnetConfigIpv6(rInt), + }, + { + Config: testAccDataSourceAwsSubnetConfigIpv6WithDataSourceFilter(rInt), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet( + "data.aws_subnet.by_ipv6_cidr", "ipv6_cidr_block_association_id"), + resource.TestCheckResourceAttrSet( + "data.aws_subnet.by_ipv6_cidr", "ipv6_cidr_block"), + ), + }, + }, + }) +} + +func TestAccDataSourceAwsSubnetIpv6ByIpv6CidrBlock(t *testing.T) { + rInt := acctest.RandIntRange(0, 256) + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsSubnetConfigIpv6(rInt), + }, + { + Config: testAccDataSourceAwsSubnetConfigIpv6WithDataSourceIpv6CidrBlock(rInt), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrSet( + "data.aws_subnet.by_ipv6_cidr", "ipv6_cidr_block_association_id"), + ), + }, + }, + }) +} + func testAccDataSourceAwsSubnetCheck(name string, rInt int) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[name] @@ -103,6 +145,7 @@ func testAccDataSourceAwsSubnetConfig(rInt int) string { } } + data "aws_subnet" "by_id" { id = "${aws_subnet.test.id}" } @@ -129,3 +172,86 @@ func testAccDataSourceAwsSubnetConfig(rInt int) string { } `, rInt, rInt, rInt) } + +func testAccDataSourceAwsSubnetConfigIpv6(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "172.%d.0.0/16" + assign_generated_ipv6_cidr_block = true + + tags { + Name = "terraform-testacc-subnet-data-source-ipv6" + } +} + +resource "aws_subnet" "test" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "172.%d.123.0/24" + availability_zone = "us-west-2a" + ipv6_cidr_block = "${cidrsubnet(aws_vpc.test.ipv6_cidr_block, 8, 1)}" + + tags { + Name = "terraform-testacc-subnet-data-sourceipv6-%d" + } +} +`, rInt, rInt, rInt) +} + +func 
testAccDataSourceAwsSubnetConfigIpv6WithDataSourceFilter(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "172.%d.0.0/16" + assign_generated_ipv6_cidr_block = true + + tags { + Name = "terraform-testacc-subnet-data-source-ipv6" + } +} + +resource "aws_subnet" "test" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "172.%d.123.0/24" + availability_zone = "us-west-2a" + ipv6_cidr_block = "${cidrsubnet(aws_vpc.test.ipv6_cidr_block, 8, 1)}" + + tags { + Name = "terraform-testacc-subnet-data-sourceipv6-%d" + } +} + +data "aws_subnet" "by_ipv6_cidr" { + filter { + name = "ipv6-cidr-block-association.ipv6-cidr-block" + values = ["${aws_subnet.test.ipv6_cidr_block}"] + } +} +`, rInt, rInt, rInt) +} + +func testAccDataSourceAwsSubnetConfigIpv6WithDataSourceIpv6CidrBlock(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "172.%d.0.0/16" + assign_generated_ipv6_cidr_block = true + + tags { + Name = "terraform-testacc-subnet-data-source-ipv6" + } +} + +resource "aws_subnet" "test" { + vpc_id = "${aws_vpc.test.id}" + cidr_block = "172.%d.123.0/24" + availability_zone = "us-west-2a" + ipv6_cidr_block = "${cidrsubnet(aws_vpc.test.ipv6_cidr_block, 8, 1)}" + + tags { + Name = "terraform-testacc-subnet-data-sourceipv6-%d" + } +} + +data "aws_subnet" "by_ipv6_cidr" { + ipv6_cidr_block = "${aws_subnet.test.ipv6_cidr_block}" +} +`, rInt, rInt, rInt) +} diff --git a/builtin/providers/aws/ec2_filters.go b/builtin/providers/aws/ec2_filters.go index 4263d6efa..743d28224 100644 --- a/builtin/providers/aws/ec2_filters.go +++ b/builtin/providers/aws/ec2_filters.go @@ -111,11 +111,11 @@ func ec2CustomFiltersSchema() *schema.Schema { Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "name": &schema.Schema{ + "name": { Type: schema.TypeString, Required: true, }, - "values": &schema.Schema{ + "values": { Type: schema.TypeSet, Required: true, Elem: &schema.Schema{ diff --git 
a/builtin/providers/aws/import_aws_cognito_identity_pool_test.go b/builtin/providers/aws/import_aws_cognito_identity_pool_test.go new file mode 100644 index 000000000..bdd2caec8 --- /dev/null +++ b/builtin/providers/aws/import_aws_cognito_identity_pool_test.go @@ -0,0 +1,30 @@ +package aws + +import ( + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccAWSCognitoIdentityPool_importBasic(t *testing.T) { + resourceName := "aws_cognito_identity_pool.main" + rName := acctest.RandString(10) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAPIGatewayAccountDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCognitoIdentityPoolConfig_basic(rName), + }, + + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} diff --git a/builtin/providers/aws/import_aws_security_group.go b/builtin/providers/aws/import_aws_security_group.go index d802c75e2..d76529058 100644 --- a/builtin/providers/aws/import_aws_security_group.go +++ b/builtin/providers/aws/import_aws_security_group.go @@ -50,36 +50,67 @@ func resourceAwsSecurityGroupImportState( } func resourceAwsSecurityGroupImportStatePerm(sg *ec2.SecurityGroup, ruleType string, perm *ec2.IpPermission) ([]*schema.ResourceData, error) { + /* + Create a separate Security Group Rule for: + * The collection of IpRanges (cidr_blocks) + * The collection of Ipv6Ranges (ipv6_cidr_blocks) + * Each individual UserIdGroupPair (source_security_group_id) + + If, for example, a security group has rules for: + * 2 IpRanges + * 2 Ipv6Ranges + * 2 UserIdGroupPairs + + This would generate 4 security group rules: + * 1 for the collection of IpRanges + * 1 for the collection of Ipv6Ranges + * 1 for the first UserIdGroupPair + * 1 for the second UserIdGroupPair + */ var result []*schema.ResourceData - if 
len(perm.UserIdGroupPairs) == 0 { - r, err := resourceAwsSecurityGroupImportStatePermPair(sg, ruleType, perm) + if perm.IpRanges != nil { + p := &ec2.IpPermission{ + FromPort: perm.FromPort, + IpProtocol: perm.IpProtocol, + PrefixListIds: perm.PrefixListIds, + ToPort: perm.ToPort, + IpRanges: perm.IpRanges, + } + + r, err := resourceAwsSecurityGroupImportStatePermPair(sg, ruleType, p) if err != nil { return nil, err } result = append(result, r) - } else { - // If the rule contained more than one source security group, this - // will iterate over them and create one rule for each - // source security group. + } + + if perm.Ipv6Ranges != nil { + p := &ec2.IpPermission{ + FromPort: perm.FromPort, + IpProtocol: perm.IpProtocol, + PrefixListIds: perm.PrefixListIds, + ToPort: perm.ToPort, + Ipv6Ranges: perm.Ipv6Ranges, + } + + r, err := resourceAwsSecurityGroupImportStatePermPair(sg, ruleType, p) + if err != nil { + return nil, err + } + result = append(result, r) + } + + if len(perm.UserIdGroupPairs) > 0 { for _, pair := range perm.UserIdGroupPairs { p := &ec2.IpPermission{ - FromPort: perm.FromPort, - IpProtocol: perm.IpProtocol, - PrefixListIds: perm.PrefixListIds, - ToPort: perm.ToPort, - + FromPort: perm.FromPort, + IpProtocol: perm.IpProtocol, + PrefixListIds: perm.PrefixListIds, + ToPort: perm.ToPort, UserIdGroupPairs: []*ec2.UserIdGroupPair{pair}, } - if perm.Ipv6Ranges != nil { - p.Ipv6Ranges = perm.Ipv6Ranges - } - - if perm.IpRanges != nil { - p.IpRanges = perm.IpRanges - } - r, err := resourceAwsSecurityGroupImportStatePermPair(sg, ruleType, p) if err != nil { return nil, err diff --git a/builtin/providers/aws/import_aws_security_group_test.go b/builtin/providers/aws/import_aws_security_group_test.go index 4b0597670..d91b1027a 100644 --- a/builtin/providers/aws/import_aws_security_group_test.go +++ b/builtin/providers/aws/import_aws_security_group_test.go @@ -101,3 +101,59 @@ func TestAccAWSSecurityGroup_importSourceSecurityGroup(t *testing.T) { }, }) } + 
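The rule-splitting described in the comment above (one imported rule for all IpRanges, one for all Ipv6Ranges, and one per UserIdGroupPair) can be sketched without the AWS SDK. The types and helper below are illustrative stand-ins, not provider code:

```go
package main

import "fmt"

// ipPermission is a hypothetical stand-in for ec2.IpPermission,
// used only to illustrate the grouping.
type ipPermission struct {
	ipRanges         []string // cidr_blocks
	ipv6Ranges       []string // ipv6_cidr_blocks
	userIDGroupPairs []string // one source_security_group_id per pair
}

// countImportedRules mirrors the splitting performed during import:
// the IPv4 and IPv6 CIDR collections each yield a single rule, while
// each source security group yields its own rule.
func countImportedRules(p ipPermission) int {
	n := 0
	if len(p.ipRanges) > 0 {
		n++
	}
	if len(p.ipv6Ranges) > 0 {
		n++
	}
	return n + len(p.userIDGroupPairs)
}

func main() {
	p := ipPermission{
		ipRanges:         []string{"10.0.0.0/16", "10.1.0.0/16"},
		ipv6Ranges:       []string{"2001:db8::/64", "2001:db8:1::/64"},
		userIDGroupPairs: []string{"sg-11111111", "sg-22222222"},
	}
	// Two CIDR collections plus two group pairs -> 4 rules.
	fmt.Println(countImportedRules(p))
}
```

This matches the worked example in the comment: 2 IpRanges, 2 Ipv6Ranges, and 2 UserIdGroupPairs produce 4 imported rules.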
+func TestAccAWSSecurityGroup_importIPRangeAndSecurityGroupWithSameRules(t *testing.T) { + checkFn := func(s []*terraform.InstanceState) error { + // Expect 4: group, 3 rules + if len(s) != 4 { + return fmt.Errorf("expected 4 states: %#v", s) + } + + return nil + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSecurityGroupConfig_importIPRangeAndSecurityGroupWithSameRules, + }, + + { + ResourceName: "aws_security_group.test_group_1", + ImportState: true, + ImportStateCheck: checkFn, + }, + }, + }) +} + +func TestAccAWSSecurityGroup_importIPRangesWithSameRules(t *testing.T) { + checkFn := func(s []*terraform.InstanceState) error { + // Expect 3: group, 2 rules + if len(s) != 3 { + return fmt.Errorf("expected 3 states: %#v", s) + } + + return nil + } + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSecurityGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSecurityGroupConfig_importIPRangesWithSameRules, + }, + + { + ResourceName: "aws_security_group.test_group_1", + ImportState: true, + ImportStateCheck: checkFn, + }, + }, + }) +} diff --git a/builtin/providers/aws/provider.go b/builtin/providers/aws/provider.go index 889d9afcf..6f847fb26 100644 --- a/builtin/providers/aws/provider.go +++ b/builtin/providers/aws/provider.go @@ -163,6 +163,7 @@ func Provider() terraform.ResourceProvider { "aws_alb": dataSourceAwsAlb(), "aws_alb_listener": dataSourceAwsAlbListener(), "aws_ami": dataSourceAwsAmi(), + "aws_ami_ids": dataSourceAwsAmiIds(), "aws_autoscaling_groups": dataSourceAwsAutoscalingGroups(), "aws_availability_zone": dataSourceAwsAvailabilityZone(), "aws_availability_zones": dataSourceAwsAvailabilityZones(), @@ -172,6 +173,7 @@ func Provider() terraform.ResourceProvider {
"aws_cloudformation_stack": dataSourceAwsCloudFormationStack(), "aws_db_instance": dataSourceAwsDbInstance(), "aws_ebs_snapshot": dataSourceAwsEbsSnapshot(), + "aws_ebs_snapshot_ids": dataSourceAwsEbsSnapshotIds(), "aws_ebs_volume": dataSourceAwsEbsVolume(), "aws_ecs_cluster": dataSourceAwsEcsCluster(), "aws_ecs_container_definition": dataSourceAwsEcsContainerDefinition(), @@ -258,6 +260,7 @@ func Provider() terraform.ResourceProvider { "aws_config_configuration_recorder": resourceAwsConfigConfigurationRecorder(), "aws_config_configuration_recorder_status": resourceAwsConfigConfigurationRecorderStatus(), "aws_config_delivery_channel": resourceAwsConfigDeliveryChannel(), + "aws_cognito_identity_pool": resourceAwsCognitoIdentityPool(), "aws_autoscaling_lifecycle_hook": resourceAwsAutoscalingLifecycleHook(), "aws_cloudwatch_metric_alarm": resourceAwsCloudWatchMetricAlarm(), "aws_codedeploy_app": resourceAwsCodeDeployApp(), @@ -365,6 +368,7 @@ func Provider() terraform.ResourceProvider { "aws_default_route_table": resourceAwsDefaultRouteTable(), "aws_network_acl_rule": resourceAwsNetworkAclRule(), "aws_network_interface": resourceAwsNetworkInterface(), + "aws_network_interface_attachment": resourceAwsNetworkInterfaceAttachment(), "aws_opsworks_application": resourceAwsOpsworksApplication(), "aws_opsworks_stack": resourceAwsOpsworksStack(), "aws_opsworks_java_app_layer": resourceAwsOpsworksJavaAppLayer(), diff --git a/builtin/providers/aws/resource_aws_api_gateway_deployment.go b/builtin/providers/aws/resource_aws_api_gateway_deployment.go index 494776288..f4c1daf20 100644 --- a/builtin/providers/aws/resource_aws_api_gateway_deployment.go +++ b/builtin/providers/aws/resource_aws_api_gateway_deployment.go @@ -54,6 +54,16 @@ func resourceAwsApiGatewayDeployment() *schema.Resource { Type: schema.TypeString, Computed: true, }, + + "invoke_url": { + Type: schema.TypeString, + Computed: true, + }, + + "execution_arn": { + Type: schema.TypeString, + Computed: true, + }, }, } } 
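The two new computed attributes on `aws_api_gateway_deployment` follow API Gateway's documented stage URL and `execute-api` ARN shapes. A minimal sketch of the values they are expected to carry follows; the real `buildApiGatewayInvokeURL`/`buildApiGatewayExecutionARN` helpers live elsewhere in the provider, so these function bodies are assumptions:

```go
package main

import "fmt"

// invokeURL sketches the stage invoke URL format:
// https://{restApiId}.execute-api.{region}.amazonaws.com/{stageName}
func invokeURL(restAPIID, region, stageName string) string {
	return fmt.Sprintf("https://%s.execute-api.%s.amazonaws.com/%s", restAPIID, region, stageName)
}

// executionARN sketches the execute-api ARN prefix; the deployment's
// stage name is appended to it, as in the Read function below.
func executionARN(restAPIID, region, accountID string) string {
	return fmt.Sprintf("arn:aws:execute-api:%s:%s:%s", region, accountID, restAPIID)
}

func main() {
	fmt.Println(invokeURL("abc123", "us-west-2", "prod"))
	fmt.Println(executionARN("abc123", "us-west-2", "123456789012") + "/prod")
}
```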
@@ -90,8 +100,9 @@ func resourceAwsApiGatewayDeploymentRead(d *schema.ResourceData, meta interface{ conn := meta.(*AWSClient).apigateway log.Printf("[DEBUG] Reading API Gateway Deployment %s", d.Id()) + restApiId := d.Get("rest_api_id").(string) out, err := conn.GetDeployment(&apigateway.GetDeploymentInput{ - RestApiId: aws.String(d.Get("rest_api_id").(string)), + RestApiId: aws.String(restApiId), DeploymentId: aws.String(d.Id()), }) if err != nil { @@ -104,6 +115,18 @@ func resourceAwsApiGatewayDeploymentRead(d *schema.ResourceData, meta interface{ log.Printf("[DEBUG] Received API Gateway Deployment: %s", out) d.Set("description", out.Description) + region := meta.(*AWSClient).region + stageName := d.Get("stage_name").(string) + + d.Set("invoke_url", buildApiGatewayInvokeURL(restApiId, region, stageName)) + + accountId := meta.(*AWSClient).accountid + arn, err := buildApiGatewayExecutionARN(restApiId, region, accountId) + if err != nil { + return err + } + d.Set("execution_arn", arn+"/"+stageName) + if err := d.Set("created_date", out.CreatedDate.Format(time.RFC3339)); err != nil { log.Printf("[DEBUG] Error setting created_date: %s", err) } diff --git a/builtin/providers/aws/resource_aws_cognito_identity_pool.go b/builtin/providers/aws/resource_aws_cognito_identity_pool.go new file mode 100644 index 000000000..b85472cf9 --- /dev/null +++ b/builtin/providers/aws/resource_aws_cognito_identity_pool.go @@ -0,0 +1,238 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/cognitoidentity" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsCognitoIdentityPool() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsCognitoIdentityPoolCreate, + Read: resourceAwsCognitoIdentityPoolRead, + Update: resourceAwsCognitoIdentityPoolUpdate, + Delete: resourceAwsCognitoIdentityPoolDelete, 
+ Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "identity_pool_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateCognitoIdentityPoolName, + }, + + "cognito_identity_providers": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "client_id": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateCognitoIdentityProvidersClientId, + }, + "provider_name": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateCognitoIdentityProvidersProviderName, + }, + "server_side_token_check": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + }, + }, + }, + + "developer_provider_name": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, // Forcing a new resource since it cannot be edited afterwards + ValidateFunc: validateCognitoProviderDeveloperName, + }, + + "allow_unauthenticated_identities": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + + "openid_connect_provider_arns": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validateArn, + }, + }, + + "saml_provider_arns": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validateArn, + }, + }, + + "supported_login_providers": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validateCognitoSupportedLoginProviders, + }, + }, + }, + } +} + +func resourceAwsCognitoIdentityPoolCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cognitoconn + log.Print("[DEBUG] Creating Cognito Identity Pool") + + params := &cognitoidentity.CreateIdentityPoolInput{ + IdentityPoolName: aws.String(d.Get("identity_pool_name").(string)), + AllowUnauthenticatedIdentities: 
aws.Bool(d.Get("allow_unauthenticated_identities").(bool)), + } + + if v, ok := d.GetOk("developer_provider_name"); ok { + params.DeveloperProviderName = aws.String(v.(string)) + } + + if v, ok := d.GetOk("supported_login_providers"); ok { + params.SupportedLoginProviders = expandCognitoSupportedLoginProviders(v.(map[string]interface{})) + } + + if v, ok := d.GetOk("cognito_identity_providers"); ok { + params.CognitoIdentityProviders = expandCognitoIdentityProviders(v.(*schema.Set)) + } + + if v, ok := d.GetOk("saml_provider_arns"); ok { + params.SamlProviderARNs = expandStringList(v.([]interface{})) + } + + if v, ok := d.GetOk("openid_connect_provider_arns"); ok { + params.OpenIdConnectProviderARNs = expandStringList(v.([]interface{})) + } + + entity, err := conn.CreateIdentityPool(params) + if err != nil { + return fmt.Errorf("Error creating Cognito Identity Pool: %s", err) + } + + d.SetId(*entity.IdentityPoolId) + + return resourceAwsCognitoIdentityPoolRead(d, meta) +} + +func resourceAwsCognitoIdentityPoolRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cognitoconn + log.Printf("[DEBUG] Reading Cognito Identity Pool: %s", d.Id()) + + ip, err := conn.DescribeIdentityPool(&cognitoidentity.DescribeIdentityPoolInput{ + IdentityPoolId: aws.String(d.Id()), + }) + if err != nil { + if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "ResourceNotFoundException" { + d.SetId("") + return nil + } + return err + } + + d.Set("identity_pool_name", ip.IdentityPoolName) + d.Set("allow_unauthenticated_identities", ip.AllowUnauthenticatedIdentities) + d.Set("developer_provider_name", ip.DeveloperProviderName) + + if ip.CognitoIdentityProviders != nil { + if err := d.Set("cognito_identity_providers", flattenCognitoIdentityProviders(ip.CognitoIdentityProviders)); err != nil { + return fmt.Errorf("[DEBUG] Error setting cognito_identity_providers error: %#v", err) + } + } + + if ip.OpenIdConnectProviderARNs != nil { + if err := 
d.Set("openid_connect_provider_arns", flattenStringList(ip.OpenIdConnectProviderARNs)); err != nil { + return fmt.Errorf("[DEBUG] Error setting openid_connect_provider_arns error: %#v", err) + } + } + + if ip.SamlProviderARNs != nil { + if err := d.Set("saml_provider_arns", flattenStringList(ip.SamlProviderARNs)); err != nil { + return fmt.Errorf("[DEBUG] Error setting saml_provider_arns error: %#v", err) + } + } + + if ip.SupportedLoginProviders != nil { + if err := d.Set("supported_login_providers", flattenCognitoSupportedLoginProviders(ip.SupportedLoginProviders)); err != nil { + return fmt.Errorf("[DEBUG] Error setting supported_login_providers error: %#v", err) + } + } + + return nil +} + +func resourceAwsCognitoIdentityPoolUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cognitoconn + log.Print("[DEBUG] Updating Cognito Identity Pool") + + params := &cognitoidentity.IdentityPool{ + IdentityPoolId: aws.String(d.Id()), + AllowUnauthenticatedIdentities: aws.Bool(d.Get("allow_unauthenticated_identities").(bool)), + IdentityPoolName: aws.String(d.Get("identity_pool_name").(string)), + } + + if d.HasChange("developer_provider_name") { + params.DeveloperProviderName = aws.String(d.Get("developer_provider_name").(string)) + } + + if d.HasChange("cognito_identity_providers") { + params.CognitoIdentityProviders = expandCognitoIdentityProviders(d.Get("cognito_identity_providers").(*schema.Set)) + } + + if d.HasChange("supported_login_providers") { + params.SupportedLoginProviders = expandCognitoSupportedLoginProviders(d.Get("supported_login_providers").(map[string]interface{})) + } + + if d.HasChange("openid_connect_provider_arns") { + params.OpenIdConnectProviderARNs = expandStringList(d.Get("openid_connect_provider_arns").([]interface{})) + } + + if d.HasChange("saml_provider_arns") { + params.SamlProviderARNs = expandStringList(d.Get("saml_provider_arns").([]interface{})) + } + + _, err := conn.UpdateIdentityPool(params) + if err 
!= nil { + return fmt.Errorf("Error updating Cognito Identity Pool: %s", err) + } + + return resourceAwsCognitoIdentityPoolRead(d, meta) +} + +func resourceAwsCognitoIdentityPoolDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).cognitoconn + log.Printf("[DEBUG] Deleting Cognito Identity Pool: %s", d.Id()) + + return resource.Retry(5*time.Minute, func() *resource.RetryError { + _, err := conn.DeleteIdentityPool(&cognitoidentity.DeleteIdentityPoolInput{ + IdentityPoolId: aws.String(d.Id()), + }) + + if err == nil { + return nil + } + + return resource.NonRetryableError(err) + }) +} diff --git a/builtin/providers/aws/resource_aws_cognito_identity_pool_test.go b/builtin/providers/aws/resource_aws_cognito_identity_pool_test.go new file mode 100644 index 000000000..6ee0b1955 --- /dev/null +++ b/builtin/providers/aws/resource_aws_cognito_identity_pool_test.go @@ -0,0 +1,371 @@ +package aws + +import ( + "errors" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/cognitoidentity" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccAWSCognitoIdentityPool_basic(t *testing.T) { + name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + updatedName := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCognitoIdentityPoolConfig_basic(name), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name",
fmt.Sprintf("identity pool %s", name)), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "allow_unauthenticated_identities", "false"), + ), + }, + { + Config: testAccAWSCognitoIdentityPoolConfig_basic(updatedName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", updatedName)), + ), + }, + }, + }) +} + +func TestAccAWSCognitoIdentityPool_supportedLoginProviders(t *testing.T) { + name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCognitoIdentityPoolConfig_supportedLoginProviders(name), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "supported_login_providers.graph.facebook.com", "7346241598935555"), + ), + }, + { + Config: testAccAWSCognitoIdentityPoolConfig_supportedLoginProvidersModified(name), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "supported_login_providers.graph.facebook.com", "7346241598935552"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "supported_login_providers.accounts.google.com", 
"123456789012.apps.googleusercontent.com"), + ), + }, + { + Config: testAccAWSCognitoIdentityPoolConfig_basic(name), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)), + ), + }, + }, + }) +} + +func TestAccAWSCognitoIdentityPool_openidConnectProviderArns(t *testing.T) { + name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCognitoIdentityPoolConfig_openidConnectProviderArns(name), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "openid_connect_provider_arns.#", "1"), + ), + }, + { + Config: testAccAWSCognitoIdentityPoolConfig_openidConnectProviderArnsModified(name), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "openid_connect_provider_arns.#", "2"), + ), + }, + { + Config: testAccAWSCognitoIdentityPoolConfig_basic(name), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", 
name)), + ), + }, + }, + }) +} + +func TestAccAWSCognitoIdentityPool_samlProviderArns(t *testing.T) { + name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCognitoIdentityPoolConfig_samlProviderArns(name), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "saml_provider_arns.#", "1"), + ), + }, + { + Config: testAccAWSCognitoIdentityPoolConfig_samlProviderArnsModified(name), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "saml_provider_arns.#", "1"), + ), + }, + { + Config: testAccAWSCognitoIdentityPoolConfig_basic(name), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)), + resource.TestCheckNoResourceAttr("aws_cognito_identity_pool.main", "saml_provider_arns.#"), + ), + }, + }, + }) +} + +func TestAccAWSCognitoIdentityPool_cognitoIdentityProviders(t *testing.T) { + name := fmt.Sprintf("%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + 
CheckDestroy: testAccCheckAWSCognitoIdentityPoolDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCognitoIdentityPoolConfig_cognitoIdentityProviders(name), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.66456389.client_id", "7lhlkkfbfb4q5kpp90urffao"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.66456389.provider_name", "cognito-idp.us-east-1.amazonaws.com/us-east-1_Zr231apJu"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.66456389.server_side_token_check", "false"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3571192419.client_id", "7lhlkkfbfb4q5kpp90urffao"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3571192419.provider_name", "cognito-idp.us-east-1.amazonaws.com/us-east-1_Ab129faBb"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3571192419.server_side_token_check", "false"), + ), + }, + { + Config: testAccAWSCognitoIdentityPoolConfig_cognitoIdentityProvidersModified(name), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3661724441.client_id", "6lhlkkfbfb4q5kpp90urffae"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3661724441.provider_name", 
"cognito-idp.us-east-1.amazonaws.com/us-east-1_Zr231apJu"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "cognito_identity_providers.3661724441.server_side_token_check", "false"), + ), + }, + { + Config: testAccAWSCognitoIdentityPoolConfig_basic(name), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSCognitoIdentityPoolExists("aws_cognito_identity_pool.main"), + resource.TestCheckResourceAttr("aws_cognito_identity_pool.main", "identity_pool_name", fmt.Sprintf("identity pool %s", name)), + ), + }, + }, + }) +} + +func testAccCheckAWSCognitoIdentityPoolExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return errors.New("No Cognito Identity Pool ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).cognitoconn + + _, err := conn.DescribeIdentityPool(&cognitoidentity.DescribeIdentityPoolInput{ + IdentityPoolId: aws.String(rs.Primary.ID), + }) + + if err != nil { + return err + } + + return nil + } +} + +func testAccCheckAWSCognitoIdentityPoolDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).cognitoconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_cognito_identity_pool" { + continue + } + + _, err := conn.DescribeIdentityPool(&cognitoidentity.DescribeIdentityPoolInput{ + IdentityPoolId: aws.String(rs.Primary.ID), + }) + + if err != nil { + if wserr, ok := err.(awserr.Error); ok && wserr.Code() == "ResourceNotFoundException" { + return nil + } + return err + } + } + + return nil +} + +func testAccAWSCognitoIdentityPoolConfig_basic(name string) string { + return fmt.Sprintf(` +resource "aws_cognito_identity_pool" "main" { + identity_pool_name = "identity pool %s" + allow_unauthenticated_identities = false + developer_provider_name = "my.developer" +} +`, name) +} + +func 
testAccAWSCognitoIdentityPoolConfig_supportedLoginProviders(name string) string { + return fmt.Sprintf(` +resource "aws_cognito_identity_pool" "main" { + identity_pool_name = "identity pool %s" + allow_unauthenticated_identities = false + + supported_login_providers { + "graph.facebook.com" = "7346241598935555" + } +} +`, name) +} + +func testAccAWSCognitoIdentityPoolConfig_supportedLoginProvidersModified(name string) string { + return fmt.Sprintf(` +resource "aws_cognito_identity_pool" "main" { + identity_pool_name = "identity pool %s" + allow_unauthenticated_identities = false + + supported_login_providers { + "graph.facebook.com" = "7346241598935552" + "accounts.google.com" = "123456789012.apps.googleusercontent.com" + } +} +`, name) +} + +func testAccAWSCognitoIdentityPoolConfig_openidConnectProviderArns(name string) string { + return fmt.Sprintf(` +resource "aws_cognito_identity_pool" "main" { + identity_pool_name = "identity pool %s" + allow_unauthenticated_identities = false + + openid_connect_provider_arns = ["arn:aws:iam::123456789012:oidc-provider/server.example.com"] +} +`, name) +} + +func testAccAWSCognitoIdentityPoolConfig_openidConnectProviderArnsModified(name string) string { + return fmt.Sprintf(` +resource "aws_cognito_identity_pool" "main" { + identity_pool_name = "identity pool %s" + allow_unauthenticated_identities = false + + openid_connect_provider_arns = ["arn:aws:iam::123456789012:oidc-provider/foo.example.com", "arn:aws:iam::123456789012:oidc-provider/bar.example.com"] +} +`, name) +} + +func testAccAWSCognitoIdentityPoolConfig_samlProviderArns(name string) string { + return fmt.Sprintf(` +resource "aws_iam_saml_provider" "default" { + name = "myprovider-%s" + saml_metadata_document = "${file("./test-fixtures/saml-metadata.xml")}" +} + +resource "aws_cognito_identity_pool" "main" { + identity_pool_name = "identity pool %s" + allow_unauthenticated_identities = false + + saml_provider_arns = ["${aws_iam_saml_provider.default.arn}"] +} +`, 
name, name) +} + +func testAccAWSCognitoIdentityPoolConfig_samlProviderArnsModified(name string) string { + return fmt.Sprintf(` +resource "aws_iam_saml_provider" "default" { + name = "default-%s" + saml_metadata_document = "${file("./test-fixtures/saml-metadata.xml")}" +} + +resource "aws_iam_saml_provider" "secondary" { + name = "secondary-%s" + saml_metadata_document = "${file("./test-fixtures/saml-metadata.xml")}" +} + +resource "aws_cognito_identity_pool" "main" { + identity_pool_name = "identity pool %s" + allow_unauthenticated_identities = false + + saml_provider_arns = ["${aws_iam_saml_provider.secondary.arn}"] +} +`, name, name, name) +} + +func testAccAWSCognitoIdentityPoolConfig_cognitoIdentityProviders(name string) string { + return fmt.Sprintf(` +resource "aws_cognito_identity_pool" "main" { + identity_pool_name = "identity pool %s" + allow_unauthenticated_identities = false + + cognito_identity_providers { + client_id = "7lhlkkfbfb4q5kpp90urffao" + provider_name = "cognito-idp.us-east-1.amazonaws.com/us-east-1_Ab129faBb" + server_side_token_check = false + } + + cognito_identity_providers { + client_id = "7lhlkkfbfb4q5kpp90urffao" + provider_name = "cognito-idp.us-east-1.amazonaws.com/us-east-1_Zr231apJu" + server_side_token_check = false + } +} +`, name) +} + +func testAccAWSCognitoIdentityPoolConfig_cognitoIdentityProvidersModified(name string) string { + return fmt.Sprintf(` +resource "aws_cognito_identity_pool" "main" { + identity_pool_name = "identity pool %s" + allow_unauthenticated_identities = false + + cognito_identity_providers { + client_id = "6lhlkkfbfb4q5kpp90urffae" + provider_name = "cognito-idp.us-east-1.amazonaws.com/us-east-1_Zr231apJu" + server_side_token_check = false + } +} +`, name) +} diff --git a/builtin/providers/aws/resource_aws_ecs_service_test.go b/builtin/providers/aws/resource_aws_ecs_service_test.go index f622d64b7..c3a603547 100644 --- a/builtin/providers/aws/resource_aws_ecs_service_test.go +++ 
b/builtin/providers/aws/resource_aws_ecs_service_test.go @@ -85,20 +85,21 @@ func TestParseTaskDefinition(t *testing.T) { } func TestAccAWSEcsServiceWithARN(t *testing.T) { + rInt := acctest.RandInt() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSEcsService, + Config: testAccAWSEcsService(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"), ), }, { - Config: testAccAWSEcsServiceModified, + Config: testAccAWSEcsServiceModified(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"), ), @@ -181,13 +182,14 @@ func TestAccAWSEcsService_withIamRole(t *testing.T) { } func TestAccAWSEcsService_withDeploymentValues(t *testing.T) { + rInt := acctest.RandInt() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSEcsServiceWithDeploymentValues, + Config: testAccAWSEcsServiceWithDeploymentValues(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"), resource.TestCheckResourceAttr( @@ -262,20 +264,21 @@ func TestAccAWSEcsService_withAlb(t *testing.T) { } func TestAccAWSEcsServiceWithPlacementStrategy(t *testing.T) { + rInt := acctest.RandInt() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSEcsService, + Config: testAccAWSEcsService(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"), resource.TestCheckResourceAttr("aws_ecs_service.mongo", "placement_strategy.#", "0"), ), }, { - Config: testAccAWSEcsServiceWithPlacementStrategy, + 
Config: testAccAWSEcsServiceWithPlacementStrategy(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"), resource.TestCheckResourceAttr("aws_ecs_service.mongo", "placement_strategy.#", "1"), @@ -286,13 +289,14 @@ func TestAccAWSEcsServiceWithPlacementStrategy(t *testing.T) { } func TestAccAWSEcsServiceWithPlacementConstraints(t *testing.T) { + rInt := acctest.RandInt() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSEcsServiceWithPlacementConstraint, + Config: testAccAWSEcsServiceWithPlacementConstraint(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"), resource.TestCheckResourceAttr("aws_ecs_service.mongo", "placement_constraints.#", "1"), @@ -303,13 +307,14 @@ func TestAccAWSEcsServiceWithPlacementConstraints(t *testing.T) { } func TestAccAWSEcsServiceWithPlacementConstraints_emptyExpression(t *testing.T) { + rInt := acctest.RandInt() resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, Providers: testAccProviders, CheckDestroy: testAccCheckAWSEcsServiceDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSEcsServiceWithPlacementConstraintEmptyExpression, + Config: testAccAWSEcsServiceWithPlacementConstraintEmptyExpression(rInt), Check: resource.ComposeTestCheckFunc( testAccCheckAWSEcsServiceExists("aws_ecs_service.mongo"), resource.TestCheckResourceAttr("aws_ecs_service.mongo", "placement_constraints.#", "1"), @@ -366,9 +371,10 @@ func testAccCheckAWSEcsServiceExists(name string) resource.TestCheckFunc { } } -var testAccAWSEcsService = ` +func testAccAWSEcsService(rInt int) string { + return fmt.Sprintf(` resource "aws_ecs_cluster" "default" { - name = "terraformecstest1" + name = "terraformecstest%d" } resource "aws_ecs_task_definition" "mongo" { @@ -387,16 +393,18 @@ DEFINITION } 
resource "aws_ecs_service" "mongo" { - name = "mongodb" + name = "mongodb-%d" cluster = "${aws_ecs_cluster.default.id}" task_definition = "${aws_ecs_task_definition.mongo.arn}" desired_count = 1 } -` +`, rInt, rInt) +} -var testAccAWSEcsServiceModified = ` +func testAccAWSEcsServiceModified(rInt int) string { + return fmt.Sprintf(` resource "aws_ecs_cluster" "default" { - name = "terraformecstest1" + name = "terraformecstest%d" } resource "aws_ecs_task_definition" "mongo" { @@ -415,16 +423,18 @@ DEFINITION } resource "aws_ecs_service" "mongo" { - name = "mongodb" + name = "mongodb-%d" cluster = "${aws_ecs_cluster.default.id}" task_definition = "${aws_ecs_task_definition.mongo.arn}" desired_count = 2 } -` +`, rInt, rInt) +} -var testAccAWSEcsServiceWithPlacementStrategy = ` +func testAccAWSEcsServiceWithPlacementStrategy(rInt int) string { + return fmt.Sprintf(` resource "aws_ecs_cluster" "default" { - name = "terraformecstest1" + name = "terraformecstest%d" } resource "aws_ecs_task_definition" "mongo" { @@ -443,7 +453,7 @@ DEFINITION } resource "aws_ecs_service" "mongo" { - name = "mongodb" + name = "mongodb-%d" cluster = "${aws_ecs_cluster.default.id}" task_definition = "${aws_ecs_task_definition.mongo.arn}" desired_count = 1 @@ -452,43 +462,47 @@ resource "aws_ecs_service" "mongo" { field = "memory" } } -` +`, rInt, rInt) +} -var testAccAWSEcsServiceWithPlacementConstraint = ` +func testAccAWSEcsServiceWithPlacementConstraint(rInt int) string { + return fmt.Sprintf(` + resource "aws_ecs_cluster" "default" { + name = "terraformecstest%d" + } + + resource "aws_ecs_task_definition" "mongo" { + family = "mongodb" + container_definitions = < 0 { + input := &ec2.AssignPrivateIpAddressesInput{ + NetworkInterfaceId: aws.String(d.Id()), + SecondaryPrivateIpAddressCount: aws.Int64(int64(diff)), + } + _, err := conn.AssignPrivateIpAddresses(input) + if err != nil { + return fmt.Errorf("Failure to assign Private IPs: %s", err) + } + } + + if diff < 0 { + input := 
&ec2.UnassignPrivateIpAddressesInput{ + NetworkInterfaceId: aws.String(d.Id()), + PrivateIpAddresses: expandStringList(private_ips_filtered[0:int(math.Abs(float64(diff)))]), + } + _, err := conn.UnassignPrivateIpAddresses(input) + if err != nil { + return fmt.Errorf("Failure to unassign Private IPs: %s", err) + } + } + + d.SetPartial("private_ips_count") + } + } + if d.HasChange("security_groups") { request := &ec2.ModifyNetworkInterfaceAttributeInput{ NetworkInterfaceId: aws.String(d.Id()), diff --git a/builtin/providers/aws/resource_aws_network_interface_attachment.go b/builtin/providers/aws/resource_aws_network_interface_attachment.go new file mode 100644 index 000000000..c37b0d18f --- /dev/null +++ b/builtin/providers/aws/resource_aws_network_interface_attachment.go @@ -0,0 +1,166 @@ +package aws + +import ( + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/helper/schema" +) + +func resourceAwsNetworkInterfaceAttachment() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsNetworkInterfaceAttachmentCreate, + Read: resourceAwsNetworkInterfaceAttachmentRead, + Delete: resourceAwsNetworkInterfaceAttachmentDelete, + + Schema: map[string]*schema.Schema{ + "device_index": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + + "instance_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "network_interface_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "attachment_id": { + Type: schema.TypeString, + Computed: true, + }, + + "status": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceAwsNetworkInterfaceAttachmentCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + device_index := d.Get("device_index").(int) + instance_id := 
d.Get("instance_id").(string) + network_interface_id := d.Get("network_interface_id").(string) + + opts := &ec2.AttachNetworkInterfaceInput{ + DeviceIndex: aws.Int64(int64(device_index)), + InstanceId: aws.String(instance_id), + NetworkInterfaceId: aws.String(network_interface_id), + } + + log.Printf("[DEBUG] Attaching network interface (%s) to instance (%s)", network_interface_id, instance_id) + resp, err := conn.AttachNetworkInterface(opts) + if err != nil { + if awsErr, ok := err.(awserr.Error); ok { + return fmt.Errorf("Error attaching network interface (%s) to instance (%s), message: \"%s\", code: \"%s\"", + network_interface_id, instance_id, awsErr.Message(), awsErr.Code()) + } + return err + } + + stateConf := &resource.StateChangeConf{ + Pending: []string{"false"}, + Target: []string{"true"}, + Refresh: networkInterfaceAttachmentRefreshFunc(conn, network_interface_id), + Timeout: 5 * time.Minute, + Delay: 10 * time.Second, + MinTimeout: 3 * time.Second, + } + + _, err = stateConf.WaitForState() + if err != nil { + return fmt.Errorf( + "Error waiting for network interface (%s) to attach to instance: %s, error: %s", network_interface_id, instance_id, err) + } + + d.SetId(*resp.AttachmentId) + return resourceAwsNetworkInterfaceAttachmentRead(d, meta) +} + +func resourceAwsNetworkInterfaceAttachmentRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + interfaceId := d.Get("network_interface_id").(string) + + req := &ec2.DescribeNetworkInterfacesInput{ + NetworkInterfaceIds: []*string{aws.String(interfaceId)}, + } + + resp, err := conn.DescribeNetworkInterfaces(req) + if err != nil { + if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidNetworkInterfaceID.NotFound" { + // The ENI is gone now, so just remove the attachment from the state + d.SetId("") + return nil + } + + return fmt.Errorf("Error retrieving ENI: %s", err) + } + if len(resp.NetworkInterfaces) != 1 { + return fmt.Errorf("Unable to find ENI (%s): %#v",
interfaceId, resp.NetworkInterfaces) + } + + eni := resp.NetworkInterfaces[0] + + if eni.Attachment == nil { + // Interface is no longer attached, remove from state + d.SetId("") + return nil + } + + d.Set("attachment_id", eni.Attachment.AttachmentId) + d.Set("device_index", eni.Attachment.DeviceIndex) + d.Set("instance_id", eni.Attachment.InstanceId) + d.Set("network_interface_id", eni.NetworkInterfaceId) + d.Set("status", eni.Attachment.Status) + + return nil +} + +func resourceAwsNetworkInterfaceAttachmentDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).ec2conn + + interfaceId := d.Get("network_interface_id").(string) + + detach_request := &ec2.DetachNetworkInterfaceInput{ + AttachmentId: aws.String(d.Id()), + Force: aws.Bool(true), + } + + _, detach_err := conn.DetachNetworkInterface(detach_request) + if detach_err != nil { + if awsErr, _ := detach_err.(awserr.Error); awsErr.Code() != "InvalidAttachmentID.NotFound" { + return fmt.Errorf("Error detaching ENI: %s", detach_err) + } + } + + log.Printf("[DEBUG] Waiting for ENI (%s) to become detached", interfaceId) + stateConf := &resource.StateChangeConf{ + Pending: []string{"true"}, + Target: []string{"false"}, + Refresh: networkInterfaceAttachmentRefreshFunc(conn, interfaceId), + Timeout: 10 * time.Minute, + } + + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf( + "Error waiting for ENI (%s) to become detached: %s", interfaceId, err) + } + + return nil +} diff --git a/builtin/providers/aws/resource_aws_network_interface_attacment_test.go b/builtin/providers/aws/resource_aws_network_interface_attacment_test.go new file mode 100644 index 000000000..b6b1aa0eb --- /dev/null +++ b/builtin/providers/aws/resource_aws_network_interface_attacment_test.go @@ -0,0 +1,92 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" +)
+ +func TestAccAWSNetworkInterfaceAttachment_basic(t *testing.T) { + var conf ec2.NetworkInterface + rInt := acctest.RandInt() + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + IDRefreshName: "aws_network_interface.bar", + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSENIDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSNetworkInterfaceAttachmentConfig_basic(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSENIExists("aws_network_interface.bar", &conf), + resource.TestCheckResourceAttr( + "aws_network_interface_attachment.test", "device_index", "1"), + resource.TestCheckResourceAttrSet( + "aws_network_interface_attachment.test", "instance_id"), + resource.TestCheckResourceAttrSet( + "aws_network_interface_attachment.test", "network_interface_id"), + resource.TestCheckResourceAttrSet( + "aws_network_interface_attachment.test", "attachment_id"), + resource.TestCheckResourceAttrSet( + "aws_network_interface_attachment.test", "status"), + ), + }, + }, + }) +} + +func testAccAWSNetworkInterfaceAttachmentConfig_basic(rInt int) string { + return fmt.Sprintf(` +resource "aws_vpc" "foo" { + cidr_block = "172.16.0.0/16" +} + +resource "aws_subnet" "foo" { + vpc_id = "${aws_vpc.foo.id}" + cidr_block = "172.16.10.0/24" + availability_zone = "us-west-2a" +} + +resource "aws_security_group" "foo" { + vpc_id = "${aws_vpc.foo.id}" + description = "foo" + name = "foo-%d" + + egress { + from_port = 0 + to_port = 0 + protocol = "tcp" + cidr_blocks = ["10.0.0.0/16"] + } +} + +resource "aws_network_interface" "bar" { + subnet_id = "${aws_subnet.foo.id}" + private_ips = ["172.16.10.100"] + security_groups = ["${aws_security_group.foo.id}"] + description = "Managed by Terraform" + tags { + Name = "bar_interface" + } +} + +resource "aws_instance" "foo" { + ami = "ami-c5eabbf5" + instance_type = "t2.micro" + subnet_id = "${aws_subnet.foo.id}" + tags { + Name = "foo-%d" + } +} + +resource 
"aws_network_interface_attachment" "test" { + device_index = 1 + instance_id = "${aws_instance.foo.id}" + network_interface_id = "${aws_network_interface.bar.id}" +} +`, rInt, rInt) +} diff --git a/builtin/providers/aws/resource_aws_opsworks_instance.go b/builtin/providers/aws/resource_aws_opsworks_instance.go index b195ac902..ab7a7f471 100644 --- a/builtin/providers/aws/resource_aws_opsworks_instance.go +++ b/builtin/providers/aws/resource_aws_opsworks_instance.go @@ -111,6 +111,7 @@ func resourceAwsOpsworksInstance() *schema.Resource { Type: schema.TypeString, Optional: true, Computed: true, + ForceNew: true, }, "infrastructure_class": { diff --git a/builtin/providers/aws/resource_aws_opsworks_instance_test.go b/builtin/providers/aws/resource_aws_opsworks_instance_test.go index ed5aaa5d8..1a2bbe0f6 100644 --- a/builtin/providers/aws/resource_aws_opsworks_instance_test.go +++ b/builtin/providers/aws/resource_aws_opsworks_instance_test.go @@ -108,6 +108,44 @@ func TestAccAWSOpsworksInstance(t *testing.T) { }) } +func TestAccAWSOpsworksInstance_UpdateHostNameForceNew(t *testing.T) { + stackName := fmt.Sprintf("tf-%d", acctest.RandInt()) + + var before, after opsworks.Instance + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsOpsworksInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsOpsworksInstanceConfigCreate(stackName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSOpsworksInstanceExists("aws_opsworks_instance.tf-acc", &before), + resource.TestCheckResourceAttr("aws_opsworks_instance.tf-acc", "hostname", "tf-acc1"), + ), + }, + { + Config: testAccAwsOpsworksInstanceConfigUpdateHostName(stackName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSOpsworksInstanceExists("aws_opsworks_instance.tf-acc", &after), + resource.TestCheckResourceAttr("aws_opsworks_instance.tf-acc", "hostname", "tf-acc2"), + 
testAccCheckAwsOpsworksInstanceRecreated(t, &before, &after), + ), + }, + }, + }) +} + +func testAccCheckAwsOpsworksInstanceRecreated(t *testing.T, + before, after *opsworks.Instance) resource.TestCheckFunc { + return func(s *terraform.State) error { + if *before.InstanceId == *after.InstanceId { + t.Fatalf("Expected change of OpsWorks Instance IDs, but both were %s", *before.InstanceId) + } + return nil + } +} + func testAccCheckAWSOpsworksInstanceExists( n string, opsinst *opsworks.Instance) resource.TestCheckFunc { return func(s *terraform.State) error { @@ -197,6 +235,59 @@ func testAccCheckAwsOpsworksInstanceDestroy(s *terraform.State) error { return fmt.Errorf("Fall through error on OpsWorks instance test") } +func testAccAwsOpsworksInstanceConfigUpdateHostName(name string) string { + return fmt.Sprintf(` +resource "aws_security_group" "tf-ops-acc-web" { + name = "%s-web" + ingress { + from_port = 80 + to_port = 80 + protocol = "tcp" + cidr_blocks = ["0.0.0.0/0"] + } +} + +resource "aws_security_group" "tf-ops-acc-php" { + name = "%s-php" + ingress { + from_port = 8080 + to_port = 8080 + protocol = "tcp" + cidr_blocks = ["0.0.0.0/0"] + } +} + +resource "aws_opsworks_static_web_layer" "tf-acc" { + stack_id = "${aws_opsworks_stack.tf-acc.id}" + + custom_security_group_ids = [ + "${aws_security_group.tf-ops-acc-web.id}", + ] +} + +resource "aws_opsworks_php_app_layer" "tf-acc" { + stack_id = "${aws_opsworks_stack.tf-acc.id}" + + custom_security_group_ids = [ + "${aws_security_group.tf-ops-acc-php.id}", + ] +} + +resource "aws_opsworks_instance" "tf-acc" { + stack_id = "${aws_opsworks_stack.tf-acc.id}" + layer_ids = [ + "${aws_opsworks_static_web_layer.tf-acc.id}", + ] + instance_type = "t2.micro" + state = "stopped" + hostname = "tf-acc2" +} + +%s + +`, name, name, testAccAwsOpsworksStackConfigVpcCreate(name)) +} + func testAccAwsOpsworksInstanceConfigCreate(name string) string { return fmt.Sprintf(` resource "aws_security_group" "tf-ops-acc-web" { diff --git 
a/builtin/providers/aws/resource_aws_route53_zone.go b/builtin/providers/aws/resource_aws_route53_zone.go index 9faa716a1..b30d38829 100644 --- a/builtin/providers/aws/resource_aws_route53_zone.go +++ b/builtin/providers/aws/resource_aws_route53_zone.go @@ -300,7 +300,7 @@ func deleteAllRecordsInHostedZoneId(hostedZoneId, hostedZoneName string, conn *r changes := make([]*route53.Change, 0) // 100 items per page returned by default for _, set := range sets { - if *set.Name == hostedZoneName+"." && (*set.Type == "NS" || *set.Type == "SOA") { + if strings.TrimSuffix(*set.Name, ".") == strings.TrimSuffix(hostedZoneName, ".") && (*set.Type == "NS" || *set.Type == "SOA") { // Zone NS & SOA records cannot be deleted continue } diff --git a/builtin/providers/aws/resource_aws_route53_zone_test.go b/builtin/providers/aws/resource_aws_route53_zone_test.go index 6679ea72d..ee1b5d6d6 100644 --- a/builtin/providers/aws/resource_aws_route53_zone_test.go +++ b/builtin/providers/aws/resource_aws_route53_zone_test.go @@ -89,7 +89,7 @@ func TestAccAWSRoute53Zone_basic(t *testing.T) { } func TestAccAWSRoute53Zone_forceDestroy(t *testing.T) { - var zone route53.GetHostedZoneOutput + var zone, zoneWithDot route53.GetHostedZoneOutput // record the initialized providers so that we can use them to // check for the instances in each region @@ -115,6 +115,11 @@ func TestAccAWSRoute53Zone_forceDestroy(t *testing.T) { // Add >100 records to verify pagination works ok testAccCreateRandomRoute53RecordsInZoneIdWithProviders(&providers, &zone, 100), testAccCreateRandomRoute53RecordsInZoneIdWithProviders(&providers, &zone, 5), + + testAccCheckRoute53ZoneExistsWithProviders("aws_route53_zone.with_trailing_dot", &zoneWithDot, &providers), + // Add >100 records to verify pagination works ok + testAccCreateRandomRoute53RecordsInZoneIdWithProviders(&providers, &zoneWithDot, 100), + testAccCreateRandomRoute53RecordsInZoneIdWithProviders(&providers, &zoneWithDot, 5), ), }, }, @@ -417,6 +422,11 @@ resource 
"aws_route53_zone" "destroyable" { name = "terraform.io" force_destroy = true } + +resource "aws_route53_zone" "with_trailing_dot" { + name = "hashicorptest.io." + force_destroy = true +} ` const testAccRoute53ZoneConfigUpdateComment = ` diff --git a/builtin/providers/aws/resource_aws_security_group_test.go b/builtin/providers/aws/resource_aws_security_group_test.go index f1fe67ca9..f5a4f8d16 100644 --- a/builtin/providers/aws/resource_aws_security_group_test.go +++ b/builtin/providers/aws/resource_aws_security_group_test.go @@ -1995,6 +1995,91 @@ resource "aws_security_group_rule" "allow_test_group_3" { } ` +const testAccAWSSecurityGroupConfig_importIPRangeAndSecurityGroupWithSameRules = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" + + tags { + Name = "tf_sg_import_test" + } +} + +resource "aws_security_group" "test_group_1" { + name = "test group 1" + vpc_id = "${aws_vpc.foo.id}" +} + +resource "aws_security_group" "test_group_2" { + name = "test group 2" + vpc_id = "${aws_vpc.foo.id}" +} + +resource "aws_security_group_rule" "allow_security_group" { + type = "ingress" + from_port = 0 + to_port = 0 + protocol = "tcp" + + source_security_group_id = "${aws_security_group.test_group_2.id}" + security_group_id = "${aws_security_group.test_group_1.id}" +} + +resource "aws_security_group_rule" "allow_cidr_block" { + type = "ingress" + from_port = 0 + to_port = 0 + protocol = "tcp" + + cidr_blocks = ["10.0.0.0/32"] + security_group_id = "${aws_security_group.test_group_1.id}" +} + +resource "aws_security_group_rule" "allow_ipv6_cidr_block" { + type = "ingress" + from_port = 0 + to_port = 0 + protocol = "tcp" + + ipv6_cidr_blocks = ["::/0"] + security_group_id = "${aws_security_group.test_group_1.id}" +} +` + +const testAccAWSSecurityGroupConfig_importIPRangesWithSameRules = ` +resource "aws_vpc" "foo" { + cidr_block = "10.1.0.0/16" + + tags { + Name = "tf_sg_import_test" + } +} + +resource "aws_security_group" "test_group_1" { + name = "test group 1" + 
vpc_id = "${aws_vpc.foo.id}" +} + +resource "aws_security_group_rule" "allow_cidr_block" { + type = "ingress" + from_port = 0 + to_port = 0 + protocol = "tcp" + + cidr_blocks = ["10.0.0.0/32"] + security_group_id = "${aws_security_group.test_group_1.id}" +} + +resource "aws_security_group_rule" "allow_ipv6_cidr_block" { + type = "ingress" + from_port = 0 + to_port = 0 + protocol = "tcp" + + ipv6_cidr_blocks = ["::/0"] + security_group_id = "${aws_security_group.test_group_1.id}" +} +` + const testAccAWSSecurityGroupConfigPrefixListEgress = ` resource "aws_vpc" "tf_sg_prefix_list_egress_test" { cidr_block = "10.0.0.0/16" diff --git a/builtin/providers/aws/resource_aws_subnet.go b/builtin/providers/aws/resource_aws_subnet.go index 87dc01e71..3543b9832 100644 --- a/builtin/providers/aws/resource_aws_subnet.go +++ b/builtin/providers/aws/resource_aws_subnet.go @@ -22,6 +22,9 @@ func resourceAwsSubnet() *schema.Resource { State: schema.ImportStatePassthrough, }, + SchemaVersion: 1, + MigrateState: resourceAwsSubnetMigrateState, + Schema: map[string]*schema.Schema{ "vpc_id": { Type: schema.TypeString, @@ -38,7 +41,6 @@ func resourceAwsSubnet() *schema.Resource { "ipv6_cidr_block": { Type: schema.TypeString, Optional: true, - ForceNew: true, }, "availability_zone": { @@ -141,9 +143,15 @@ func resourceAwsSubnetRead(d *schema.ResourceData, meta interface{}) error { d.Set("cidr_block", subnet.CidrBlock) d.Set("map_public_ip_on_launch", subnet.MapPublicIpOnLaunch) d.Set("assign_ipv6_address_on_creation", subnet.AssignIpv6AddressOnCreation) - if subnet.Ipv6CidrBlockAssociationSet != nil { - d.Set("ipv6_cidr_block", subnet.Ipv6CidrBlockAssociationSet[0].Ipv6CidrBlock) - d.Set("ipv6_cidr_block_association_id", subnet.Ipv6CidrBlockAssociationSet[0].AssociationId) + for _, a := range subnet.Ipv6CidrBlockAssociationSet { + if *a.Ipv6CidrBlockState.State == "associated" { //we can only ever have 1 IPv6 block associated at once + d.Set("ipv6_cidr_block_association_id", 
a.AssociationId) + d.Set("ipv6_cidr_block", a.Ipv6CidrBlock) + break + } else { + d.Set("ipv6_cidr_block_association_id", "") // we blank these out to remove old entries + d.Set("ipv6_cidr_block", "") + } } d.Set("tags", tagsToMap(subnet.Tags)) @@ -199,6 +207,73 @@ func resourceAwsSubnetUpdate(d *schema.ResourceData, meta interface{}) error { } } + // We have to be careful here to not go through a change of association if this is a new resource + // A New resource here would denote that the Update func is called by the Create func + if d.HasChange("ipv6_cidr_block") && !d.IsNewResource() { + // We need to handle that we disassociate the IPv6 CIDR block before we try and associate the new one + // This could be an issue as, we could error out when we try and add the new one + // We may need to roll back the state and reattach the old one if this is the case + + _, new := d.GetChange("ipv6_cidr_block") + + //Firstly we have to disassociate the old IPv6 CIDR Block + disassociateOps := &ec2.DisassociateSubnetCidrBlockInput{ + AssociationId: aws.String(d.Get("ipv6_cidr_block_association_id").(string)), + } + + _, err := conn.DisassociateSubnetCidrBlock(disassociateOps) + if err != nil { + return err + } + + // Wait for the CIDR to become disassociated + log.Printf( + "[DEBUG] Waiting for IPv6 CIDR (%s) to become disassociated", + d.Id()) + stateConf := &resource.StateChangeConf{ + Pending: []string{"disassociating", "associated"}, + Target: []string{"disassociated"}, + Refresh: SubnetIpv6CidrStateRefreshFunc(conn, d.Id(), d.Get("ipv6_cidr_block_association_id").(string)), + Timeout: 1 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf( + "Error waiting for IPv6 CIDR (%s) to become disassociated: %s", + d.Id(), err) + } + + //Now we need to try and associate the new CIDR block + associatesOpts := &ec2.AssociateSubnetCidrBlockInput{ + SubnetId: aws.String(d.Id()), + Ipv6CidrBlock: aws.String(new.(string)), + } + + resp, err := 
conn.AssociateSubnetCidrBlock(associatesOpts) + if err != nil { + //The big question here is, do we want to try and reassociate the old one?? + //If we have a failure here, then we may be in a situation that we have nothing associated + return err + } + + // Wait for the CIDR to become associated + log.Printf( + "[DEBUG] Waiting for IPv6 CIDR (%s) to become associated", + d.Id()) + stateConf = &resource.StateChangeConf{ + Pending: []string{"associating", "disassociated"}, + Target: []string{"associated"}, + Refresh: SubnetIpv6CidrStateRefreshFunc(conn, d.Id(), *resp.Ipv6CidrBlockAssociation.AssociationId), + Timeout: 1 * time.Minute, + } + if _, err := stateConf.WaitForState(); err != nil { + return fmt.Errorf( + "Error waiting for IPv6 CIDR (%s) to become associated: %s", + d.Id(), err) + } + + d.SetPartial("ipv6_cidr_block") + } + d.Partial(false) return resourceAwsSubnetRead(d, meta) @@ -271,3 +346,38 @@ func SubnetStateRefreshFunc(conn *ec2.EC2, id string) resource.StateRefreshFunc return subnet, *subnet.State, nil } } + +func SubnetIpv6CidrStateRefreshFunc(conn *ec2.EC2, id string, associationId string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + opts := &ec2.DescribeSubnetsInput{ + SubnetIds: []*string{aws.String(id)}, + } + resp, err := conn.DescribeSubnets(opts) + if err != nil { + if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidSubnetID.NotFound" { + resp = nil + } else { + log.Printf("Error on SubnetIpv6CidrStateRefreshFunc: %s", err) + return nil, "", err + } + } + + if resp == nil { + // Sometimes AWS just has consistency issues and doesn't see + // our instance yet. Return an empty state. 
+ return nil, "", nil + } + + if resp.Subnets[0].Ipv6CidrBlockAssociationSet == nil { + return nil, "", nil + } + + for _, association := range resp.Subnets[0].Ipv6CidrBlockAssociationSet { + if *association.AssociationId == associationId { + return association, *association.Ipv6CidrBlockState.State, nil + } + } + + return nil, "", nil + } +} diff --git a/builtin/providers/aws/resource_aws_subnet_migrate.go b/builtin/providers/aws/resource_aws_subnet_migrate.go new file mode 100644 index 000000000..0e0f19cf6 --- /dev/null +++ b/builtin/providers/aws/resource_aws_subnet_migrate.go @@ -0,0 +1,33 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/hashicorp/terraform/terraform" +) + +func resourceAwsSubnetMigrateState( + v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) { + switch v { + case 0: + log.Println("[INFO] Found AWS Subnet State v0; migrating to v1") + return migrateSubnetStateV0toV1(is) + default: + return is, fmt.Errorf("Unexpected schema version: %d", v) + } +} + +func migrateSubnetStateV0toV1(is *terraform.InstanceState) (*terraform.InstanceState, error) { + if is.Empty() || is.Attributes == nil { + log.Println("[DEBUG] Empty Subnet State; nothing to migrate.") + return is, nil + } + + log.Printf("[DEBUG] Attributes before migration: %#v", is.Attributes) + + is.Attributes["assign_ipv6_address_on_creation"] = "false" + + log.Printf("[DEBUG] Attributes after migration: %#v", is.Attributes) + return is, nil +} diff --git a/builtin/providers/aws/resource_aws_subnet_migrate_test.go b/builtin/providers/aws/resource_aws_subnet_migrate_test.go new file mode 100644 index 000000000..c3bdae859 --- /dev/null +++ b/builtin/providers/aws/resource_aws_subnet_migrate_test.go @@ -0,0 +1,41 @@ +package aws + +import ( + "testing" + + "github.com/hashicorp/terraform/terraform" +) + +func TestAWSSubnetMigrateState(t *testing.T) { + cases := map[string]struct { + StateVersion int + ID string + Attributes map[string]string + 
Expected string + Meta interface{} + }{ + "v0_1_without_value": { + StateVersion: 0, + ID: "some_id", + Attributes: map[string]string{}, + Expected: "false", + }, + } + + for tn, tc := range cases { + is := &terraform.InstanceState{ + ID: tc.ID, + Attributes: tc.Attributes, + } + is, err := resourceAwsSubnetMigrateState( + tc.StateVersion, is, tc.Meta) + + if err != nil { + t.Fatalf("bad: %s, err: %#v", tn, err) + } + + if is.Attributes["assign_ipv6_address_on_creation"] != tc.Expected { + t.Fatalf("bad Subnet Migrate: %s\n\n expected: %s", is.Attributes["assign_ipv6_address_on_creation"], tc.Expected) + } + } +} diff --git a/builtin/providers/aws/resource_aws_subnet_test.go b/builtin/providers/aws/resource_aws_subnet_test.go index b86284fdb..4a1c55739 100644 --- a/builtin/providers/aws/resource_aws_subnet_test.go +++ b/builtin/providers/aws/resource_aws_subnet_test.go @@ -45,27 +45,7 @@ func TestAccAWSSubnet_basic(t *testing.T) { } func TestAccAWSSubnet_ipv6(t *testing.T) { - var v ec2.Subnet - - testCheck := func(*terraform.State) error { - if v.Ipv6CidrBlockAssociationSet == nil { - return fmt.Errorf("Expected IPV6 CIDR Block Association") - } - - if *v.AssignIpv6AddressOnCreation != true { - return fmt.Errorf("bad AssignIpv6AddressOnCreation: %t", *v.AssignIpv6AddressOnCreation) - } - - return nil - } - - testCheckUpdated := func(*terraform.State) error { - if *v.AssignIpv6AddressOnCreation != false { - return fmt.Errorf("bad AssignIpv6AddressOnCreation: %t", *v.AssignIpv6AddressOnCreation) - } - - return nil - } + var before, after ec2.Subnet resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -77,22 +57,65 @@ func TestAccAWSSubnet_ipv6(t *testing.T) { Config: testAccSubnetConfigIpv6, Check: resource.ComposeTestCheckFunc( testAccCheckSubnetExists( - "aws_subnet.foo", &v), - testCheck, + "aws_subnet.foo", &before), + testAccCheckAwsSubnetIpv6BeforeUpdate(t, &before), ), }, { - Config: testAccSubnetConfigIpv6Updated, + Config: 
testAccSubnetConfigIpv6UpdateAssignIpv6OnCreation, Check: resource.ComposeTestCheckFunc( testAccCheckSubnetExists( - "aws_subnet.foo", &v), - testCheckUpdated, + "aws_subnet.foo", &after), + testAccCheckAwsSubnetIpv6AfterUpdate(t, &after), + ), + }, + { + Config: testAccSubnetConfigIpv6UpdateIpv6Cidr, + Check: resource.ComposeTestCheckFunc( + testAccCheckSubnetExists( + "aws_subnet.foo", &after), + + testAccCheckAwsSubnetNotRecreated(t, &before, &after), ), }, }, }) } +func testAccCheckAwsSubnetIpv6BeforeUpdate(t *testing.T, subnet *ec2.Subnet) resource.TestCheckFunc { + return func(s *terraform.State) error { + if subnet.Ipv6CidrBlockAssociationSet == nil { + return fmt.Errorf("Expected IPV6 CIDR Block Association") + } + + if *subnet.AssignIpv6AddressOnCreation != true { + return fmt.Errorf("bad AssignIpv6AddressOnCreation: %t", *subnet.AssignIpv6AddressOnCreation) + } + + return nil + } +} + +func testAccCheckAwsSubnetIpv6AfterUpdate(t *testing.T, subnet *ec2.Subnet) resource.TestCheckFunc { + return func(s *terraform.State) error { + if *subnet.AssignIpv6AddressOnCreation != false { + return fmt.Errorf("bad AssignIpv6AddressOnCreation: %t", *subnet.AssignIpv6AddressOnCreation) + } + + return nil + } +} + +func testAccCheckAwsSubnetNotRecreated(t *testing.T, + before, after *ec2.Subnet) resource.TestCheckFunc { + return func(s *terraform.State) error { + if *before.SubnetId != *after.SubnetId { + t.Fatalf("Expected SubnetIDs not to change, but both got before: %s and after: %s", *before.SubnetId, *after.SubnetId) + } + return nil + } +} + func testAccCheckSubnetDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).ec2conn @@ -187,7 +210,25 @@ resource "aws_subnet" "foo" { } ` -const testAccSubnetConfigIpv6Updated = ` +const testAccSubnetConfigIpv6UpdateAssignIpv6OnCreation = ` +resource "aws_vpc" "foo" { + cidr_block = "10.10.0.0/16" + assign_generated_ipv6_cidr_block = true +} + +resource "aws_subnet" "foo" { + cidr_block = 
"10.10.1.0/24" + vpc_id = "${aws_vpc.foo.id}" + ipv6_cidr_block = "${cidrsubnet(aws_vpc.foo.ipv6_cidr_block, 8, 1)}" + map_public_ip_on_launch = true + assign_ipv6_address_on_creation = false + tags { + Name = "tf-subnet-acc-test" + } +} +` + +const testAccSubnetConfigIpv6UpdateIpv6Cidr = ` resource "aws_vpc" "foo" { cidr_block = "10.10.0.0/16" assign_generated_ipv6_cidr_block = true diff --git a/builtin/providers/aws/resource_aws_waf_ipset.go b/builtin/providers/aws/resource_aws_waf_ipset.go index 426508db4..40ef54ff3 100644 --- a/builtin/providers/aws/resource_aws_waf_ipset.go +++ b/builtin/providers/aws/resource_aws_waf_ipset.go @@ -80,17 +80,17 @@ func resourceAwsWafIPSetRead(d *schema.ResourceData, meta interface{}) error { return err } - var IPSetDescriptors []map[string]interface{} + var descriptors []map[string]interface{} - for _, IPSetDescriptor := range resp.IPSet.IPSetDescriptors { - IPSet := map[string]interface{}{ - "type": *IPSetDescriptor.Type, - "value": *IPSetDescriptor.Value, + for _, descriptor := range resp.IPSet.IPSetDescriptors { + d := map[string]interface{}{ + "type": *descriptor.Type, + "value": *descriptor.Value, } - IPSetDescriptors = append(IPSetDescriptors, IPSet) + descriptors = append(descriptors, d) } - d.Set("ip_set_descriptors", IPSetDescriptors) + d.Set("ip_set_descriptors", descriptors) d.Set("name", resp.IPSet.Name) @@ -98,22 +98,36 @@ func resourceAwsWafIPSetRead(d *schema.ResourceData, meta interface{}) error { } func resourceAwsWafIPSetUpdate(d *schema.ResourceData, meta interface{}) error { - err := updateIPSetResource(d, meta, waf.ChangeActionInsert) - if err != nil { - return fmt.Errorf("Error Updating WAF IPSet: %s", err) + conn := meta.(*AWSClient).wafconn + + if d.HasChange("ip_set_descriptors") { + o, n := d.GetChange("ip_set_descriptors") + oldD, newD := o.(*schema.Set).List(), n.(*schema.Set).List() + + err := updateWafIpSetDescriptors(d.Id(), oldD, newD, conn) + if err != nil { + return fmt.Errorf("Error Updating 
WAF IPSet: %s", err) + } } + return resourceAwsWafIPSetRead(d, meta) } func resourceAwsWafIPSetDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).wafconn - err := updateIPSetResource(d, meta, waf.ChangeActionDelete) - if err != nil { - return fmt.Errorf("Error Removing IPSetDescriptors: %s", err) + + oldDescriptors := d.Get("ip_set_descriptors").(*schema.Set).List() + + if len(oldDescriptors) > 0 { + noDescriptors := []interface{}{} + err := updateWafIpSetDescriptors(d.Id(), oldDescriptors, noDescriptors, conn) + if err != nil { + return fmt.Errorf("Error updating IPSetDescriptors: %s", err) + } } wr := newWafRetryer(conn, "global") - _, err = wr.RetryWithToken(func(token *string) (interface{}, error) { + _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.DeleteIPSetInput{ ChangeToken: token, IPSetId: aws.String(d.Id()), @@ -128,29 +142,15 @@ func resourceAwsWafIPSetDelete(d *schema.ResourceData, meta interface{}) error { return nil } -func updateIPSetResource(d *schema.ResourceData, meta interface{}, ChangeAction string) error { - conn := meta.(*AWSClient).wafconn - +func updateWafIpSetDescriptors(id string, oldD, newD []interface{}, conn *waf.WAF) error { wr := newWafRetryer(conn, "global") _, err := wr.RetryWithToken(func(token *string) (interface{}, error) { req := &waf.UpdateIPSetInput{ ChangeToken: token, - IPSetId: aws.String(d.Id()), + IPSetId: aws.String(id), + Updates: diffWafIpSetDescriptors(oldD, newD), } - - IPSetDescriptors := d.Get("ip_set_descriptors").(*schema.Set) - for _, IPSetDescriptor := range IPSetDescriptors.List() { - IPSet := IPSetDescriptor.(map[string]interface{}) - IPSetUpdate := &waf.IPSetUpdate{ - Action: aws.String(ChangeAction), - IPSetDescriptor: &waf.IPSetDescriptor{ - Type: aws.String(IPSet["type"].(string)), - Value: aws.String(IPSet["value"].(string)), - }, - } - req.Updates = append(req.Updates, IPSetUpdate) - } - + log.Printf("[INFO] Updating IPSet descriptors: 
%s", req) return conn.UpdateIPSet(req) }) if err != nil { @@ -159,3 +159,37 @@ func updateIPSetResource(d *schema.ResourceData, meta interface{}, ChangeAction return nil } + +func diffWafIpSetDescriptors(oldD, newD []interface{}) []*waf.IPSetUpdate { + updates := make([]*waf.IPSetUpdate, 0) + + for _, od := range oldD { + descriptor := od.(map[string]interface{}) + + if idx, contains := sliceContainsMap(newD, descriptor); contains { + newD = append(newD[:idx], newD[idx+1:]...) + continue + } + + updates = append(updates, &waf.IPSetUpdate{ + Action: aws.String(waf.ChangeActionDelete), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String(descriptor["type"].(string)), + Value: aws.String(descriptor["value"].(string)), + }, + }) + } + + for _, nd := range newD { + descriptor := nd.(map[string]interface{}) + + updates = append(updates, &waf.IPSetUpdate{ + Action: aws.String(waf.ChangeActionInsert), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String(descriptor["type"].(string)), + Value: aws.String(descriptor["value"].(string)), + }, + }) + } + return updates +} diff --git a/builtin/providers/aws/resource_aws_waf_ipset_test.go b/builtin/providers/aws/resource_aws_waf_ipset_test.go index 3db32dc44..ee7593116 100644 --- a/builtin/providers/aws/resource_aws_waf_ipset_test.go +++ b/builtin/providers/aws/resource_aws_waf_ipset_test.go @@ -2,6 +2,7 @@ package aws import ( "fmt" + "reflect" "testing" "github.com/hashicorp/terraform/helper/resource" @@ -96,6 +97,169 @@ func TestAccAWSWafIPSet_changeNameForceNew(t *testing.T) { }) } +func TestAccAWSWafIPSet_changeDescriptors(t *testing.T) { + var before, after waf.IPSet + ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafIPSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafIPSetConfig(ipsetName), + Check: 
resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafIPSetExists("aws_waf_ipset.ipset", &before), + resource.TestCheckResourceAttr( + "aws_waf_ipset.ipset", "name", ipsetName), + resource.TestCheckResourceAttr( + "aws_waf_ipset.ipset", "ip_set_descriptors.#", "1"), + resource.TestCheckResourceAttr( + "aws_waf_ipset.ipset", "ip_set_descriptors.4037960608.type", "IPV4"), + resource.TestCheckResourceAttr( + "aws_waf_ipset.ipset", "ip_set_descriptors.4037960608.value", "192.0.7.0/24"), + ), + }, + { + Config: testAccAWSWafIPSetConfigChangeIPSetDescriptors(ipsetName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafIPSetExists("aws_waf_ipset.ipset", &after), + resource.TestCheckResourceAttr( + "aws_waf_ipset.ipset", "name", ipsetName), + resource.TestCheckResourceAttr( + "aws_waf_ipset.ipset", "ip_set_descriptors.#", "1"), + resource.TestCheckResourceAttr( + "aws_waf_ipset.ipset", "ip_set_descriptors.115741513.type", "IPV4"), + resource.TestCheckResourceAttr( + "aws_waf_ipset.ipset", "ip_set_descriptors.115741513.value", "192.0.8.0/24"), + ), + }, + }, + }) +} + +func TestAccAWSWafIPSet_noDescriptors(t *testing.T) { + var ipset waf.IPSet + ipsetName := fmt.Sprintf("ip-set-%s", acctest.RandString(5)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSWafIPSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSWafIPSetConfig_noDescriptors(ipsetName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSWafIPSetExists("aws_waf_ipset.ipset", &ipset), + resource.TestCheckResourceAttr( + "aws_waf_ipset.ipset", "name", ipsetName), + resource.TestCheckResourceAttr( + "aws_waf_ipset.ipset", "ip_set_descriptors.#", "0"), + ), + }, + }, + }) +} + +func TestDiffWafIpSetDescriptors(t *testing.T) { + testCases := []struct { + Old []interface{} + New []interface{} + ExpectedUpdates []*waf.IPSetUpdate + }{ + { + // Change + Old: 
[]interface{}{ + map[string]interface{}{"type": "IPV4", "value": "192.0.7.0/24"}, + }, + New: []interface{}{ + map[string]interface{}{"type": "IPV4", "value": "192.0.8.0/24"}, + }, + ExpectedUpdates: []*waf.IPSetUpdate{ + &waf.IPSetUpdate{ + Action: aws.String(waf.ChangeActionDelete), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("192.0.7.0/24"), + }, + }, + &waf.IPSetUpdate{ + Action: aws.String(waf.ChangeActionInsert), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("192.0.8.0/24"), + }, + }, + }, + }, + { + // Fresh IPSet + Old: []interface{}{}, + New: []interface{}{ + map[string]interface{}{"type": "IPV4", "value": "10.0.1.0/24"}, + map[string]interface{}{"type": "IPV4", "value": "10.0.2.0/24"}, + map[string]interface{}{"type": "IPV4", "value": "10.0.3.0/24"}, + }, + ExpectedUpdates: []*waf.IPSetUpdate{ + &waf.IPSetUpdate{ + Action: aws.String(waf.ChangeActionInsert), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("10.0.1.0/24"), + }, + }, + &waf.IPSetUpdate{ + Action: aws.String(waf.ChangeActionInsert), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("10.0.2.0/24"), + }, + }, + &waf.IPSetUpdate{ + Action: aws.String(waf.ChangeActionInsert), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("10.0.3.0/24"), + }, + }, + }, + }, + { + // Deletion + Old: []interface{}{ + map[string]interface{}{"type": "IPV4", "value": "192.0.7.0/24"}, + map[string]interface{}{"type": "IPV4", "value": "192.0.8.0/24"}, + }, + New: []interface{}{}, + ExpectedUpdates: []*waf.IPSetUpdate{ + &waf.IPSetUpdate{ + Action: aws.String(waf.ChangeActionDelete), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: aws.String("IPV4"), + Value: aws.String("192.0.7.0/24"), + }, + }, + &waf.IPSetUpdate{ + Action: aws.String(waf.ChangeActionDelete), + IPSetDescriptor: &waf.IPSetDescriptor{ + Type: 
aws.String("IPV4"), + Value: aws.String("192.0.8.0/24"), + }, + }, + }, + }, + } + for i, tc := range testCases { + t.Run(fmt.Sprintf("%d", i), func(t *testing.T) { + updates := diffWafIpSetDescriptors(tc.Old, tc.New) + if !reflect.DeepEqual(updates, tc.ExpectedUpdates) { + t.Fatalf("IPSet updates don't match.\nGiven: %s\nExpected: %s", + updates, tc.ExpectedUpdates) + } + }) + } +} + func testAccCheckAWSWafIPSetDisappears(v *waf.IPSet) resource.TestCheckFunc { return func(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).wafconn @@ -228,3 +392,9 @@ func testAccAWSWafIPSetConfigChangeIPSetDescriptors(name string) string { } }`, name) } + +func testAccAWSWafIPSetConfig_noDescriptors(name string) string { + return fmt.Sprintf(`resource "aws_waf_ipset" "ipset" { + name = "%s" +}`, name) +} diff --git a/builtin/providers/aws/resource_aws_waf_rule.go b/builtin/providers/aws/resource_aws_waf_rule.go index f750f6ea0..543299879 100644 --- a/builtin/providers/aws/resource_aws_waf_rule.go +++ b/builtin/providers/aws/resource_aws_waf_rule.go @@ -24,9 +24,10 @@ func resourceAwsWafRule() *schema.Resource { ForceNew: true, }, "metric_name": &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateWafMetricName, }, "predicates": &schema.Schema{ Type: schema.TypeSet, diff --git a/builtin/providers/aws/resource_aws_waf_web_acl.go b/builtin/providers/aws/resource_aws_waf_web_acl.go index a45b1cc0e..7e3ac7237 100644 --- a/builtin/providers/aws/resource_aws_waf_web_acl.go +++ b/builtin/providers/aws/resource_aws_waf_web_acl.go @@ -37,9 +37,10 @@ func resourceAwsWafWebAcl() *schema.Resource { }, }, "metric_name": &schema.Schema{ - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validateWafMetricName, }, "rules": &schema.Schema{ Type: schema.TypeSet, diff --git 
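The delete-then-insert diffing that `TestDiffWafIpSetDescriptors` verifies can be sketched stand-alone. This is a simplified illustration with plain maps instead of the real `waf.IPSetUpdate`/`waf.IPSetDescriptor` SDK structs: descriptors present only in the old set become deletes, descriptors present only in the new set become inserts, and unchanged descriptors produce no update at all.

```go
package main

import (
	"fmt"
	"reflect"
)

// update mirrors the shape of waf.IPSetUpdate: an action plus a descriptor.
type update struct {
	Action string
	Desc   map[string]string
}

// sliceContainsMap returns the index of m in l, if present.
func sliceContainsMap(l []map[string]string, m map[string]string) (int, bool) {
	for i, t := range l {
		if reflect.DeepEqual(m, t) {
			return i, true
		}
	}
	return -1, false
}

// diffDescriptors emits DELETE for descriptors only in oldD and INSERT for
// descriptors only in newD; entries present in both produce no update.
func diffDescriptors(oldD, newD []map[string]string) []update {
	updates := []update{}
	// Matches are struck from newD so the second pass only
	// inserts genuinely new entries.
	for _, od := range oldD {
		if idx, ok := sliceContainsMap(newD, od); ok {
			newD = append(newD[:idx], newD[idx+1:]...)
			continue
		}
		updates = append(updates, update{"DELETE", od})
	}
	for _, nd := range newD {
		updates = append(updates, update{"INSERT", nd})
	}
	return updates
}

func main() {
	old := []map[string]string{{"type": "IPV4", "value": "192.0.7.0/24"}}
	upd := diffDescriptors(old, []map[string]string{{"type": "IPV4", "value": "192.0.8.0/24"}})
	for _, u := range upd {
		fmt.Println(u.Action, u.Desc["value"])
	}
}
```

Sending only the computed difference (rather than delete-all-then-reinsert) keeps each `UpdateIPSet` call minimal, which matters because every change consumes a WAF change token.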
a/builtin/providers/aws/structure.go b/builtin/providers/aws/structure.go index dfe053b9a..34041b90c 100644 --- a/builtin/providers/aws/structure.go +++ b/builtin/providers/aws/structure.go @@ -14,6 +14,7 @@ import ( "github.com/aws/aws-sdk-go/service/autoscaling" "github.com/aws/aws-sdk-go/service/cloudformation" "github.com/aws/aws-sdk-go/service/cloudwatchlogs" + "github.com/aws/aws-sdk-go/service/cognitoidentity" "github.com/aws/aws-sdk-go/service/configservice" "github.com/aws/aws-sdk-go/service/directoryservice" "github.com/aws/aws-sdk-go/service/ec2" @@ -1925,3 +1926,104 @@ func flattenApiGatewayUsagePlanQuota(s *apigateway.QuotaSettings) []map[string]i return []map[string]interface{}{settings} } + +func buildApiGatewayInvokeURL(restApiId, region, stageName string) string { + return fmt.Sprintf("https://%s.execute-api.%s.amazonaws.com/%s", + restApiId, region, stageName) +} + +func buildApiGatewayExecutionARN(restApiId, region, accountId string) (string, error) { + if accountId == "" { + return "", fmt.Errorf("Unable to build execution ARN for %s as account ID is missing", + restApiId) + } + return fmt.Sprintf("arn:aws:execute-api:%s:%s:%s", + region, accountId, restApiId), nil +} + +func expandCognitoSupportedLoginProviders(config map[string]interface{}) map[string]*string { + m := map[string]*string{} + for k, v := range config { + s := v.(string) + m[k] = &s + } + return m +} + +func flattenCognitoSupportedLoginProviders(config map[string]*string) map[string]string { + m := map[string]string{} + for k, v := range config { + m[k] = *v + } + return m +} + +func expandCognitoIdentityProviders(s *schema.Set) []*cognitoidentity.Provider { + ips := make([]*cognitoidentity.Provider, 0) + + for _, v := range s.List() { + s := v.(map[string]interface{}) + + ip := &cognitoidentity.Provider{} + + if sv, ok := s["client_id"].(string); ok { + ip.ClientId = aws.String(sv) + } + + if sv, ok := s["provider_name"].(string); ok { + ip.ProviderName = aws.String(sv) + } + + 
if sv, ok := s["server_side_token_check"].(bool); ok { + ip.ServerSideTokenCheck = aws.Bool(sv) + } + + ips = append(ips, ip) + } + + return ips +} + +func flattenCognitoIdentityProviders(ips []*cognitoidentity.Provider) []map[string]interface{} { + values := make([]map[string]interface{}, 0) + + for _, v := range ips { + ip := make(map[string]interface{}) + + if v == nil { + return nil + } + + if v.ClientId != nil { + ip["client_id"] = *v.ClientId + } + + if v.ProviderName != nil { + ip["provider_name"] = *v.ProviderName + } + + if v.ServerSideTokenCheck != nil { + ip["server_side_token_check"] = *v.ServerSideTokenCheck + } + + values = append(values, ip) + } + + return values +} + +func buildLambdaInvokeArn(lambdaArn, region string) string { + apiVersion := "2015-03-31" + return fmt.Sprintf("arn:aws:apigateway:%s:lambda:path/%s/functions/%s/invocations", + region, apiVersion, lambdaArn) +} + +func sliceContainsMap(l []interface{}, m map[string]interface{}) (int, bool) { + for i, t := range l { + if reflect.DeepEqual(m, t.(map[string]interface{})) { + return i, true + } + } + + return -1, false +} diff --git a/builtin/providers/aws/tagsGeneric.go b/builtin/providers/aws/tagsGeneric.go new file mode 100644 index 000000000..08bba6756 --- /dev/null +++ b/builtin/providers/aws/tagsGeneric.go @@ -0,0 +1,69 @@ +package aws + +import ( + "log" + "regexp" + + "github.com/aws/aws-sdk-go/aws" +) + +// diffTags takes our tags locally and the ones remotely and returns +// the set of tags that must be created, and the set of tags that must +// be destroyed. 
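A minimal stand-alone sketch of this create/remove tag diff, using plain `map[string]string` instead of the provider's `map[string]*string`. Note one deliberate deviation, flagged here as a variant rather than the provider's exact behavior: unchanged keys are pruned from the create set as well, so a no-op update issues no API calls at all.

```go
package main

import "fmt"

// diffTags returns the tags to create and the tags to remove when moving
// from oldTags to newTags; unchanged tags are left out of both maps.
func diffTags(oldTags, newTags map[string]string) (create, remove map[string]string) {
	create = map[string]string{}
	for k, v := range newTags {
		create[k] = v
	}
	remove = map[string]string{}
	for k, v := range oldTags {
		if nv, ok := create[k]; ok && nv == v {
			delete(create, k) // unchanged: nothing to do on either side
			continue
		}
		remove[k] = v // gone or changed: remove the old value
	}
	return create, remove
}

func main() {
	create, remove := diffTags(
		map[string]string{"env": "dev", "team": "core"},
		map[string]string{"env": "prod", "team": "core"},
	)
	fmt.Println(create["env"], remove["env"], len(create), len(remove))
}
```

A modified tag (`env` above) shows up on both sides: the old value is removed and the new value created, which matches the remove-then-tag call order the Lambda helper below uses.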
+func diffTagsGeneric(oldTags, newTags map[string]interface{}) (map[string]*string, map[string]*string) { + // First, we're creating everything we have + create := make(map[string]*string) + for k, v := range newTags { + create[k] = aws.String(v.(string)) + } + + // Build the map of what to remove + remove := make(map[string]*string) + for k, v := range oldTags { + old, ok := create[k] + if !ok || aws.StringValue(old) != v.(string) { + // Delete it! + remove[k] = aws.String(v.(string)) + } + } + + return create, remove +} + +// tagsFromMap returns the tags for the given map of data. +func tagsFromMapGeneric(m map[string]interface{}) map[string]*string { + result := make(map[string]*string) + for k, v := range m { + if !tagIgnoredGeneric(k) { + result[k] = aws.String(v.(string)) + } + } + + return result +} + +// tagsToMap turns the tags into a map. +func tagsToMapGeneric(ts map[string]*string) map[string]string { + result := make(map[string]string) + for k, v := range ts { + if !tagIgnoredGeneric(k) { + result[k] = aws.StringValue(v) + } + } + + return result +} + +// tagIgnoredGeneric compares a tag key against a list of patterns and +// reports whether the tag should be ignored +func tagIgnoredGeneric(k string) bool { + filter := []string{"^aws:"} + for _, v := range filter { + log.Printf("[DEBUG] Matching %v with %v\n", v, k) + if r, _ := regexp.MatchString(v, k); r { + log.Printf("[DEBUG] Found AWS specific tag %s, ignoring.\n", k) + return true + } + } + return false +} diff --git a/builtin/providers/aws/tagsGeneric_test.go b/builtin/providers/aws/tagsGeneric_test.go new file mode 100644 index 000000000..2477f3aa5 --- /dev/null +++ b/builtin/providers/aws/tagsGeneric_test.go @@ -0,0 +1,73 @@ +package aws + +import ( + "reflect" + "testing" + + "github.com/aws/aws-sdk-go/aws" +) + +// go test -v -run="TestDiffGenericTags" +func TestDiffGenericTags(t *testing.T) { + cases := []struct { + Old, New map[string]interface{} + Create, Remove map[string]string + }{ + // Basic add/remove + { + 
Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "bar": "baz", + }, + Create: map[string]string{ + "bar": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + + // Modify + { + Old: map[string]interface{}{ + "foo": "bar", + }, + New: map[string]interface{}{ + "foo": "baz", + }, + Create: map[string]string{ + "foo": "baz", + }, + Remove: map[string]string{ + "foo": "bar", + }, + }, + } + + for i, tc := range cases { + c, r := diffTagsGeneric(tc.Old, tc.New) + cm := tagsToMapGeneric(c) + rm := tagsToMapGeneric(r) + if !reflect.DeepEqual(cm, tc.Create) { + t.Fatalf("%d: bad create: %#v", i, cm) + } + if !reflect.DeepEqual(rm, tc.Remove) { + t.Fatalf("%d: bad remove: %#v", i, rm) + } + } +} + +// go test -v -run="TestIgnoringTagsGeneric" +func TestIgnoringTagsGeneric(t *testing.T) { + ignoredTags := map[string]*string{ + "aws:cloudformation:logical-id": aws.String("foo"), + "aws:foo:bar": aws.String("baz"), + } + for k, v := range ignoredTags { + if !tagIgnoredGeneric(k) { + t.Fatalf("Tag %v with value %v not ignored, but should be!", k, *v) + } + } +} diff --git a/builtin/providers/aws/tagsLambda.go b/builtin/providers/aws/tagsLambda.go new file mode 100644 index 000000000..28aa25121 --- /dev/null +++ b/builtin/providers/aws/tagsLambda.go @@ -0,0 +1,50 @@ +package aws + +import ( + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/lambda" + "github.com/hashicorp/terraform/helper/schema" +) + +// setTags is a helper to set the tags for a resource. 
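The ignore filter exercised by `TestIgnoringTagsGeneric` can be sketched stand-alone. This sketch compiles the patterns once up front instead of calling `regexp.MatchString` on every key, and assumes a single filter for the AWS-reserved `aws:` key namespace:

```go
package main

import (
	"fmt"
	"regexp"
)

// Patterns for tag keys a provider should never manage; `^aws:` covers
// the AWS-reserved namespace (assumed to be the only filter needed here).
var ignorePatterns = []*regexp.Regexp{regexp.MustCompile(`^aws:`)}

// tagIgnored reports whether a tag key matches any ignore pattern.
func tagIgnored(k string) bool {
	for _, re := range ignorePatterns {
		if re.MatchString(k) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(tagIgnored("aws:cloudformation:logical-id")) // reserved key
	fmt.Println(tagIgnored("Name"))                          // user-managed key
}
```

Filtering these keys in both `tagsFromMap` and `tagsToMap` keeps AWS-injected tags (e.g. from CloudFormation) out of the diff entirely, so Terraform never tries to delete them.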
It expects the +// tags field to be named "tags" +func setTagsLambda(conn *lambda.Lambda, d *schema.ResourceData, arn string) error { + if d.HasChange("tags") { + oraw, nraw := d.GetChange("tags") + o := oraw.(map[string]interface{}) + n := nraw.(map[string]interface{}) + create, remove := diffTagsGeneric(o, n) + + // Set tags + if len(remove) > 0 { + log.Printf("[DEBUG] Removing tags: %#v", remove) + keys := make([]*string, 0, len(remove)) + for k := range remove { + keys = append(keys, aws.String(k)) + } + + _, err := conn.UntagResource(&lambda.UntagResourceInput{ + Resource: aws.String(arn), + TagKeys: keys, + }) + if err != nil { + return err + } + } + if len(create) > 0 { + log.Printf("[DEBUG] Creating tags: %#v", create) + + _, err := conn.TagResource(&lambda.TagResourceInput{ + Resource: aws.String(arn), + Tags: create, + }) + if err != nil { + return err + } + } + } + + return nil +} diff --git a/builtin/providers/aws/validators.go b/builtin/providers/aws/validators.go index 9a7bf0e0a..a68253707 100644 --- a/builtin/providers/aws/validators.go +++ b/builtin/providers/aws/validators.go @@ -1218,3 +1218,86 @@ func validateAwsKmsName(v interface{}, k string) (ws []string, es []error) { } return } + +func validateCognitoIdentityPoolName(v interface{}, k string) (ws []string, errors []error) { + val := v.(string) + if !regexp.MustCompile("^[\\w _]+$").MatchString(val) { + errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters and spaces", k)) + } + + return +} + +func validateCognitoProviderDeveloperName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) > 100 { + errors = append(errors, fmt.Errorf("%q cannot be longer than 100 characters", k)) + } + + if !regexp.MustCompile("^[\\w._-]+$").MatchString(value) { + errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters, dots, underscores and hyphens", k)) + } + + return +} + +func validateCognitoSupportedLoginProviders(v 
interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) < 1 { + errors = append(errors, fmt.Errorf("%q cannot be less than 1 character", k)) + } + + if len(value) > 128 { + errors = append(errors, fmt.Errorf("%q cannot be longer than 128 characters", k)) + } + + if !regexp.MustCompile("^[\\w.;_/-]+$").MatchString(value) { + errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters, dots, semicolons, underscores, slashes and hyphens", k)) + } + + return +} + +func validateCognitoIdentityProvidersClientId(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) < 1 { + errors = append(errors, fmt.Errorf("%q cannot be less than 1 character", k)) + } + + if len(value) > 128 { + errors = append(errors, fmt.Errorf("%q cannot be longer than 128 characters", k)) + } + + if !regexp.MustCompile("^[\\w_]+$").MatchString(value) { + errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters and underscores", k)) + } + + return +} + +func validateCognitoIdentityProvidersProviderName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if len(value) < 1 { + errors = append(errors, fmt.Errorf("%q cannot be less than 1 character", k)) + } + + if len(value) > 128 { + errors = append(errors, fmt.Errorf("%q cannot be longer than 128 characters", k)) + } + + if !regexp.MustCompile("^[\\w._:/-]+$").MatchString(value) { + errors = append(errors, fmt.Errorf("%q must contain only alphanumeric characters, dots, underscores, colons, slashes and hyphens", k)) + } + + return +} + +func validateWafMetricName(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + if !regexp.MustCompile(`^[0-9A-Za-z]+$`).MatchString(value) { + errors = append(errors, fmt.Errorf( + "Only alphanumeric characters allowed in %q: %q", + k, value)) + } + return +} diff --git a/builtin/providers/aws/validators_test.go 
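The WAF metric-name validator above reduces to a single anchored regexp. A stand-alone sketch with a plain `error` return instead of the helper-schema `(ws []string, errors []error)` signature:

```go
package main

import (
	"fmt"
	"regexp"
)

// CloudWatch metric names attached to WAF rules and web ACLs must be
// purely alphanumeric; the anchored pattern rejects spaces and punctuation.
var wafMetricNameRe = regexp.MustCompile(`^[0-9A-Za-z]+$`)

func validateWafMetricName(v string) error {
	if !wafMetricNameRe.MatchString(v) {
		return fmt.Errorf("only alphanumeric characters allowed in %q", v)
	}
	return nil
}

func main() {
	fmt.Println(validateWafMetricName("testRule123")) // accepted
	fmt.Println(validateWafMetricName("white space")) // rejected
}
```

Anchoring with `^` and `$` is what makes strings like `/slash-at-the-beginning` fail: without the anchors the pattern would match any alphanumeric substring.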
b/builtin/providers/aws/validators_test.go index 4638f0ba0..b344f206d 100644 --- a/builtin/providers/aws/validators_test.go +++ b/builtin/providers/aws/validators_test.go @@ -2011,5 +2011,201 @@ func TestValidateAwsKmsName(t *testing.T) { t.Fatalf("AWS KMS Alias Name validation failed: %v", errors) } } - +} + +func TestValidateCognitoIdentityPoolName(t *testing.T) { + validValues := []string{ + "123", + "1 2 3", + "foo", + "foo bar", + "foo_bar", + "1foo 2bar 3", + } + + for _, s := range validValues { + _, errors := validateCognitoIdentityPoolName(s, "identity_pool_name") + if len(errors) > 0 { + t.Fatalf("%q should be a valid Cognito Identity Pool Name: %v", s, errors) + } + } + + invalidValues := []string{ + "1-2-3", + "foo!", + "foo-bar", + "foo-bar", + "foo1-bar2", + } + + for _, s := range invalidValues { + _, errors := validateCognitoIdentityPoolName(s, "identity_pool_name") + if len(errors) == 0 { + t.Fatalf("%q should not be a valid Cognito Identity Pool Name: %v", s, errors) + } + } +} + +func TestValidateCognitoProviderDeveloperName(t *testing.T) { + validValues := []string{ + "1", + "foo", + "1.2", + "foo1-bar2-baz3", + "foo_bar", + } + + for _, s := range validValues { + _, errors := validateCognitoProviderDeveloperName(s, "developer_provider_name") + if len(errors) > 0 { + t.Fatalf("%q should be a valid Cognito Provider Developer Name: %v", s, errors) + } + } + + invalidValues := []string{ + "foo!", + "foo:bar", + "foo/bar", + "foo;bar", + } + + for _, s := range invalidValues { + _, errors := validateCognitoProviderDeveloperName(s, "developer_provider_name") + if len(errors) == 0 { + t.Fatalf("%q should not be a valid Cognito Provider Developer Name: %v", s, errors) + } + } +} + +func TestValidateCognitoSupportedLoginProviders(t *testing.T) { + validValues := []string{ + "foo", + "7346241598935552", + "123456789012.apps.googleusercontent.com", + "foo_bar", + "foo;bar", + "foo/bar", + "foo-bar", + 
"xvz1evFS4wEEPTGEFPHBog;kAcSOqF21Fu85e7zjz7ZN2U4ZRhfV3WpwPAoE3Z7kBw", + strings.Repeat("W", 128), + } + + for _, s := range validValues { + _, errors := validateCognitoSupportedLoginProviders(s, "supported_login_providers") + if len(errors) > 0 { + t.Fatalf("%q should be a valid Cognito Supported Login Providers: %v", s, errors) + } + } + + invalidValues := []string{ + "", + strings.Repeat("W", 129), // > 128 + "foo:bar_baz", + "foobar,foobaz", + "foobar=foobaz", + } + + for _, s := range invalidValues { + _, errors := validateCognitoSupportedLoginProviders(s, "supported_login_providers") + if len(errors) == 0 { + t.Fatalf("%q should not be a valid Cognito Supported Login Providers: %v", s, errors) + } + } +} + +func TestValidateCognitoIdentityProvidersClientId(t *testing.T) { + validValues := []string{ + "7lhlkkfbfb4q5kpp90urffao", + "12345678", + "foo_123", + strings.Repeat("W", 128), + } + + for _, s := range validValues { + _, errors := validateCognitoIdentityProvidersClientId(s, "client_id") + if len(errors) > 0 { + t.Fatalf("%q should be a valid Cognito Identity Provider Client ID: %v", s, errors) + } + } + + invalidValues := []string{ + "", + strings.Repeat("W", 129), // > 128 + "foo-bar", + "foo:bar", + "foo;bar", + } + + for _, s := range invalidValues { + _, errors := validateCognitoIdentityProvidersClientId(s, "client_id") + if len(errors) == 0 { + t.Fatalf("%q should not be a valid Cognito Identity Provider Client ID: %v", s, errors) + } + } +} + +func TestValidateCognitoIdentityProvidersProviderName(t *testing.T) { + validValues := []string{ + "foo", + "7346241598935552", + "foo_bar", + "foo:bar", + "foo/bar", + "foo-bar", + "cognito-idp.us-east-1.amazonaws.com/us-east-1_Zr231apJu", + strings.Repeat("W", 128), + } + + for _, s := range validValues { + _, errors := validateCognitoIdentityProvidersProviderName(s, "provider_name") + if len(errors) > 0 { + t.Fatalf("%q should be a valid Cognito Identity Provider Name: %v", s, errors) + } + } + + 
invalidValues := []string{ + "", + strings.Repeat("W", 129), // > 128 + "foo;bar_baz", + "foobar,foobaz", + "foobar=foobaz", + } + + for _, s := range invalidValues { + _, errors := validateCognitoIdentityProvidersProviderName(s, "provider_name") + if len(errors) == 0 { + t.Fatalf("%q should not be a valid Cognito Identity Provider Name: %v", s, errors) + } + } +} + +func TestValidateWafMetricName(t *testing.T) { + validNames := []string{ + "testrule", + "testRule", + "testRule123", + } + for _, v := range validNames { + _, errors := validateWafMetricName(v, "name") + if len(errors) != 0 { + t.Fatalf("%q should be a valid WAF metric name: %q", v, errors) + } + } + + invalidNames := []string{ + "!", + "/", + " ", + ":", + ";", + "white space", + "/slash-at-the-beginning", + "slash-at-the-end/", + } + for _, v := range invalidNames { + _, errors := validateWafMetricName(v, "name") + if len(errors) == 0 { + t.Fatalf("%q should be an invalid WAF metric name", v) + } + } } diff --git a/builtin/providers/azurerm/resource_arm_redis_cache.go b/builtin/providers/azurerm/resource_arm_redis_cache.go index 57880af5a..9ff08a872 100644 --- a/builtin/providers/azurerm/resource_arm_redis_cache.go +++ b/builtin/providers/azurerm/resource_arm_redis_cache.go @@ -281,14 +281,17 @@ func resourceArmRedisCacheRead(d *schema.ResourceData, meta interface{}) error { name := id.Path["Redis"] resp, err := client.Get(resGroup, name) - if err != nil { - return fmt.Errorf("Error making Read request on Azure Redis Cache %s: %s", name, err) - } + + // covers if the resource has been deleted outside of TF, but is still in the state if resp.StatusCode == http.StatusNotFound { d.SetId("") return nil } + if err != nil { + return fmt.Errorf("Error making Read request on Azure Redis Cache %s: %s", name, err) + } + keysResp, err := client.ListKeys(resGroup, name) if err != nil { return fmt.Errorf("Error making ListKeys request on Azure Redis Cache %s: %s", name, err) diff --git 
a/builtin/providers/azurerm/resource_arm_virtual_machine.go b/builtin/providers/azurerm/resource_arm_virtual_machine.go index c3f736412..4a01ca374 100644 --- a/builtin/providers/azurerm/resource_arm_virtual_machine.go +++ b/builtin/providers/azurerm/resource_arm_virtual_machine.go @@ -177,8 +177,9 @@ func resourceArmVirtualMachine() *schema.Resource { }, "create_option": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + DiffSuppressFunc: ignoreCaseDiffSuppressFunc, }, "disk_size_gb": { @@ -232,8 +233,9 @@ func resourceArmVirtualMachine() *schema.Resource { }, "create_option": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + DiffSuppressFunc: ignoreCaseDiffSuppressFunc, }, "caching": { diff --git a/builtin/providers/circonus/resource_circonus_metric_cluster.go b/builtin/providers/circonus/resource_circonus_metric_cluster.go index f8776099b..77fde410a 100644 --- a/builtin/providers/circonus/resource_circonus_metric_cluster.go +++ b/builtin/providers/circonus/resource_circonus_metric_cluster.go @@ -6,7 +6,6 @@ import ( "strings" "github.com/circonus-labs/circonus-gometrics/api" - "github.com/circonus-labs/circonus-gometrics/api/config" "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/helper/hashcode" "github.com/hashicorp/terraform/helper/schema" @@ -85,9 +84,8 @@ func resourceMetricCluster() *schema.Resource { // Out parameters metricClusterIDAttr: &schema.Schema{ - Computed: true, - Type: schema.TypeString, - ValidateFunc: validateRegexp(metricClusterIDAttr, config.MetricClusterCIDRegex), + Computed: true, + Type: schema.TypeString, }, }), } diff --git a/builtin/providers/consul/data_source_consul_agent_self.go b/builtin/providers/consul/data_source_consul_agent_self.go index c49800bc7..17beaa626 100644 --- a/builtin/providers/consul/data_source_consul_agent_self.go +++ b/builtin/providers/consul/data_source_consul_agent_self.go @@ -181,9 +181,6 @@ func 
dataSourceConsulAgentSelf() *schema.Resource { agentSelfACLDisabledTTL: { Computed: true, Type: schema.TypeString, - ValidateFunc: makeValidationFunc(agentSelfACLDisabledTTL, validatorInputs{ - validateDurationMin("0ns"), - }), }, agentSelfACLDownPolicy: { Computed: true, @@ -196,9 +193,6 @@ func dataSourceConsulAgentSelf() *schema.Resource { agentSelfACLTTL: { Computed: true, Type: schema.TypeString, - ValidateFunc: makeValidationFunc(agentSelfACLTTL, validatorInputs{ - validateDurationMin("0ns"), - }), }, agentSelfAddresses: { Computed: true, @@ -275,23 +269,14 @@ func dataSourceConsulAgentSelf() *schema.Resource { agentSelfCheckDeregisterIntervalMin: { Computed: true, Type: schema.TypeString, - ValidateFunc: makeValidationFunc(agentSelfCheckDeregisterIntervalMin, validatorInputs{ - validateDurationMin("0ns"), - }), }, agentSelfCheckReapInterval: { Computed: true, Type: schema.TypeString, - ValidateFunc: makeValidationFunc(agentSelfCheckReapInterval, validatorInputs{ - validateDurationMin("0ns"), - }), }, agentSelfCheckUpdateInterval: { Computed: true, Type: schema.TypeString, - ValidateFunc: makeValidationFunc(agentSelfCheckUpdateInterval, validatorInputs{ - validateDurationMin("0ns"), - }), }, agentSelfClientAddr: { Computed: true, @@ -317,16 +302,10 @@ func dataSourceConsulAgentSelf() *schema.Resource { agentSelfDNSMaxStale: { Computed: true, Type: schema.TypeString, - ValidateFunc: makeValidationFunc(agentSelfDNSMaxStale, validatorInputs{ - validateDurationMin("0ns"), - }), }, agentSelfDNSNodeTTL: { Computed: true, Type: schema.TypeString, - ValidateFunc: makeValidationFunc(agentSelfDNSNodeTTL, validatorInputs{ - validateDurationMin("0ns"), - }), }, agentSelfDNSOnlyPassing: { Computed: true, @@ -335,16 +314,10 @@ func dataSourceConsulAgentSelf() *schema.Resource { agentSelfDNSRecursorTimeout: { Computed: true, Type: schema.TypeString, - ValidateFunc: makeValidationFunc(agentSelfDNSRecursorTimeout, validatorInputs{ - validateDurationMin("0ns"), - }), }, 
agentSelfDNSServiceTTL: { Computed: true, Type: schema.TypeString, - ValidateFunc: makeValidationFunc(agentSelfDNSServiceTTL, validatorInputs{ - validateDurationMin("0ns"), - }), }, agentSelfDNSUDPAnswerLimit: { Computed: true, @@ -406,9 +379,6 @@ func dataSourceConsulAgentSelf() *schema.Resource { agentSelfID: { Computed: true, Type: schema.TypeString, - ValidateFunc: makeValidationFunc(agentSelfID, validatorInputs{ - validateRegexp(`(?i)^[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12}$`), - }), }, agentSelfLeaveOnInt: { Computed: true, @@ -434,9 +404,6 @@ func dataSourceConsulAgentSelf() *schema.Resource { agentSelfPerformanceRaftMultiplier: { Computed: true, Type: schema.TypeString, // FIXME(sean@): should be schema.TypeInt - ValidateFunc: makeValidationFunc(agentSelfPerformanceRaftMultiplier, validatorInputs{ - validateIntMin(0), - }), }, }, }, @@ -453,58 +420,30 @@ func dataSourceConsulAgentSelf() *schema.Resource { agentSelfSchemaPortsDNS: { Computed: true, Type: schema.TypeInt, - ValidateFunc: makeValidationFunc(agentSelfSchemaPortsDNS, validatorInputs{ - validateIntMin(1), - validateIntMax(65535), - }), }, agentSelfSchemaPortsHTTP: { Computed: true, Type: schema.TypeInt, - ValidateFunc: makeValidationFunc(agentSelfSchemaPortsHTTP, validatorInputs{ - validateIntMin(1), - validateIntMax(65535), - }), }, agentSelfSchemaPortsHTTPS: { Computed: true, Type: schema.TypeInt, - ValidateFunc: makeValidationFunc(agentSelfSchemaPortsHTTPS, validatorInputs{ - validateIntMin(1), - validateIntMax(65535), - }), }, agentSelfSchemaPortsRPC: { Computed: true, Type: schema.TypeInt, - ValidateFunc: makeValidationFunc(agentSelfSchemaPortsRPC, validatorInputs{ - validateIntMin(1), - validateIntMax(65535), - }), }, agentSelfSchemaPortsSerfLAN: { Computed: true, Type: schema.TypeInt, - ValidateFunc: makeValidationFunc(agentSelfSchemaPortsSerfLAN, validatorInputs{ - validateIntMin(1), - validateIntMax(65535), - }), }, agentSelfSchemaPortsSerfWAN: { Computed: true, Type: 
schema.TypeInt, - ValidateFunc: makeValidationFunc(agentSelfSchemaPortsSerfWAN, validatorInputs{ - validateIntMin(1), - validateIntMax(65535), - }), }, agentSelfSchemaPortsServer: { Computed: true, Type: schema.TypeInt, - ValidateFunc: makeValidationFunc(agentSelfSchemaPortsServer, validatorInputs{ - validateIntMin(1), - validateIntMax(65535), - }), }, }, }, @@ -516,16 +455,10 @@ func dataSourceConsulAgentSelf() *schema.Resource { agentSelfReconnectTimeoutLAN: { Computed: true, Type: schema.TypeString, - ValidateFunc: makeValidationFunc(agentSelfReconnectTimeoutLAN, validatorInputs{ - validateDurationMin("0ns"), - }), }, agentSelfReconnectTimeoutWAN: { Computed: true, Type: schema.TypeString, - ValidateFunc: makeValidationFunc(agentSelfReconnectTimeoutWAN, validatorInputs{ - validateDurationMin("0ns"), - }), }, agentSelfRejoinAfterLeave: { Computed: true, @@ -612,9 +545,6 @@ func dataSourceConsulAgentSelf() *schema.Resource { agentSelfSessionTTLMin: { Computed: true, Type: schema.TypeString, - ValidateFunc: makeValidationFunc(agentSelfSessionTTLMin, validatorInputs{ - validateDurationMin("0ns"), - }), }, agentSelfStartJoin: { Computed: true, @@ -702,9 +632,6 @@ func dataSourceConsulAgentSelf() *schema.Resource { agentSelfTelemetryCirconusSubmissionInterval: &schema.Schema{ Type: schema.TypeString, Computed: true, - ValidateFunc: makeValidationFunc(agentSelfTelemetryCirconusSubmissionInterval, validatorInputs{ - validateDurationMin("0ns"), - }), }, agentSelfTelemetryEnableHostname: &schema.Schema{ Type: schema.TypeString, diff --git a/builtin/providers/consul/data_source_consul_catalog_nodes.go b/builtin/providers/consul/data_source_consul_catalog_nodes.go index 666d71653..b93da423a 100644 --- a/builtin/providers/consul/data_source_consul_catalog_nodes.go +++ b/builtin/providers/consul/data_source_consul_catalog_nodes.go @@ -56,14 +56,12 @@ func dataSourceConsulCatalogNodes() *schema.Resource { Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ 
catalogNodesNodeID: &schema.Schema{ - Type: schema.TypeString, - Computed: true, - ValidateFunc: makeValidationFunc(catalogNodesNodeID, []interface{}{validateRegexp(`^[\S]+$`)}), + Type: schema.TypeString, + Computed: true, }, catalogNodesNodeName: &schema.Schema{ - Type: schema.TypeString, - Computed: true, - ValidateFunc: makeValidationFunc(catalogNodesNodeName, []interface{}{validateRegexp(`^[\S]+$`)}), + Type: schema.TypeString, + Computed: true, }, catalogNodesNodeAddress: &schema.Schema{ Type: schema.TypeString, diff --git a/builtin/providers/digitalocean/datasource_digitaloceal_image.go b/builtin/providers/digitalocean/datasource_digitaloceal_image.go new file mode 100644 index 000000000..d4023daf8 --- /dev/null +++ b/builtin/providers/digitalocean/datasource_digitaloceal_image.go @@ -0,0 +1,93 @@ +package digitalocean + +import ( + "fmt" + "strconv" + + "github.com/digitalocean/godo" + "github.com/hashicorp/terraform/helper/schema" +) + +func dataSourceDigitalOceanImage() *schema.Resource { + return &schema.Resource{ + Read: dataSourceDigitalOceanImageRead, + Schema: map[string]*schema.Schema{ + + "name": &schema.Schema{ + Type: schema.TypeString, + Required: true, + Description: "name of the image", + }, + // computed attributes + "image": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: "slug or id of the image", + }, + "min_disk_size": &schema.Schema{ + Type: schema.TypeInt, + Computed: true, + Description: "minimum disk size required by the image", + }, + "private": &schema.Schema{ + Type: schema.TypeBool, + Computed: true, + Description: "Is the image private or non-private", + }, + "regions": &schema.Schema{ + Type: schema.TypeList, + Computed: true, + Description: "list of the regions that the image is available in", + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "type": &schema.Schema{ + Type: schema.TypeString, + Computed: true, + Description: "type of the image", + }, + }, + } +} + +func 
dataSourceDigitalOceanImageRead(d *schema.ResourceData, meta interface{}) error { + client := meta.(*godo.Client) + + opts := &godo.ListOptions{} + + images, _, err := client.Images.ListUser(opts) + if err != nil { + d.SetId("") + return err + } + image, err := findImageByName(images, d.Get("name").(string)) + + if err != nil { + return err + } + + d.SetId(image.Name) + d.Set("name", image.Name) + d.Set("image", strconv.Itoa(image.ID)) + d.Set("min_disk_size", image.MinDiskSize) + d.Set("private", !image.Public) + d.Set("regions", image.Regions) + d.Set("type", image.Type) + + return nil +} + +func findImageByName(images []godo.Image, name string) (*godo.Image, error) { + results := make([]godo.Image, 0) + for _, v := range images { + if v.Name == name { + results = append(results, v) + } + } + if len(results) == 1 { + return &results[0], nil + } + if len(results) == 0 { + return nil, fmt.Errorf("no user image found with name %s", name) + } + return nil, fmt.Errorf("too many user images found with name %s (found %d, expected 1)", name, len(results)) +} diff --git a/builtin/providers/digitalocean/datasource_digitaloceal_image_test.go b/builtin/providers/digitalocean/datasource_digitaloceal_image_test.go new file mode 100644 index 000000000..ab77c75ae --- /dev/null +++ b/builtin/providers/digitalocean/datasource_digitaloceal_image_test.go @@ -0,0 +1,122 @@ +package digitalocean + +import ( + "fmt" + "log" + "regexp" + "testing" + + "github.com/digitalocean/godo" + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" +) + +func TestAccDigitalOceanImage_Basic(t *testing.T) { + var droplet godo.Droplet + var snapshotsId []int + rInt := acctest.RandInt() + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDigitalOceanDropletDestroy, + Steps: []resource.TestStep{ + { + Config: 
testAccCheckDigitalOceanDropletConfig_basic(rInt), + Check: resource.ComposeTestCheckFunc( + testAccCheckDigitalOceanDropletExists("digitalocean_droplet.foobar", &droplet), + takeSnapshotsOfDroplet(rInt, &droplet, &snapshotsId), + ), + }, + { + Config: testAccCheckDigitalOceanImageConfig_basic(rInt, 1), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr( + "data.digitalocean_image.foobar", "name", fmt.Sprintf("snap-%d-1", rInt)), + resource.TestCheckResourceAttr( + "data.digitalocean_image.foobar", "min_disk_size", "20"), + resource.TestCheckResourceAttr( + "data.digitalocean_image.foobar", "private", "true"), + resource.TestCheckResourceAttr( + "data.digitalocean_image.foobar", "type", "snapshot"), + ), + }, + { + Config: testAccCheckDigitalOceanImageConfig_basic(rInt, 0), + ExpectError: regexp.MustCompile(`.*too many user images found with name snap-.*\ .found 2, expected 1.`), + }, + { + Config: testAccCheckDigitalOceanImageConfig_nonexisting(rInt), + Destroy: false, + ExpectError: regexp.MustCompile(`.*no user image found with name snap-.*-nonexisting`), + }, + { + Config: " ", + Check: resource.ComposeTestCheckFunc( + deleteSnapshots(&snapshotsId), + ), + }, + }, + }) +} + +func takeSnapshotsOfDroplet(rInt int, droplet *godo.Droplet, snapshotsId *[]int) resource.TestCheckFunc { + return func(s *terraform.State) error { + client := testAccProvider.Meta().(*godo.Client) + for i := 0; i < 3; i++ { + err := takeSnapshotOfDroplet(rInt, i%2, droplet) + if err != nil { + return err + } + } + retrieveDroplet, _, err := client.Droplets.Get((*droplet).ID) + if err != nil { + return err + } + *snapshotsId = retrieveDroplet.SnapshotIDs + return nil + } +} + +func takeSnapshotOfDroplet(rInt, sInt int, droplet *godo.Droplet) error { + client := testAccProvider.Meta().(*godo.Client) + action, _, err := client.DropletActions.Snapshot((*droplet).ID, fmt.Sprintf("snap-%d-%d", rInt, sInt)) + if err != nil { + return err + } + waitForAction(client, action) + 
return nil +} + +func deleteSnapshots(snapshotsId *[]int) resource.TestCheckFunc { + return func(s *terraform.State) error { + log.Printf("XXX Deleting snaps") + client := testAccProvider.Meta().(*godo.Client) + snapshots := *snapshotsId + for _, value := range snapshots { + log.Printf("XXX Deleting %d", value) + _, err := client.Images.Delete(value) + if err != nil { + return err + } + } + return nil + } +} + +func testAccCheckDigitalOceanImageConfig_basic(rInt, sInt int) string { + return fmt.Sprintf(` +data "digitalocean_image" "foobar" { + name = "snap-%d-%d" +} +`, rInt, sInt) +} + +func testAccCheckDigitalOceanImageConfig_nonexisting(rInt int) string { + return fmt.Sprintf(` +data "digitalocean_image" "foobar" { + name = "snap-%d-nonexisting" +} +`, rInt) +} diff --git a/builtin/providers/digitalocean/provider.go b/builtin/providers/digitalocean/provider.go index 5ab2cab43..e885e0823 100644 --- a/builtin/providers/digitalocean/provider.go +++ b/builtin/providers/digitalocean/provider.go @@ -17,6 +17,10 @@ func Provider() terraform.ResourceProvider { }, }, + DataSourcesMap: map[string]*schema.Resource{ + "digitalocean_image": dataSourceDigitalOceanImage(), + }, + ResourcesMap: map[string]*schema.Resource{ "digitalocean_domain": resourceDigitalOceanDomain(), "digitalocean_droplet": resourceDigitalOceanDroplet(), diff --git a/builtin/providers/digitalocean/resource_digitalocean_droplet_test.go b/builtin/providers/digitalocean/resource_digitalocean_droplet_test.go index be3bd10b7..3f813c953 100644 --- a/builtin/providers/digitalocean/resource_digitalocean_droplet_test.go +++ b/builtin/providers/digitalocean/resource_digitalocean_droplet_test.go @@ -30,6 +30,10 @@ func TestAccDigitalOceanDroplet_Basic(t *testing.T) { "digitalocean_droplet.foobar", "name", fmt.Sprintf("foo-%d", rInt)), resource.TestCheckResourceAttr( "digitalocean_droplet.foobar", "size", "512mb"), + resource.TestCheckResourceAttr( + "digitalocean_droplet.foobar", "price_hourly", "0.00744"), + 
resource.TestCheckResourceAttr( + "digitalocean_droplet.foobar", "price_monthly", "5"), resource.TestCheckResourceAttr( "digitalocean_droplet.foobar", "image", "centos-7-x64"), resource.TestCheckResourceAttr( @@ -37,6 +41,11 @@ func TestAccDigitalOceanDroplet_Basic(t *testing.T) { resource.TestCheckResourceAttr( "digitalocean_droplet.foobar", "user_data", "foobar"), ), + Destroy: false, + }, + { + Config: testAccCheckDigitalOceanDropletConfig_basic(rInt), + PlanOnly: true, }, }, }) diff --git a/builtin/providers/fastly/resource_fastly_service_v1.go b/builtin/providers/fastly/resource_fastly_service_v1.go index 196948141..89d9218ad 100644 --- a/builtin/providers/fastly/resource_fastly_service_v1.go +++ b/builtin/providers/fastly/resource_fastly_service_v1.go @@ -647,6 +647,72 @@ func resourceServiceV1() *schema.Resource { }, }, + "gcslogging": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + // Required fields + "name": { + Type: schema.TypeString, + Required: true, + Description: "Unique name to refer to this logging setup", + }, + "email": { + Type: schema.TypeString, + Required: true, + Description: "The email address associated with the target GCS bucket on your account.", + }, + "bucket_name": { + Type: schema.TypeString, + Required: true, + Description: "The name of the bucket in which to store the logs.", + }, + "secret_key": { + Type: schema.TypeString, + Required: true, + Description: "The secret key associated with the target gcs bucket on your account.", + }, + // Optional fields + "path": { + Type: schema.TypeString, + Optional: true, + Description: "Path to store the files. 
Must end with a trailing slash", + }, + "gzip_level": { + Type: schema.TypeInt, + Optional: true, + Default: 0, + Description: "Gzip Compression level", + }, + "period": { + Type: schema.TypeInt, + Optional: true, + Default: 3600, + Description: "How frequently the logs should be transferred, in seconds (Default 3600)", + }, + "format": { + Type: schema.TypeString, + Optional: true, + Default: "%h %l %u %t %r %>s", + Description: "Apache-style string or VCL variables to use for log formatting", + }, + "timestamp_format": { + Type: schema.TypeString, + Optional: true, + Default: "%Y-%m-%dT%H:%M:%S.000", + Description: "specified timestamp formatting (default `%Y-%m-%dT%H:%M:%S.000`)", + }, + "response_condition": { + Type: schema.TypeString, + Optional: true, + Default: "", + Description: "Name of a condition to apply this logging.", + }, + }, + }, + }, + "response_object": { Type: schema.TypeSet, Optional: true, @@ -1450,6 +1516,59 @@ func resourceServiceV1Update(d *schema.ResourceData, meta interface{}) error { } } + // find difference in gcslogging + if d.HasChange("gcslogging") { + os, ns := d.GetChange("gcslogging") + if os == nil { + os = new(schema.Set) + } + if ns == nil { + ns = new(schema.Set) + } + + oss := os.(*schema.Set) + nss := ns.(*schema.Set) + removeGcslogging := oss.Difference(nss).List() + addGcslogging := nss.Difference(oss).List() + + // DELETE old gcslogging configurations + for _, pRaw := range removeGcslogging { + sf := pRaw.(map[string]interface{}) + opts := gofastly.DeleteGCSInput{ + Service: d.Id(), + Version: latestVersion, + Name: sf["name"].(string), + } + + log.Printf("[DEBUG] Fastly gcslogging removal opts: %#v", opts) + err := conn.DeleteGCS(&opts) + if err != nil { + return err + } + } + + // POST new/updated gcslogging + for _, pRaw := range addGcslogging { + sf := pRaw.(map[string]interface{}) + opts := gofastly.CreateGCSInput{ + Service: d.Id(), + Version: latestVersion, + Name: sf["name"].(string), + User: 
sf["email"].(string), + Bucket: sf["bucket_name"].(string), + SecretKey: sf["secret_key"].(string), + Format: sf["format"].(string), + ResponseCondition: sf["response_condition"].(string), + } + + log.Printf("[DEBUG] Create GCS Opts: %#v", opts) + _, err := conn.CreateGCS(&opts) + if err != nil { + return err + } + } + } + // find difference in Response Object if d.HasChange("response_object") { or, nr := d.GetChange("response_object") @@ -1883,6 +2002,22 @@ func resourceServiceV1Read(d *schema.ResourceData, meta interface{}) error { log.Printf("[WARN] Error setting Sumologic for (%s): %s", d.Id(), err) } + // refresh GCS Logging + log.Printf("[DEBUG] Refreshing GCS for (%s)", d.Id()) + GCSList, err := conn.ListGCSs(&gofastly.ListGCSsInput{ + Service: d.Id(), + Version: s.ActiveVersion.Number, + }) + + if err != nil { + return fmt.Errorf("[ERR] Error looking up GCS for (%s), version (%s): %s", d.Id(), s.ActiveVersion.Number, err) + } + + gcsl := flattenGCS(GCSList) + if err := d.Set("gcslogging", gcsl); err != nil { + log.Printf("[WARN] Error setting gcslogging for (%s): %s", d.Id(), err) + } + // refresh Response Objects log.Printf("[DEBUG] Refreshing Response Object for (%s)", d.Id()) responseObjectList, err := conn.ListResponseObjects(&gofastly.ListResponseObjectsInput{ @@ -2350,6 +2485,35 @@ func flattenSumologics(sumologicList []*gofastly.Sumologic) []map[string]interfa return l } +func flattenGCS(gcsList []*gofastly.GCS) []map[string]interface{} { + var GCSList []map[string]interface{} + for _, currentGCS := range gcsList { + // Convert gcs to a map for saving to state. 
+ GCSMapString := map[string]interface{}{ + "name": currentGCS.Name, + "email": currentGCS.User, + "bucket_name": currentGCS.Bucket, + "secret_key": currentGCS.SecretKey, + "path": currentGCS.Path, + "period": int(currentGCS.Period), + "gzip_level": int(currentGCS.GzipLevel), + "response_condition": currentGCS.ResponseCondition, + "format": currentGCS.Format, + } + + // prune any empty values that come from the default string value in structs + for k, v := range GCSMapString { + if v == "" { + delete(GCSMapString, k) + } + } + + GCSList = append(GCSList, GCSMapString) + } + + return GCSList +} + func flattenResponseObjects(responseObjectList []*gofastly.ResponseObject) []map[string]interface{} { var rol []map[string]interface{} for _, ro := range responseObjectList { diff --git a/builtin/providers/fastly/resource_fastly_service_v1_gcslogging_test.go b/builtin/providers/fastly/resource_fastly_service_v1_gcslogging_test.go new file mode 100644 index 000000000..ed777bf02 --- /dev/null +++ b/builtin/providers/fastly/resource_fastly_service_v1_gcslogging_test.go @@ -0,0 +1,131 @@ +package fastly + +import ( + "fmt" + "reflect" + "testing" + + "github.com/hashicorp/terraform/helper/acctest" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + gofastly "github.com/sethvargo/go-fastly" +) + +func TestResourceFastlyFlattenGCS(t *testing.T) { + cases := []struct { + remote []*gofastly.GCS + local []map[string]interface{} + }{ + { + remote: []*gofastly.GCS{ + &gofastly.GCS{ + Name: "GCS collector", + User: "email@example.com", + Bucket: "bucketName", + SecretKey: "secretKey", + Format: "log format", + Period: 3600, + GzipLevel: 0, + }, + }, + local: []map[string]interface{}{ + map[string]interface{}{ + "name": "GCS collector", + "email": "email@example.com", + "bucket_name": "bucketName", + "secret_key": "secretKey", + "format": "log format", + "period": 3600, + "gzip_level": 0, + }, + }, + }, + } + + for _, c := range cases { + 
out := flattenGCS(c.remote) + if !reflect.DeepEqual(out, c.local) { + t.Fatalf("Error matching:\nexpected: %#v\ngot: %#v", c.local, out) + } + } +} + +func TestAccFastlyServiceV1_gcslogging(t *testing.T) { + var service gofastly.ServiceDetail + name := fmt.Sprintf("tf-test-%s", acctest.RandString(10)) + gcsName := fmt.Sprintf("gcs %s", acctest.RandString(10)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckServiceV1Destroy, + Steps: []resource.TestStep{ + { + Config: testAccServiceV1Config_gcs(name, gcsName), + Check: resource.ComposeTestCheckFunc( + testAccCheckServiceV1Exists("fastly_service_v1.foo", &service), + testAccCheckFastlyServiceV1Attributes_gcs(&service, name, gcsName), + ), + }, + }, + }) +} + +func testAccCheckFastlyServiceV1Attributes_gcs(service *gofastly.ServiceDetail, name, gcsName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + + if service.Name != name { + return fmt.Errorf("Bad name, expected (%s), got (%s)", name, service.Name) + } + + conn := testAccProvider.Meta().(*FastlyClient).conn + gcsList, err := conn.ListGCSs(&gofastly.ListGCSsInput{ + Service: service.ID, + Version: service.ActiveVersion.Number, + }) + + if err != nil { + return fmt.Errorf("[ERR] Error looking up GCSs for (%s), version (%s): %s", service.Name, service.ActiveVersion.Number, err) + } + + if len(gcsList) != 1 { + return fmt.Errorf("GCS missing, expected: 1, got: %d", len(gcsList)) + } + + if gcsList[0].Name != gcsName { + return fmt.Errorf("GCS name mismatch, expected: %s, got: %#v", gcsName, gcsList[0].Name) + } + + return nil + } +} + +func testAccServiceV1Config_gcs(name, gcsName string) string { + backendName := fmt.Sprintf("%s.aws.amazon.com", acctest.RandString(3)) + + return fmt.Sprintf(` +resource "fastly_service_v1" "foo" { + name = "%s" + + domain { + name = "test.notadomain.com" + comment = "tf-testing-domain" + } + + backend { + address 
= "%s" + name = "tf -test backend" + } + + gcslogging { + name = "%s" + email = "email@example.com", + bucket_name = "bucketName", + secret_key = "secretKey", + format = "log format", + response_condition = "", + } + + force_destroy = true +}`, name, backendName, gcsName) +} diff --git a/builtin/providers/github/resource_github_branch_protection.go b/builtin/providers/github/resource_github_branch_protection.go index bb9f5ba2d..203283576 100644 --- a/builtin/providers/github/resource_github_branch_protection.go +++ b/builtin/providers/github/resource_github_branch_protection.go @@ -3,6 +3,7 @@ package github import ( "context" "errors" + "net/http" "github.com/google/go-github/github" "github.com/hashicorp/terraform/helper/schema" @@ -117,8 +118,12 @@ func resourceGithubBranchProtectionRead(d *schema.ResourceData, meta interface{} githubProtection, _, err := client.Repositories.GetBranchProtection(context.TODO(), meta.(*Organization).name, r, b) if err != nil { - d.SetId("") - return nil + if err, ok := err.(*github.ErrorResponse); ok && err.Response.StatusCode == http.StatusNotFound { + d.SetId("") + return nil + } + + return err } d.Set("repository", r) diff --git a/builtin/providers/google/import_compute_network_test.go b/builtin/providers/google/import_compute_network_test.go new file mode 100644 index 000000000..8e6ab769b --- /dev/null +++ b/builtin/providers/google/import_compute_network_test.go @@ -0,0 +1,65 @@ +package google + +import ( + "testing" + + "github.com/hashicorp/terraform/helper/resource" +) + +func TestAccComputeNetwork_importBasic(t *testing.T) { + resourceName := "google_compute_network.foobar" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeNetworkDestroy, + Steps: []resource.TestStep{ + { + Config: testAccComputeNetwork_basic, + }, { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + //ImportStateVerifyIgnore: 
[]string{"ipv4_range", "name"}, + }, + }, + }) +} + +func TestAccComputeNetwork_importAuto_subnet(t *testing.T) { + resourceName := "google_compute_network.bar" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeNetworkDestroy, + Steps: []resource.TestStep{ + { + Config: testAccComputeNetwork_auto_subnet, + }, { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccComputeNetwork_importCustom_subnet(t *testing.T) { + resourceName := "google_compute_network.baz" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckComputeNetworkDestroy, + Steps: []resource.TestStep{ + { + Config: testAccComputeNetwork_custom_subnet, + }, { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} diff --git a/builtin/providers/google/resource_compute_forwarding_rule.go b/builtin/providers/google/resource_compute_forwarding_rule.go index b4bd4a779..99684560d 100644 --- a/builtin/providers/google/resource_compute_forwarding_rule.go +++ b/builtin/providers/google/resource_compute_forwarding_rule.go @@ -88,6 +88,7 @@ func resourceComputeForwardingRule() *schema.Resource { Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, + ForceNew: true, Set: schema.HashString, }, diff --git a/builtin/providers/google/resource_compute_network.go b/builtin/providers/google/resource_compute_network.go index 3356edcc8..ccd75ae08 100644 --- a/builtin/providers/google/resource_compute_network.go +++ b/builtin/providers/google/resource_compute_network.go @@ -14,6 +14,9 @@ func resourceComputeNetwork() *schema.Resource { Create: resourceComputeNetworkCreate, Read: resourceComputeNetworkRead, Delete: resourceComputeNetworkDelete, + Importer: &schema.ResourceImporter{ + State: 
schema.ImportStatePassthrough, + }, Schema: map[string]*schema.Schema{ "name": &schema.Schema{ @@ -142,6 +145,9 @@ func resourceComputeNetworkRead(d *schema.ResourceData, meta interface{}) error d.Set("gateway_ipv4", network.GatewayIPv4) d.Set("self_link", network.SelfLink) + d.Set("ipv4_range", network.IPv4Range) + d.Set("name", network.Name) + d.Set("auto_create_subnetworks", network.AutoCreateSubnetworks) return nil } diff --git a/builtin/providers/heroku/resource_heroku_app.go b/builtin/providers/heroku/resource_heroku_app.go index 18b0dc668..93efa6ada 100644 --- a/builtin/providers/heroku/resource_heroku_app.go +++ b/builtin/providers/heroku/resource_heroku_app.go @@ -31,6 +31,7 @@ type application struct { App *herokuApplication // The heroku application Client *heroku.Service // Client to interact with the heroku API Vars map[string]string // The vars on the application + Buildpacks []string // The application's buildpack names or URLs Organization bool // is the application organization app } @@ -75,6 +76,11 @@ func (a *application) Update() error { } } + a.Buildpacks, err = retrieveBuildpacks(a.Id, a.Client) + if err != nil { + errs = append(errs, err) + } + a.Vars, err = retrieveConfigVars(a.Id, a.Client) if err != nil { errs = append(errs, err) @@ -119,6 +125,14 @@ func resourceHerokuApp() *schema.Resource { ForceNew: true, }, + "buildpacks": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "config_vars": { Type: schema.TypeList, Optional: true, @@ -225,6 +239,10 @@ func resourceHerokuAppCreate(d *schema.ResourceData, meta interface{}) error { } } + if v, ok := d.GetOk("buildpacks"); ok { + if err := updateBuildpacks(d.Id(), client, v.([]interface{})); err != nil { return err } + } + return resourceHerokuAppRead(d, meta) } @@ -308,6 +326,9 @@ func resourceHerokuAppRead(d *schema.ResourceData, meta interface{}) error { } } + // Only track buildpacks when set in the configuration. 
+ _, buildpacksConfigured := d.GetOk("buildpacks") + organizationApp := isOrganizationApp(d) // Only set the config_vars that we have set in the configuration. @@ -332,6 +353,9 @@ func resourceHerokuAppRead(d *schema.ResourceData, meta interface{}) error { d.Set("region", app.App.Region) d.Set("git_url", app.App.GitURL) d.Set("web_url", app.App.WebURL) + if buildpacksConfigured { + d.Set("buildpacks", app.Buildpacks) + } d.Set("config_vars", configVarsValue) d.Set("all_config_vars", app.Vars) if organizationApp { @@ -391,6 +415,13 @@ func resourceHerokuAppUpdate(d *schema.ResourceData, meta interface{}) error { } } + if d.HasChange("buildpacks") { + err := updateBuildpacks(d.Id(), client, d.Get("buildpacks").([]interface{})) + if err != nil { + return err + } + } + return resourceHerokuAppRead(d, meta) } @@ -419,6 +450,21 @@ func resourceHerokuAppRetrieve(id string, organization bool, client *heroku.Serv return &app, nil } +func retrieveBuildpacks(id string, client *heroku.Service) ([]string, error) { + results, err := client.BuildpackInstallationList(context.TODO(), id, nil) + + if err != nil { + return nil, err + } + + buildpacks := []string{} + for _, installation := range results { + buildpacks = append(buildpacks, installation.Buildpack.Name) + } + + return buildpacks, nil +} + func retrieveConfigVars(id string, client *heroku.Service) (map[string]string, error) { vars, err := client.ConfigVarInfoForApp(context.TODO(), id) @@ -467,3 +513,24 @@ func updateConfigVars( return nil } + +func updateBuildpacks(id string, client *heroku.Service, v []interface{}) error { + opts := heroku.BuildpackInstallationUpdateOpts{ + Updates: []struct { + Buildpack string `json:"buildpack" url:"buildpack,key"` + }{}} + + for _, buildpack := range v { + opts.Updates = append(opts.Updates, struct { + Buildpack string `json:"buildpack" url:"buildpack,key"` + }{ + Buildpack: buildpack.(string), + }) + } + + if _, err := client.BuildpackInstallationUpdate(context.TODO(), id, opts); err 
!= nil { + return fmt.Errorf("Error updating buildpacks: %s", err) + } + + return nil +} diff --git a/builtin/providers/heroku/resource_heroku_app_test.go b/builtin/providers/heroku/resource_heroku_app_test.go index e807cd83f..5888271af 100644 --- a/builtin/providers/heroku/resource_heroku_app_test.go +++ b/builtin/providers/heroku/resource_heroku_app_test.go @@ -109,6 +109,75 @@ func TestAccHerokuApp_NukeVars(t *testing.T) { }) } +func TestAccHerokuApp_Buildpacks(t *testing.T) { + var app heroku.AppInfoResult + appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckHerokuAppDestroy, + Steps: []resource.TestStep{ + { + Config: testAccCheckHerokuAppConfig_go(appName), + Check: resource.ComposeTestCheckFunc( + testAccCheckHerokuAppExists("heroku_app.foobar", &app), + testAccCheckHerokuAppBuildpacks(appName, false), + resource.TestCheckResourceAttr("heroku_app.foobar", "buildpacks.0", "heroku/go"), + ), + }, + { + Config: testAccCheckHerokuAppConfig_multi(appName), + Check: resource.ComposeTestCheckFunc( + testAccCheckHerokuAppExists("heroku_app.foobar", &app), + testAccCheckHerokuAppBuildpacks(appName, true), + resource.TestCheckResourceAttr( + "heroku_app.foobar", "buildpacks.0", "https://github.com/heroku/heroku-buildpack-multi-procfile"), + resource.TestCheckResourceAttr("heroku_app.foobar", "buildpacks.1", "heroku/go"), + ), + }, + { + Config: testAccCheckHerokuAppConfig_no_vars(appName), + Check: resource.ComposeTestCheckFunc( + testAccCheckHerokuAppExists("heroku_app.foobar", &app), + testAccCheckHerokuAppNoBuildpacks(appName), + resource.TestCheckNoResourceAttr("heroku_app.foobar", "buildpacks.0"), + ), + }, + }, + }) +} + +func TestAccHerokuApp_ExternallySetBuildpacks(t *testing.T) { + var app heroku.AppInfoResult + appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) + + resource.Test(t, resource.TestCase{ 
+ PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccCheckHerokuAppDestroy, + Steps: []resource.TestStep{ + { + Config: testAccCheckHerokuAppConfig_no_vars(appName), + Check: resource.ComposeTestCheckFunc( + testAccCheckHerokuAppExists("heroku_app.foobar", &app), + testAccCheckHerokuAppNoBuildpacks(appName), + resource.TestCheckNoResourceAttr("heroku_app.foobar", "buildpacks.0"), + ), + }, + { + PreConfig: testAccInstallUnconfiguredBuildpack(t, appName), + Config: testAccCheckHerokuAppConfig_no_vars(appName), + Check: resource.ComposeTestCheckFunc( + testAccCheckHerokuAppExists("heroku_app.foobar", &app), + testAccCheckHerokuAppBuildpacks(appName, false), + resource.TestCheckNoResourceAttr("heroku_app.foobar", "buildpacks.0"), + ), + }, + }, + }) +} + func TestAccHerokuApp_Organization(t *testing.T) { var app heroku.OrganizationApp appName := fmt.Sprintf("tftest-%s", acctest.RandString(10)) @@ -260,6 +329,59 @@ func testAccCheckHerokuAppAttributesNoVars(app *heroku.AppInfoResult, appName st } } +func testAccCheckHerokuAppBuildpacks(appName string, multi bool) resource.TestCheckFunc { + return func(s *terraform.State) error { + client := testAccProvider.Meta().(*heroku.Service) + + results, err := client.BuildpackInstallationList(context.TODO(), appName, nil) + if err != nil { + return err + } + + buildpacks := []string{} + for _, installation := range results { + buildpacks = append(buildpacks, installation.Buildpack.Name) + } + + if multi { + herokuMulti := "https://github.com/heroku/heroku-buildpack-multi-procfile" + if len(buildpacks) != 2 || buildpacks[0] != herokuMulti || buildpacks[1] != "heroku/go" { + return fmt.Errorf("Bad buildpacks: %v", buildpacks) + } + + return nil + } + + if len(buildpacks) != 1 || buildpacks[0] != "heroku/go" { + return fmt.Errorf("Bad buildpacks: %v", buildpacks) + } + + return nil + } +} + +func testAccCheckHerokuAppNoBuildpacks(appName string) resource.TestCheckFunc { + return func(s 
*terraform.State) error { + client := testAccProvider.Meta().(*heroku.Service) + + results, err := client.BuildpackInstallationList(context.TODO(), appName, nil) + if err != nil { + return err + } + + buildpacks := []string{} + for _, installation := range results { + buildpacks = append(buildpacks, installation.Buildpack.Name) + } + + if len(buildpacks) != 0 { + return fmt.Errorf("Bad buildpacks: %v", buildpacks) + } + + return nil + } +} + func testAccCheckHerokuAppAttributesOrg(app *heroku.OrganizationApp, appName, space, org string) resource.TestCheckFunc { return func(s *terraform.State) error { client := testAccProvider.Meta().(*heroku.Service) @@ -362,6 +484,25 @@ func testAccCheckHerokuAppExistsOrg(n string, app *heroku.OrganizationApp) resou } } +func testAccInstallUnconfiguredBuildpack(t *testing.T, appName string) func() { + return func() { + client := testAccProvider.Meta().(*heroku.Service) + + opts := heroku.BuildpackInstallationUpdateOpts{ + Updates: []struct { + Buildpack string `json:"buildpack" url:"buildpack,key"` + }{ + {Buildpack: "heroku/go"}, + }, + } + + _, err := client.BuildpackInstallationUpdate(context.TODO(), appName, opts) + if err != nil { + t.Fatalf("Error updating buildpacks: %s", err) + } + } +} + func testAccCheckHerokuAppConfig_basic(appName string) string { return fmt.Sprintf(` resource "heroku_app" "foobar" { @@ -374,6 +515,29 @@ resource "heroku_app" "foobar" { }`, appName) } +func testAccCheckHerokuAppConfig_go(appName string) string { + return fmt.Sprintf(` +resource "heroku_app" "foobar" { + name = "%s" + region = "us" + + buildpacks = ["heroku/go"] +}`, appName) +} + +func testAccCheckHerokuAppConfig_multi(appName string) string { + return fmt.Sprintf(` +resource "heroku_app" "foobar" { + name = "%s" + region = "us" + + buildpacks = [ + "https://github.com/heroku/heroku-buildpack-multi-procfile", + "heroku/go" + ] +}`, appName) +} + func testAccCheckHerokuAppConfig_updated(appName string) string { return fmt.Sprintf(` 
resource "heroku_app" "foobar" { diff --git a/builtin/providers/ns1/resource_record_test.go b/builtin/providers/ns1/resource_record_test.go index 36095b579..ec5075303 100644 --- a/builtin/providers/ns1/resource_record_test.go +++ b/builtin/providers/ns1/resource_record_test.go @@ -29,7 +29,7 @@ func TestAccRecord_basic(t *testing.T) { testAccCheckRecordUseClientSubnet(&record, true), testAccCheckRecordRegionName(&record, []string{"cal"}), // testAccCheckRecordAnswerMetaWeight(&record, 10), - testAccCheckRecordAnswerRdata(&record, "test1.terraform-record-test.io"), + testAccCheckRecordAnswerRdata(&record, 0, "test1.terraform-record-test.io"), ), }, }, @@ -52,7 +52,7 @@ func TestAccRecord_updated(t *testing.T) { testAccCheckRecordUseClientSubnet(&record, true), testAccCheckRecordRegionName(&record, []string{"cal"}), // testAccCheckRecordAnswerMetaWeight(&record, 10), - testAccCheckRecordAnswerRdata(&record, "test1.terraform-record-test.io"), + testAccCheckRecordAnswerRdata(&record, 0, "test1.terraform-record-test.io"), ), }, resource.TestStep{ @@ -64,7 +64,7 @@ func TestAccRecord_updated(t *testing.T) { testAccCheckRecordUseClientSubnet(&record, false), testAccCheckRecordRegionName(&record, []string{"ny", "wa"}), // testAccCheckRecordAnswerMetaWeight(&record, 5), - testAccCheckRecordAnswerRdata(&record, "test2.terraform-record-test.io"), + testAccCheckRecordAnswerRdata(&record, 0, "test2.terraform-record-test.io"), ), }, }, @@ -85,7 +85,31 @@ func TestAccRecord_SPF(t *testing.T) { testAccCheckRecordDomain(&record, "terraform-record-test.io"), testAccCheckRecordTTL(&record, 86400), testAccCheckRecordUseClientSubnet(&record, true), - testAccCheckRecordAnswerRdata(&record, "v=DKIM1; k=rsa; p=XXXXXXXX"), + testAccCheckRecordAnswerRdata(&record, 0, "v=DKIM1; k=rsa; p=XXXXXXXX"), + ), + }, + }, + }) +} + +func TestAccRecord_SRV(t *testing.T) { + var record dns.Record + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: 
testAccProviders, + CheckDestroy: testAccCheckRecordDestroy, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccRecordSRV, + Check: resource.ComposeTestCheckFunc( + testAccCheckRecordExists("ns1_record.srv", &record), + testAccCheckRecordDomain(&record, "_some-server._tcp.terraform-record-test.io"), + testAccCheckRecordTTL(&record, 86400), + testAccCheckRecordUseClientSubnet(&record, true), + testAccCheckRecordAnswerRdata(&record, 0, "10"), + testAccCheckRecordAnswerRdata(&record, 1, "0"), + testAccCheckRecordAnswerRdata(&record, 2, "2380"), + testAccCheckRecordAnswerRdata(&record, 3, "node-1.terraform-record-test.io"), ), }, }, @@ -206,12 +230,12 @@ func testAccCheckRecordAnswerMetaWeight(r *dns.Record, expected float64) resourc } } -func testAccCheckRecordAnswerRdata(r *dns.Record, expected string) resource.TestCheckFunc { +func testAccCheckRecordAnswerRdata(r *dns.Record, idx int, expected string) resource.TestCheckFunc { return func(s *terraform.State) error { recordAnswer := r.Answers[0] - recordAnswerString := recordAnswer.Rdata[0] + recordAnswerString := recordAnswer.Rdata[idx] if recordAnswerString != expected { - return fmt.Errorf("Answers[0].Rdata[0]: got: %#v want: %#v", recordAnswerString, expected) + return fmt.Errorf("Answers[0].Rdata[%d]: got: %#v want: %#v", idx, recordAnswerString, expected) } return nil } @@ -335,3 +359,20 @@ resource "ns1_zone" "test" { zone = "terraform-record-test.io" } ` + +const testAccRecordSRV = ` +resource "ns1_record" "srv" { + zone = "${ns1_zone.test.zone}" + domain = "_some-server._tcp.${ns1_zone.test.zone}" + type = "SRV" + ttl = 86400 + use_client_subnet = "true" + answers { + answer = "10 0 2380 node-1.${ns1_zone.test.zone}" + } +} + +resource "ns1_zone" "test" { + zone = "terraform-record-test.io" +} +` diff --git a/builtin/providers/oneandone/config.go b/builtin/providers/oneandone/config.go new file mode 100644 index 000000000..1192c84e7 --- /dev/null +++ b/builtin/providers/oneandone/config.go @@ 
-0,0 +1,24 @@
+package oneandone
+
+import (
+	"github.com/1and1/oneandone-cloudserver-sdk-go"
+)
+
+type Config struct {
+	Token    string
+	Retries  int
+	Endpoint string
+	API      *oneandone.API
+}
+
+func (c *Config) Client() (*Config, error) {
+	token := oneandone.SetToken(c.Token)
+
+	if len(c.Endpoint) > 0 {
+		c.API = oneandone.New(token, c.Endpoint)
+	} else {
+		c.API = oneandone.New(token, oneandone.BaseUrl)
+	}
+
+	return c, nil
+}
diff --git a/builtin/providers/oneandone/provider.go b/builtin/providers/oneandone/provider.go
new file mode 100644
index 000000000..8cc65f19b
--- /dev/null
+++ b/builtin/providers/oneandone/provider.go
@@ -0,0 +1,56 @@
+package oneandone
+
+import (
+	"github.com/1and1/oneandone-cloudserver-sdk-go"
+	"github.com/hashicorp/terraform/helper/schema"
+	"github.com/hashicorp/terraform/terraform"
+)
+
+func Provider() terraform.ResourceProvider {
+	return &schema.Provider{
+		Schema: map[string]*schema.Schema{
+			"token": {
+				Type:        schema.TypeString,
+				Required:    true,
+				DefaultFunc: schema.EnvDefaultFunc("ONEANDONE_TOKEN", nil),
+				Description: "1&1 token for API operations.",
+			},
+			"retries": {
+				Type: schema.TypeInt,
+				Optional: true,
+				// Default and DefaultFunc are mutually exclusive in helper/schema;
+				// fold the static default into the env fallback instead.
+				DefaultFunc: schema.EnvDefaultFunc("ONEANDONE_RETRIES", 50),
+			},
+			"endpoint": {
+				Type:        schema.TypeString,
+				Optional:    true,
+				DefaultFunc: schema.EnvDefaultFunc("ONEANDONE_ENDPOINT", oneandone.BaseUrl),
+			},
+		},
+		ResourcesMap: map[string]*schema.Resource{
+			"oneandone_server":            resourceOneandOneServer(),
+			"oneandone_firewall_policy":   resourceOneandOneFirewallPolicy(),
+			"oneandone_private_network":   resourceOneandOnePrivateNetwork(),
+			"oneandone_public_ip":         resourceOneandOnePublicIp(),
+			"oneandone_shared_storage":    resourceOneandOneSharedStorage(),
+			"oneandone_monitoring_policy": resourceOneandOneMonitoringPolicy(),
+			"oneandone_loadbalancer":      resourceOneandOneLoadbalancer(),
+			"oneandone_vpn":               resourceOneandOneVPN(),
+		},
+		ConfigureFunc: providerConfigure,
+	}
+}
+
+func providerConfigure(d *schema.ResourceData) (interface{}, error) { + var endpoint string + if d.Get("endpoint").(string) != oneandone.BaseUrl { + endpoint = d.Get("endpoint").(string) + } + config := Config{ + Token: d.Get("token").(string), + Retries: d.Get("retries").(int), + Endpoint: endpoint, + } + return config.Client() +} diff --git a/builtin/providers/oneandone/provider_test.go b/builtin/providers/oneandone/provider_test.go new file mode 100644 index 000000000..2057aac6d --- /dev/null +++ b/builtin/providers/oneandone/provider_test.go @@ -0,0 +1,36 @@ +package oneandone + +import ( + "os" + "testing" + + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/terraform" +) + +// +var testAccProviders map[string]terraform.ResourceProvider +var testAccProvider *schema.Provider + +func init() { + testAccProvider = Provider().(*schema.Provider) + testAccProviders = map[string]terraform.ResourceProvider{ + "oneandone": testAccProvider, + } +} + +func TestProvider(t *testing.T) { + if err := Provider().(*schema.Provider).InternalValidate(); err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvider_impl(t *testing.T) { + var _ terraform.ResourceProvider = Provider() +} + +func testAccPreCheck(t *testing.T) { + if v := os.Getenv("ONEANDONE_TOKEN"); v == "" { + t.Fatal("ONEANDONE_TOKEN must be set for acceptance tests") + } +} diff --git a/builtin/providers/oneandone/resource_oneandone_firewall_policy.go b/builtin/providers/oneandone/resource_oneandone_firewall_policy.go new file mode 100644 index 000000000..c62b63b5c --- /dev/null +++ b/builtin/providers/oneandone/resource_oneandone_firewall_policy.go @@ -0,0 +1,274 @@ +package oneandone + +import ( + "github.com/1and1/oneandone-cloudserver-sdk-go" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" + "strings" +) + +func resourceOneandOneFirewallPolicy() *schema.Resource { + return &schema.Resource{ + + Create: 
resourceOneandOneFirewallCreate, + Read: resourceOneandOneFirewallRead, + Update: resourceOneandOneFirewallUpdate, + Delete: resourceOneandOneFirewallDelete, + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "rules": { + Type: schema.TypeList, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "protocol": { + Type: schema.TypeString, + Required: true, + }, + "port_from": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(1, 65535), + }, + "port_to": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(1, 65535), + }, + "source_ip": { + Type: schema.TypeString, + Optional: true, + }, + "id": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + Required: true, + }, + }, + } +} + +func resourceOneandOneFirewallCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + req := oneandone.FirewallPolicyRequest{ + Name: d.Get("name").(string), + } + + if desc, ok := d.GetOk("description"); ok { + req.Description = desc.(string) + } + + req.Rules = getRules(d) + + fw_id, fw, err := config.API.CreateFirewallPolicy(&req) + if err != nil { + return err + } + + err = config.API.WaitForState(fw, "ACTIVE", 10, config.Retries) + if err != nil { + return err + } + + d.SetId(fw_id) + + if err != nil { + return err + } + + return resourceOneandOneFirewallRead(d, meta) +} + +func resourceOneandOneFirewallUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + if d.HasChange("name") || d.HasChange("description") { + fw, err := config.API.UpdateFirewallPolicy(d.Id(), d.Get("name").(string), d.Get("description").(string)) + if err != nil { + return err + } + err = config.API.WaitForState(fw, "ACTIVE", 10, config.Retries) + if err != nil { + return err + } + } + + if d.HasChange("rules") { + oldR, newR := d.GetChange("rules") + 
oldValues := oldR.([]interface{}) + newValues := newR.([]interface{}) + if len(oldValues) > len(newValues) { + diff := difference(oldValues, newValues) + for _, old := range diff { + o := old.(map[string]interface{}) + if o["id"] != nil { + old_id := o["id"].(string) + fw, err := config.API.DeleteFirewallPolicyRule(d.Id(), old_id) + if err != nil { + return err + } + + err = config.API.WaitForState(fw, "ACTIVE", 10, config.Retries) + if err != nil { + return err + } + } + } + } else { + var rules []oneandone.FirewallPolicyRule + + for _, raw := range newValues { + rl := raw.(map[string]interface{}) + + if rl["id"].(string) == "" { + rule := oneandone.FirewallPolicyRule{ + Protocol: rl["protocol"].(string), + } + + if rl["port_from"] != nil { + rule.PortFrom = oneandone.Int2Pointer(rl["port_from"].(int)) + } + if rl["port_to"] != nil { + rule.PortTo = oneandone.Int2Pointer(rl["port_to"].(int)) + } + + if rl["source_ip"] != nil { + rule.SourceIp = rl["source_ip"].(string) + } + + rules = append(rules, rule) + } + } + + if len(rules) > 0 { + fw, err := config.API.AddFirewallPolicyRules(d.Id(), rules) + if err != nil { + return err + } + + err = config.API.WaitForState(fw, "ACTIVE", 10, config.Retries) + } + } + } + + return resourceOneandOneFirewallRead(d, meta) +} + +func resourceOneandOneFirewallRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + fw, err := config.API.GetFirewallPolicy(d.Id()) + if err != nil { + if strings.Contains(err.Error(), "404") { + d.SetId("") + return nil + } + return err + } + + d.Set("rules", readRules(d, fw.Rules)) + d.Set("description", fw.Description) + + return nil +} + +func resourceOneandOneFirewallDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + fp, err := config.API.DeleteFirewallPolicy(d.Id()) + if err != nil { + return err + } + + err = config.API.WaitUntilDeleted(fp) + if err != nil { + return err + } + + return nil +} + +func readRules(d 
*schema.ResourceData, rules []oneandone.FirewallPolicyRule) interface{} {
+	rawRules := d.Get("rules").([]interface{})
+	counter := 0
+	for _, rR := range rawRules {
+		if len(rules) > counter {
+			rawMap := rR.(map[string]interface{})
+			rawMap["id"] = rules[counter].Id
+			if rules[counter].SourceIp != "0.0.0.0" {
+				rawMap["source_ip"] = rules[counter].SourceIp
+			}
+		}
+		counter++
+	}
+
+	return rawRules
+}
+
+func getRules(d *schema.ResourceData) []oneandone.FirewallPolicyRule {
+	var rules []oneandone.FirewallPolicyRule
+
+	if raw, ok := d.GetOk("rules"); ok {
+		rawRules := raw.([]interface{})
+
+		for _, raw := range rawRules {
+			rl := raw.(map[string]interface{})
+			rule := oneandone.FirewallPolicyRule{
+				Protocol: rl["protocol"].(string),
+			}
+
+			if rl["port_from"] != nil {
+				rule.PortFrom = oneandone.Int2Pointer(rl["port_from"].(int))
+			}
+			if rl["port_to"] != nil {
+				rule.PortTo = oneandone.Int2Pointer(rl["port_to"].(int))
+			}
+
+			if rl["source_ip"] != nil {
+				rule.SourceIp = rl["source_ip"].(string)
+			}
+
+			rules = append(rules, rule)
+		}
+	}
+	return rules
+}
+
+func difference(oldV, newV []interface{}) (toreturn []interface{}) {
+	var (
+		lenMin  int
+		longest []interface{}
+	)
+	// Determine the shortest length and the longest slice
+	if len(oldV) < len(newV) {
+		lenMin = len(oldV)
+		longest = newV
+	} else {
+		lenMin = len(newV)
+		longest = oldV
+	}
+	// Compare common indices; collect entries whose "id" changed.
+	for i := 0; i < lenMin; i++ {
+		if oldV[i] == nil || newV[i] == nil {
+			continue
+		}
+		if oldV[i].(map[string]interface{})["id"] != newV[i].(map[string]interface{})["id"] {
+			toreturn = append(toreturn, newV[i])
+		}
+	}
+	// Add indices not in common.
+	for _, v := range longest[lenMin:] {
+		toreturn = append(toreturn, v)
+	}
+	return toreturn
+}
diff --git a/builtin/providers/oneandone/resource_oneandone_firewall_policy_test.go
b/builtin/providers/oneandone/resource_oneandone_firewall_policy_test.go
new file mode 100644
index 000000000..146d63c3c
--- /dev/null
+++ b/builtin/providers/oneandone/resource_oneandone_firewall_policy_test.go
@@ -0,0 +1,178 @@
+package oneandone
+
+import (
+	"fmt"
+	"testing"
+
+	"github.com/1and1/oneandone-cloudserver-sdk-go"
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/terraform"
+	"os"
+	"time"
+)
+
+func TestAccOneandoneFirewall_Basic(t *testing.T) {
+	var firewall oneandone.FirewallPolicy
+
+	name := "test"
+	name_updated := "test1"
+
+	resource.Test(t, resource.TestCase{
+		PreCheck: func() {
+			testAccPreCheck(t)
+		},
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckOneandoneFirewallDestroyCheck,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: fmt.Sprintf(testAccCheckOneandoneFirewall_basic, name),
+
+				Check: resource.ComposeTestCheckFunc(
+					func(*terraform.State) error {
+						time.Sleep(10 * time.Second)
+						return nil
+					},
+					testAccCheckOneandoneFirewallExists("oneandone_firewall_policy.fw", &firewall),
+					testAccCheckOneandoneFirewallAttributes("oneandone_firewall_policy.fw", name),
+					resource.TestCheckResourceAttr("oneandone_firewall_policy.fw", "name", name),
+				),
+			},
+			resource.TestStep{
+				Config: fmt.Sprintf(testAccCheckOneandoneFirewall_update, name_updated),
+
+				Check: resource.ComposeTestCheckFunc(
+					func(*terraform.State) error {
+						time.Sleep(10 * time.Second)
+						return nil
+					},
+					testAccCheckOneandoneFirewallExists("oneandone_firewall_policy.fw", &firewall),
+					testAccCheckOneandoneFirewallAttributes("oneandone_firewall_policy.fw", name_updated),
+					resource.TestCheckResourceAttr("oneandone_firewall_policy.fw", "name", name_updated),
+				),
+			},
+		},
+	})
+}
+
+func testAccCheckOneandoneFirewallDestroyCheck(s *terraform.State) error {
+	for _, rs := range s.RootModule().Resources {
+		// Match on the resource type, not the full resource address.
+		if rs.Type != "oneandone_firewall_policy" {
+			continue
+		}
+
+		api :=
oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
+
+		_, err := api.GetFirewallPolicy(rs.Primary.ID)
+
+		if err == nil {
+			// err is nil here, so it must not be dereferenced in the message.
+			return fmt.Errorf("Firewall Policy still exists: %s", rs.Primary.ID)
+		}
+	}
+
+	return nil
+}
+func testAccCheckOneandoneFirewallAttributes(n string, reverse_dns string) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+		if rs.Primary.Attributes["name"] != reverse_dns {
+			return fmt.Errorf("Bad name: expected %s, found %s", reverse_dns, rs.Primary.Attributes["name"])
+		}
+
+		return nil
+	}
+}
+
+func testAccCheckOneandoneFirewallExists(n string, fw_p *oneandone.FirewallPolicy) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+
+		if rs.Primary.ID == "" {
+			return fmt.Errorf("No Record ID is set")
+		}
+
+		api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
+
+		found_fw, err := api.GetFirewallPolicy(rs.Primary.ID)
+
+		if err != nil {
+			return fmt.Errorf("Error occurred while fetching Firewall Policy: %s", rs.Primary.ID)
+		}
+		if found_fw.Id != rs.Primary.ID {
+			return fmt.Errorf("Record not found")
+		}
+		// Populate the caller's struct, not the local pointer copy.
+		*fw_p = *found_fw
+
+		return nil
+	}
+}
+
+const testAccCheckOneandoneFirewall_basic = `
+resource "oneandone_firewall_policy" "fw" {
+  name = "%s"
+  rules = [
+    {
+      "protocol" = "TCP"
+      "port_from" = 80
+      "port_to" = 80
+      "source_ip" = "0.0.0.0"
+    },
+    {
+      "protocol" = "ICMP"
+      "source_ip" = "0.0.0.0"
+    },
+    {
+      "protocol" = "TCP"
+      "port_from" = 43
+      "port_to" = 43
+      "source_ip" = "0.0.0.0"
+    },
+    {
+      "protocol" = "TCP"
+      "port_from" = 22
+      "port_to" = 22
+      "source_ip" = "0.0.0.0"
+    }
+  ]
+}`
+
+const testAccCheckOneandoneFirewall_update = `
+resource "oneandone_firewall_policy" "fw" {
+  name = "%s"
+  rules = [
+    {
+      "protocol" = "TCP"
+      "port_from" = 80
+      "port_to" = 80
+
"source_ip" = "0.0.0.0" + }, + { + "protocol" = "ICMP" + "source_ip" = "0.0.0.0" + }, + { + "protocol" = "TCP" + "port_from" = 43 + "port_to" = 43 + "source_ip" = "0.0.0.0" + }, + { + "protocol" = "TCP" + "port_from" = 22 + "port_to" = 22 + "source_ip" = "0.0.0.0" + }, + { + "protocol" = "TCP" + "port_from" = 88 + "port_to" = 88 + "source_ip" = "0.0.0.0" + }, + ] +}` diff --git a/builtin/providers/oneandone/resource_oneandone_loadbalancer.go b/builtin/providers/oneandone/resource_oneandone_loadbalancer.go new file mode 100644 index 000000000..627ec51df --- /dev/null +++ b/builtin/providers/oneandone/resource_oneandone_loadbalancer.go @@ -0,0 +1,370 @@ +package oneandone + +import ( + "fmt" + "github.com/1and1/oneandone-cloudserver-sdk-go" + "github.com/hashicorp/terraform/helper/schema" + "github.com/hashicorp/terraform/helper/validation" + "log" + "strings" +) + +func resourceOneandOneLoadbalancer() *schema.Resource { + return &schema.Resource{ + Create: resourceOneandOneLoadbalancerCreate, + Read: resourceOneandOneLoadbalancerRead, + Update: resourceOneandOneLoadbalancerUpdate, + Delete: resourceOneandOneLoadbalancerDelete, + Schema: map[string]*schema.Schema{ + + "name": { + Type: schema.TypeString, + Required: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "method": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validateMethod, + }, + "datacenter": { + Type: schema.TypeString, + Optional: true, + }, + "persistence": { + Type: schema.TypeBool, + Optional: true, + }, + "persistence_time": { + Type: schema.TypeInt, + Optional: true, + }, + "health_check_test": { + Type: schema.TypeString, + Optional: true, + }, + "health_check_interval": { + Type: schema.TypeInt, + Optional: true, + }, + "health_check_path": { + Type: schema.TypeString, + Optional: true, + }, + "health_check_path_parser": { + Type: schema.TypeString, + Optional: true, + }, + "rules": { + Type: schema.TypeList, + Elem: &schema.Resource{ + Schema: 
map[string]*schema.Schema{
+						"protocol": {
+							Type:     schema.TypeString,
+							Required: true,
+						},
+						"port_balancer": {
+							Type:         schema.TypeInt,
+							Required:     true,
+							ValidateFunc: validation.IntBetween(1, 65535),
+						},
+						"port_server": {
+							Type:         schema.TypeInt,
+							Required:     true,
+							ValidateFunc: validation.IntBetween(1, 65535),
+						},
+						"source_ip": {
+							Type:     schema.TypeString,
+							Required: true,
+						},
+						"id": {
+							Type:     schema.TypeString,
+							Computed: true,
+						},
+					},
+				},
+				Required: true,
+			},
+		},
+	}
+}
+
+func resourceOneandOneLoadbalancerCreate(d *schema.ResourceData, meta interface{}) error {
+	config := meta.(*Config)
+
+	req := oneandone.LoadBalancerRequest{
+		Name:  d.Get("name").(string),
+		Rules: getLBRules(d),
+	}
+
+	if raw, ok := d.GetOk("description"); ok {
+		req.Description = raw.(string)
+	}
+
+	if raw, ok := d.GetOk("datacenter"); ok {
+		dcs, err := config.API.ListDatacenters()
+		if err != nil {
+			return fmt.Errorf("An error occurred while fetching the list of datacenters: %s", err)
+		}
+
+		datacenter := raw.(string)
+		for _, dc := range dcs {
+			if strings.EqualFold(dc.CountryCode, datacenter) {
+				req.DatacenterId = dc.Id
+				break
+			}
+		}
+	}
+
+	if raw, ok := d.GetOk("method"); ok {
+		req.Method = raw.(string)
+	}
+
+	if raw, ok := d.GetOk("persistence"); ok {
+		req.Persistence = oneandone.Bool2Pointer(raw.(bool))
+	}
+	if raw, ok := d.GetOk("persistence_time"); ok {
+		req.PersistenceTime = oneandone.Int2Pointer(raw.(int))
+	}
+
+	if raw, ok := d.GetOk("health_check_test"); ok {
+		req.HealthCheckTest = raw.(string)
+	}
+	if raw, ok := d.GetOk("health_check_interval"); ok {
+		req.HealthCheckInterval = oneandone.Int2Pointer(raw.(int))
+	}
+	if raw, ok := d.GetOk("health_check_path"); ok {
+		req.HealthCheckPath = raw.(string)
+	}
+	if raw, ok := d.GetOk("health_check_path_parser"); ok {
+		req.HealthCheckPathParser = raw.(string)
+	}
+
+	lb_id, lb, err := config.API.CreateLoadBalancer(&req)
+	if err != nil {
+		return err
+	}
+
+	err =
config.API.WaitForState(lb, "ACTIVE", 10, config.Retries) + if err != nil { + return err + } + + d.SetId(lb_id) + + return resourceOneandOneLoadbalancerRead(d, meta) +} + +func getLBRules(d *schema.ResourceData) []oneandone.LoadBalancerRule { + var rules []oneandone.LoadBalancerRule + + if raw, ok := d.GetOk("rules"); ok { + rawRules := raw.([]interface{}) + log.Println("[DEBUG] raw rules:", raw) + for _, raw := range rawRules { + rl := raw.(map[string]interface{}) + rule := oneandone.LoadBalancerRule{ + Protocol: rl["protocol"].(string), + } + + if rl["port_balancer"] != nil { + rule.PortBalancer = uint16(rl["port_balancer"].(int)) + } + if rl["port_server"] != nil { + rule.PortServer = uint16(rl["port_server"].(int)) + } + + if rl["source_ip"] != nil { + rule.Source = rl["source_ip"].(string) + } + + rules = append(rules, rule) + } + } + return rules +} + +func resourceOneandOneLoadbalancerUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + if d.HasChange("name") || d.HasChange("description") || d.HasChange("method") || d.HasChange("persistence") || d.HasChange("persistence_time") || d.HasChange("health_check_test") || d.HasChange("health_check_interval") { + lb := oneandone.LoadBalancerRequest{} + if d.HasChange("name") { + _, n := d.GetChange("name") + lb.Name = n.(string) + } + if d.HasChange("description") { + _, n := d.GetChange("description") + lb.Description = n.(string) + } + if d.HasChange("method") { + _, n := d.GetChange("method") + lb.Method = (n.(string)) + } + if d.HasChange("persistence") { + _, n := d.GetChange("persistence") + lb.Persistence = oneandone.Bool2Pointer(n.(bool)) + } + if d.HasChange("persistence_time") { + _, n := d.GetChange("persistence_time") + lb.PersistenceTime = oneandone.Int2Pointer(n.(int)) + } + if d.HasChange("health_check_test") { + _, n := d.GetChange("health_check_test") + lb.HealthCheckTest = n.(string) + } + if d.HasChange("health_check_path") { + _, n := 
d.GetChange("health_check_path") + lb.HealthCheckPath = n.(string) + } + if d.HasChange("health_check_path_parser") { + _, n := d.GetChange("health_check_path_parser") + lb.HealthCheckPathParser = n.(string) + } + + ss, err := config.API.UpdateLoadBalancer(d.Id(), &lb) + + if err != nil { + return err + } + err = config.API.WaitForState(ss, "ACTIVE", 10, 30) + if err != nil { + return err + } + + } + + if d.HasChange("rules") { + oldR, newR := d.GetChange("rules") + oldValues := oldR.([]interface{}) + newValues := newR.([]interface{}) + if len(oldValues) > len(newValues) { + diff := difference(oldValues, newValues) + for _, old := range diff { + o := old.(map[string]interface{}) + if o["id"] != nil { + old_id := o["id"].(string) + fw, err := config.API.DeleteLoadBalancerRule(d.Id(), old_id) + if err != nil { + return err + } + + err = config.API.WaitForState(fw, "ACTIVE", 10, config.Retries) + if err != nil { + return err + } + } + } + } else { + var rules []oneandone.LoadBalancerRule + log.Println("[DEBUG] new values:", newValues) + + for _, raw := range newValues { + rl := raw.(map[string]interface{}) + log.Println("[DEBUG] rl:", rl) + + if rl["id"].(string) == "" { + rule := oneandone.LoadBalancerRule{ + Protocol: rl["protocol"].(string), + } + + rule.PortServer = uint16(rl["port_server"].(int)) + rule.PortBalancer = uint16(rl["port_balancer"].(int)) + + rule.Source = rl["source_ip"].(string) + + log.Println("[DEBUG] adding to list", rl["protocol"], rl["source_ip"], rl["port_balancer"], rl["port_server"]) + log.Println("[DEBUG] adding to list", rule) + + rules = append(rules, rule) + } + } + + log.Println("[DEBUG] new rules:", rules) + + if len(rules) > 0 { + fw, err := config.API.AddLoadBalancerRules(d.Id(), rules) + if err != nil { + return err + } + + err = config.API.WaitForState(fw, "ACTIVE", 10, config.Retries) + } + } + } + + return resourceOneandOneLoadbalancerRead(d, meta) +} + +func resourceOneandOneLoadbalancerRead(d *schema.ResourceData, meta 
interface{}) error {
+	config := meta.(*Config)
+
+	ss, err := config.API.GetLoadBalancer(d.Id())
+	if err != nil {
+		if strings.Contains(err.Error(), "404") {
+			d.SetId("")
+			return nil
+		}
+		return err
+	}
+
+	d.Set("name", ss.Name)
+	d.Set("description", ss.Description)
+	d.Set("datacenter", ss.Datacenter.CountryCode)
+	d.Set("method", ss.Method)
+	d.Set("persistence", ss.Persistence)
+	d.Set("persistence_time", ss.PersistenceTime)
+	d.Set("health_check_test", ss.HealthCheckTest)
+	d.Set("health_check_interval", ss.HealthCheckInterval)
+	d.Set("rules", getLoadbalancerRules(ss.Rules))
+
+	return nil
+}
+
+func getLoadbalancerRules(rules []oneandone.LoadBalancerRule) []map[string]interface{} {
+	raw := make([]map[string]interface{}, 0, len(rules))
+
+	for _, rule := range rules {
+		toadd := map[string]interface{}{
+			"id":            rule.Id,
+			"port_balancer": rule.PortBalancer,
+			"port_server":   rule.PortServer,
+			"protocol":      rule.Protocol,
+			"source_ip":     rule.Source,
+		}
+
+		raw = append(raw, toadd)
+	}
+
+	return raw
+}
+
+func resourceOneandOneLoadbalancerDelete(d *schema.ResourceData, meta interface{}) error {
+	config := meta.(*Config)
+
+	lb, err := config.API.DeleteLoadBalancer(d.Id())
+	if err != nil {
+		return err
+	}
+	err = config.API.WaitUntilDeleted(lb)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func validateMethod(v interface{}, k string) (ws []string, errors []error) {
+	value := v.(string)
+
+	if value != "ROUND_ROBIN" && value != "LEAST_CONNECTIONS" {
+		errors = append(errors, fmt.Errorf("%q value should be either 'ROUND_ROBIN' or 'LEAST_CONNECTIONS', not %q", k, value))
+	}
+
+	return
+}
diff --git a/builtin/providers/oneandone/resource_oneandone_loadbalancer_test.go b/builtin/providers/oneandone/resource_oneandone_loadbalancer_test.go
new file mode 100644
index 000000000..ecd0f9443
--- /dev/null
+++ b/builtin/providers/oneandone/resource_oneandone_loadbalancer_test.go
@@ -0,0 +1,156 @@
+package oneandone
+
+import (
+	"fmt"
+	"testing"
+
+
"github.com/1and1/oneandone-cloudserver-sdk-go"
+	"github.com/hashicorp/terraform/helper/resource"
+	"github.com/hashicorp/terraform/terraform"
+	"os"
+	"time"
+)
+
+func TestAccOneandoneLoadbalancer_Basic(t *testing.T) {
+	var lb oneandone.LoadBalancer
+
+	name := "test_loadbalancer"
+	name_updated := "test_loadbalancer_renamed"
+
+	resource.Test(t, resource.TestCase{
+		PreCheck: func() {
+			testAccPreCheck(t)
+		},
+		Providers:    testAccProviders,
+		CheckDestroy: testAccCheckOneandoneLoadbalancerDestroyCheck,
+		Steps: []resource.TestStep{
+			resource.TestStep{
+				Config: fmt.Sprintf(testAccCheckOneandoneLoadbalancer_basic, name),
+				Check: resource.ComposeTestCheckFunc(
+					func(*terraform.State) error {
+						time.Sleep(10 * time.Second)
+						return nil
+					},
+					testAccCheckOneandoneLoadbalancerExists("oneandone_loadbalancer.lb", &lb),
+					testAccCheckOneandoneLoadbalancerAttributes("oneandone_loadbalancer.lb", name),
+					resource.TestCheckResourceAttr("oneandone_loadbalancer.lb", "name", name),
+				),
+			},
+			resource.TestStep{
+				Config: fmt.Sprintf(testAccCheckOneandoneLoadbalancer_update, name_updated),
+				Check: resource.ComposeTestCheckFunc(
+					func(*terraform.State) error {
+						time.Sleep(10 * time.Second)
+						return nil
+					},
+					testAccCheckOneandoneLoadbalancerExists("oneandone_loadbalancer.lb", &lb),
+					testAccCheckOneandoneLoadbalancerAttributes("oneandone_loadbalancer.lb", name_updated),
+					resource.TestCheckResourceAttr("oneandone_loadbalancer.lb", "name", name_updated),
+				),
+			},
+		},
+	})
+}
+
+func testAccCheckOneandoneLoadbalancerDestroyCheck(s *terraform.State) error {
+	for _, rs := range s.RootModule().Resources {
+		// Match on the resource type, not the full resource address.
+		if rs.Type != "oneandone_loadbalancer" {
+			continue
+		}
+
+		api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
+
+		_, err := api.GetLoadBalancer(rs.Primary.ID)
+
+		if err == nil {
+			// err is nil here, so it must not be dereferenced in the message.
+			return fmt.Errorf("Loadbalancer still exists: %s", rs.Primary.ID)
+		}
+	}
+
+	return nil
+}
+func testAccCheckOneandoneLoadbalancerAttributes(n
string, reverse_dns string) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+		if rs.Primary.Attributes["name"] != reverse_dns {
+			return fmt.Errorf("Bad name: expected %s, found %s", reverse_dns, rs.Primary.Attributes["name"])
+		}
+
+		return nil
+	}
+}
+
+func testAccCheckOneandoneLoadbalancerExists(n string, fw_p *oneandone.LoadBalancer) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+
+		if rs.Primary.ID == "" {
+			return fmt.Errorf("No Record ID is set")
+		}
+
+		api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl)
+
+		found_fw, err := api.GetLoadBalancer(rs.Primary.ID)
+
+		if err != nil {
+			return fmt.Errorf("Error occurred while fetching Loadbalancer: %s", rs.Primary.ID)
+		}
+		if found_fw.Id != rs.Primary.ID {
+			return fmt.Errorf("Record not found")
+		}
+		// Populate the caller's struct, not the local pointer copy.
+		*fw_p = *found_fw
+
+		return nil
+	}
+}
+
+const testAccCheckOneandoneLoadbalancer_basic = `
+resource "oneandone_loadbalancer" "lb" {
+  name = "%s"
+  method = "ROUND_ROBIN"
+  persistence = true
+  persistence_time = 60
+  health_check_test = "TCP"
+  health_check_interval = 300
+  datacenter = "US"
+  rules = [
+    {
+      protocol = "TCP"
+      port_balancer = 8080
+      port_server = 8089
+      source_ip = "0.0.0.0"
+    },
+    {
+      protocol = "TCP"
+      port_balancer = 9090
+      port_server = 9099
+      source_ip = "0.0.0.0"
+    }
+  ]
+}`
+
+const testAccCheckOneandoneLoadbalancer_update = `
+resource "oneandone_loadbalancer" "lb" {
+  name = "%s"
+  method = "ROUND_ROBIN"
+  persistence = true
+  persistence_time = 60
+  health_check_test = "TCP"
+  health_check_interval = 300
+  datacenter = "US"
+  rules = [
+    {
+      protocol = "TCP"
+      port_balancer = 8080
+      port_server = 8089
+      source_ip = "0.0.0.0"
+    }
+  ]
+}`
diff --git a/builtin/providers/oneandone/resource_oneandone_monitoring_policy.go
b/builtin/providers/oneandone/resource_oneandone_monitoring_policy.go new file mode 100644 index 000000000..a6af20dfc --- /dev/null +++ b/builtin/providers/oneandone/resource_oneandone_monitoring_policy.go @@ -0,0 +1,706 @@ +package oneandone + +import ( + "github.com/1and1/oneandone-cloudserver-sdk-go" + "github.com/hashicorp/terraform/helper/schema" + "strings" +) + +func resourceOneandOneMonitoringPolicy() *schema.Resource { + return &schema.Resource{ + Create: resourceOneandOneMonitoringPolicyCreate, + Read: resourceOneandOneMonitoringPolicyRead, + Update: resourceOneandOneMonitoringPolicyUpdate, + Delete: resourceOneandOneMonitoringPolicyDelete, + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "email": { + Type: schema.TypeString, + Optional: true, + }, + "agent": { + Type: schema.TypeBool, + Required: true, + }, + "thresholds": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cpu": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "warning": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeInt, + Required: true, + }, + "alert": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + Required: true, + }, + "critical": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeInt, + Required: true, + }, + "alert": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + Required: true, + }, + }, + }, + Required: true, + }, + "ram": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "warning": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeInt, + Required: true, + }, + "alert": { + Type: schema.TypeBool, + 
Required: true, + }, + }, + }, + Required: true, + }, + "critical": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeInt, + Required: true, + }, + "alert": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + Required: true, + }, + }, + }, + Required: true, + }, + "disk": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "warning": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeInt, + Required: true, + }, + "alert": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + Required: true, + }, + "critical": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeInt, + Required: true, + }, + "alert": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + Required: true, + }, + }, + }, + Required: true, + }, + "transfer": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "warning": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeInt, + Required: true, + }, + "alert": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + Required: true, + }, + "critical": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeInt, + Required: true, + }, + "alert": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + Required: true, + }, + }, + }, + Required: true, + }, + "internal_ping": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "warning": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeInt, + Required: true, + }, + "alert": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + Required: true, + 
}, + "critical": { + Type: schema.TypeSet, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeInt, + Required: true, + }, + "alert": { + Type: schema.TypeBool, + Required: true, + }, + }, + }, + Required: true, + }, + }, + }, + Required: true, + }, + }, + }, + Required: true, + }, + "ports": { + Type: schema.TypeList, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "email_notification": { + Type: schema.TypeBool, + Required: true, + }, + "port": { + Type: schema.TypeInt, + Required: true, + }, + "protocol": { + Type: schema.TypeString, + Optional: true, + }, + "alert_if": { + Type: schema.TypeString, + Optional: true, + }, + "id": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + Optional: true, + }, + "processes": { + Type: schema.TypeList, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + + "email_notification": { + Type: schema.TypeBool, + Required: true, + }, + "process": { + Type: schema.TypeString, + Required: true, + }, + "alert_if": { + Type: schema.TypeString, + Optional: true, + }, + "id": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + Optional: true, + }, + }, + } +} + +func resourceOneandOneMonitoringPolicyCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + mp_request := oneandone.MonitoringPolicy{ + Name: d.Get("name").(string), + Agent: d.Get("agent").(bool), + Thresholds: getThresholds(d.Get("thresholds")), + } + + if raw, ok := d.GetOk("ports"); ok { + mp_request.Ports = getPorts(raw) + } + + if raw, ok := d.GetOk("processes"); ok { + mp_request.Processes = getProcesses(raw) + } + + mp_id, mp, err := config.API.CreateMonitoringPolicy(&mp_request) + if err != nil { + return err + } + + err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + + d.SetId(mp_id) + + return resourceOneandOneMonitoringPolicyRead(d, meta) +} + +func resourceOneandOneMonitoringPolicyUpdate(d 
*schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + req := oneandone.MonitoringPolicy{} + if d.HasChange("name") { + _, n := d.GetChange("name") + req.Name = n.(string) + } + + if d.HasChange("description") { + _, n := d.GetChange("description") + req.Description = n.(string) + } + + if d.HasChange("email") { + _, n := d.GetChange("email") + req.Email = n.(string) + } + + if d.HasChange("agent") { + _, n := d.GetChange("agent") + req.Agent = n.(bool) + } + + if d.HasChange("thresholds") { + _, n := d.GetChange("thresholds") + req.Thresholds = getThresholds(n) + } + + mp, err := config.API.UpdateMonitoringPolicy(d.Id(), &req) + if err != nil { + return err + } + + err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + + if d.HasChange("ports") { + o, n := d.GetChange("ports") + oldValues := o.([]interface{}) + newValues := n.([]interface{}) + + if len(newValues) > len(oldValues) { + ports := getPorts(newValues) + + newports := []oneandone.MonitoringPort{} + + for _, p := range ports { + if p.Id == "" { + newports = append(newports, p) + } + } + + mp, err := config.API.AddMonitoringPolicyPorts(d.Id(), newports) + if err != nil { + return err + } + + err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + } else if len(oldValues) > len(newValues) { + diff := difference(oldValues, newValues) + ports := getPorts(diff) + + for _, port := range ports { + if port.Id == "" { + continue + } + + mp, err := config.API.DeleteMonitoringPolicyPort(d.Id(), port.Id) + if err != nil { + return err + } + + err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + } + } else if len(oldValues) == len(newValues) { + ports := getPorts(newValues) + + for _, port := range ports { + mp, err := config.API.ModifyMonitoringPolicyPort(d.Id(), port.Id, &port) + if err != nil { + return err + } + + err = config.API.WaitForState(mp, 
"ACTIVE", 30, config.Retries) + if err != nil { + return err + } + } + } + } + + if d.HasChange("processes") { + o, n := d.GetChange("processes") + oldValues := o.([]interface{}) + newValues := n.([]interface{}) + if len(newValues) > len(oldValues) { + processes := getProcesses(newValues) + + newprocesses := []oneandone.MonitoringProcess{} + + for _, p := range processes { + if p.Id == "" { + newprocesses = append(newprocesses, p) + } + } + + mp, err := config.API.AddMonitoringPolicyProcesses(d.Id(), newprocesses) + if err != nil { + return err + } + + err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + } else if len(oldValues) > len(newValues) { + diff := difference(oldValues, newValues) + processes := getProcesses(diff) + for _, process := range processes { + if process.Id == "" { + continue + } + + mp, err := config.API.DeleteMonitoringPolicyProcess(d.Id(), process.Id) + if err != nil { + return err + } + + err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + } + } else if len(oldValues) == len(newValues) { + processes := getProcesses(newValues) + + for _, process := range processes { + mp, err := config.API.ModifyMonitoringPolicyProcess(d.Id(), process.Id, &process) + if err != nil { + return err + } + + err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + } + } + } + + return resourceOneandOneMonitoringPolicyRead(d, meta) +} + +func resourceOneandOneMonitoringPolicyRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + mp, err := config.API.GetMonitoringPolicy(d.Id()) + if err != nil { + if strings.Contains(err.Error(), "404") { + d.SetId("") + return nil + } + return err + } + + if len(mp.Servers) > 0 { + } + + if len(mp.Ports) > 0 { + pports := d.Get("ports").([]interface{}) + for i, raw_ports := range pports { + port := raw_ports.(map[string]interface{}) + port["id"] = mp.Ports[i].Id 
+ } + d.Set("ports", pports) + } + + if len(mp.Processes) > 0 { + pprocesses := d.Get("processes").([]interface{}) + for i, raw_processes := range pprocesses { + process := raw_processes.(map[string]interface{}) + process["id"] = mp.Processes[i].Id + } + d.Set("processes", pprocesses) + } + + return nil +} + +func resourceOneandOneMonitoringPolicyDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + mp, err := config.API.DeleteMonitoringPolicy(d.Id()) + if err != nil { + return err + } + + err = config.API.WaitUntilDeleted(mp) + if err != nil { + return err + } + + return nil +} + +func getThresholds(d interface{}) *oneandone.MonitoringThreshold { + raw_thresholds := d.(*schema.Set).List() + + toReturn := &oneandone.MonitoringThreshold{} + + for _, thresholds := range raw_thresholds { + th_set := thresholds.(map[string]interface{}) + + //CPU + cpu_raw := th_set["cpu"].(*schema.Set) + toReturn.Cpu = &oneandone.MonitoringLevel{} + for _, c := range cpu_raw.List() { + int_k := c.(map[string]interface{}) + for _, w := range int_k["warning"].(*schema.Set).List() { + toReturn.Cpu.Warning = &oneandone.MonitoringValue{ + Value: w.(map[string]interface{})["value"].(int), + Alert: w.(map[string]interface{})["alert"].(bool), + } + } + + for _, c := range int_k["critical"].(*schema.Set).List() { + toReturn.Cpu.Critical = &oneandone.MonitoringValue{ + Value: c.(map[string]interface{})["value"].(int), + Alert: c.(map[string]interface{})["alert"].(bool), + } + } + } + //RAM + ram_raw := th_set["ram"].(*schema.Set) + toReturn.Ram = &oneandone.MonitoringLevel{} + for _, c := range ram_raw.List() { + int_k := c.(map[string]interface{}) + for _, w := range int_k["warning"].(*schema.Set).List() { + toReturn.Ram.Warning = &oneandone.MonitoringValue{ + Value: w.(map[string]interface{})["value"].(int), + Alert: w.(map[string]interface{})["alert"].(bool), + } + } + + for _, c := range int_k["critical"].(*schema.Set).List() { + toReturn.Ram.Critical = 
&oneandone.MonitoringValue{ + Value: c.(map[string]interface{})["value"].(int), + Alert: c.(map[string]interface{})["alert"].(bool), + } + } + } + + //DISK + disk_raw := th_set["disk"].(*schema.Set) + toReturn.Disk = &oneandone.MonitoringLevel{} + for _, c := range disk_raw.List() { + int_k := c.(map[string]interface{}) + for _, w := range int_k["warning"].(*schema.Set).List() { + toReturn.Disk.Warning = &oneandone.MonitoringValue{ + Value: w.(map[string]interface{})["value"].(int), + Alert: w.(map[string]interface{})["alert"].(bool), + } + } + + for _, c := range int_k["critical"].(*schema.Set).List() { + toReturn.Disk.Critical = &oneandone.MonitoringValue{ + Value: c.(map[string]interface{})["value"].(int), + Alert: c.(map[string]interface{})["alert"].(bool), + } + } + } + + //TRANSFER + transfer_raw := th_set["transfer"].(*schema.Set) + toReturn.Transfer = &oneandone.MonitoringLevel{} + for _, c := range transfer_raw.List() { + int_k := c.(map[string]interface{}) + for _, w := range int_k["warning"].(*schema.Set).List() { + toReturn.Transfer.Warning = &oneandone.MonitoringValue{ + Value: w.(map[string]interface{})["value"].(int), + Alert: w.(map[string]interface{})["alert"].(bool), + } + } + + for _, c := range int_k["critical"].(*schema.Set).List() { + toReturn.Transfer.Critical = &oneandone.MonitoringValue{ + Value: c.(map[string]interface{})["value"].(int), + Alert: c.(map[string]interface{})["alert"].(bool), + } + } + } + //internal ping + ping_raw := th_set["internal_ping"].(*schema.Set) + toReturn.InternalPing = &oneandone.MonitoringLevel{} + for _, c := range ping_raw.List() { + int_k := c.(map[string]interface{}) + for _, w := range int_k["warning"].(*schema.Set).List() { + toReturn.InternalPing.Warning = &oneandone.MonitoringValue{ + Value: w.(map[string]interface{})["value"].(int), + Alert: w.(map[string]interface{})["alert"].(bool), + } + } + + for _, c := range int_k["critical"].(*schema.Set).List() { + toReturn.InternalPing.Critical = 
&oneandone.MonitoringValue{ + Value: c.(map[string]interface{})["value"].(int), + Alert: c.(map[string]interface{})["alert"].(bool), + } + } + } + } + + return toReturn +} + +func getProcesses(d interface{}) []oneandone.MonitoringProcess { + toReturn := []oneandone.MonitoringProcess{} + + for _, raw := range d.([]interface{}) { + port := raw.(map[string]interface{}) + m_port := oneandone.MonitoringProcess{ + EmailNotification: port["email_notification"].(bool), + } + + if port["id"] != nil { + m_port.Id = port["id"].(string) + } + + if port["process"] != nil { + m_port.Process = port["process"].(string) + } + + if port["alert_if"] != nil { + m_port.AlertIf = port["alert_if"].(string) + } + + toReturn = append(toReturn, m_port) + } + + return toReturn +} + +func getPorts(d interface{}) []oneandone.MonitoringPort { + toReturn := []oneandone.MonitoringPort{} + + for _, raw := range d.([]interface{}) { + port := raw.(map[string]interface{}) + m_port := oneandone.MonitoringPort{ + EmailNotification: port["email_notification"].(bool), + Port: port["port"].(int), + } + + if port["id"] != nil { + m_port.Id = port["id"].(string) + } + + if port["protocol"] != nil { + m_port.Protocol = port["protocol"].(string) + } + + if port["alert_if"] != nil { + m_port.AlertIf = port["alert_if"].(string) + } + + toReturn = append(toReturn, m_port) + } + + return toReturn +} diff --git a/builtin/providers/oneandone/resource_oneandone_monitoring_policy_test.go b/builtin/providers/oneandone/resource_oneandone_monitoring_policy_test.go new file mode 100644 index 000000000..c6727ee21 --- /dev/null +++ b/builtin/providers/oneandone/resource_oneandone_monitoring_policy_test.go @@ -0,0 +1,212 @@ +package oneandone + +import ( + "fmt" + "testing" + + "github.com/1and1/oneandone-cloudserver-sdk-go" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "os" + "time" +) + +func TestAccOneandoneMonitoringPolicy_Basic(t *testing.T) { + var mp 
oneandone.MonitoringPolicy + + name := "test" + name_updated := "test1" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDOneandoneMonitoringPolicyDestroyCheck, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckOneandoneMonitoringPolicy_basic, name), + Check: resource.ComposeTestCheckFunc( + func(*terraform.State) error { + time.Sleep(10 * time.Second) + return nil + }, + testAccCheckOneandoneMonitoringPolicyExists("oneandone_monitoring_policy.mp", &mp), + testAccCheckOneandoneMonitoringPolicyAttributes("oneandone_monitoring_policy.mp", name), + resource.TestCheckResourceAttr("oneandone_monitoring_policy.mp", "name", name), + ), + }, + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckOneandoneMonitoringPolicy_basic, name_updated), + Check: resource.ComposeTestCheckFunc( + func(*terraform.State) error { + time.Sleep(10 * time.Second) + return nil + }, + testAccCheckOneandoneMonitoringPolicyExists("oneandone_monitoring_policy.mp", &mp), + testAccCheckOneandoneMonitoringPolicyAttributes("oneandone_monitoring_policy.mp", name_updated), + resource.TestCheckResourceAttr("oneandone_monitoring_policy.mp", "name", name_updated), + ), + }, + }, + }) +} + +func testAccCheckDOneandoneMonitoringPolicyDestroyCheck(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "oneandone_monitoring_policy" { + continue + } + + api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl) + + _, err := api.GetMonitoringPolicy(rs.Primary.ID) + + if err == nil { + return fmt.Errorf("MonitoringPolicy still exists %s", rs.Primary.ID) + } + } + + return nil +} +func testAccCheckOneandoneMonitoringPolicyAttributes(n string, reverse_dns string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n)
+ } + if rs.Primary.Attributes["name"] != reverse_dns { + return fmt.Errorf("Bad name: expected %s : found %s ", reverse_dns, rs.Primary.Attributes["name"]) + } + + return nil + } +} + +func testAccCheckOneandoneMonitoringPolicyExists(n string, fw_p *oneandone.MonitoringPolicy) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Record ID is set") + } + + api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl) + + found_fw, err := api.GetMonitoringPolicy(rs.Primary.ID) + + if err != nil { + return fmt.Errorf("Error occurred while fetching MonitoringPolicy: %s", rs.Primary.ID) + } + if found_fw.Id != rs.Primary.ID { + return fmt.Errorf("Record not found") + } + *fw_p = *found_fw + + return nil + } +} + +const testAccCheckOneandoneMonitoringPolicy_basic = ` +resource "oneandone_monitoring_policy" "mp" { + name = "%s" + agent = true + email = "email@address.com" + thresholds = { + cpu = { + warning = { + value = 50, + alert = false + } + critical = { + value = 66, + alert = false + } + } + ram = { + warning = { + value = 70, + alert = true + } + critical = { + value = 80, + alert = true + } + }, + disk = { + warning = { + value = 84, + alert = true + } + critical = { + value = 94, + alert = true + } + }, + transfer = { + warning = { + value = 1000, + alert = true + } + critical = { + value = 2000, + alert = true + } + }, + internal_ping = { + warning = { + value = 3000, + alert = true + } + critical = { + value = 4000, + alert = true + } + } + } + ports = [ + { + email_notification = true + port = 443 + protocol = "TCP" + alert_if = "NOT_RESPONDING" + }, + { + email_notification = false + port = 80 + protocol = "TCP" + alert_if = "NOT_RESPONDING" + }, + { + email_notification = true +
port = 21 + protocol = "TCP" + alert_if = "NOT_RESPONDING" + } + ] + processes = [ + { + email_notification = false + process = "httpdeamon" + alert_if = "RUNNING" + }, + { + process = "iexplorer", + alert_if = "NOT_RUNNING" + email_notification = true + }] +}` diff --git a/builtin/providers/oneandone/resource_oneandone_private_network.go b/builtin/providers/oneandone/resource_oneandone_private_network.go new file mode 100644 index 000000000..f9a4fc9e3 --- /dev/null +++ b/builtin/providers/oneandone/resource_oneandone_private_network.go @@ -0,0 +1,291 @@ +package oneandone + +import ( + "fmt" + "github.com/1and1/oneandone-cloudserver-sdk-go" + "github.com/hashicorp/terraform/helper/schema" + "strings" +) + +func resourceOneandOnePrivateNetwork() *schema.Resource { + return &schema.Resource{ + + Create: resourceOneandOnePrivateNetworkCreate, + Read: resourceOneandOnePrivateNetworkRead, + Update: resourceOneandOnePrivateNetworkUpdate, + Delete: resourceOneandOnePrivateNetworkDelete, + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "datacenter": { + Type: schema.TypeString, + Optional: true, + }, + "network_address": { + Type: schema.TypeString, + Optional: true, + }, + "subnet_mask": { + Type: schema.TypeString, + Optional: true, + }, + "server_ids": { + Type: schema.TypeSet, + Elem: &schema.Schema{Type: schema.TypeString}, + Optional: true, + }, + }, + } +} + +func resourceOneandOnePrivateNetworkCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + req := oneandone.PrivateNetworkRequest{ + Name: d.Get("name").(string), + } + + if raw, ok := d.GetOk("description"); ok { + req.Description = raw.(string) + } + + if raw, ok := d.GetOk("network_address"); ok { + req.NetworkAddress = raw.(string) + } + + if raw, ok := d.GetOk("subnet_mask"); ok { + req.SubnetMask = raw.(string) + } + + if raw, ok := 
d.GetOk("datacenter"); ok { + dcs, err := config.API.ListDatacenters() + + if err != nil { + return fmt.Errorf("An error occured while fetching list of datacenters %s", err) + + } + + decenter := raw.(string) + for _, dc := range dcs { + if strings.ToLower(dc.CountryCode) == strings.ToLower(decenter) { + req.DatacenterId = dc.Id + break + } + } + } + + prn_id, prn, err := config.API.CreatePrivateNetwork(&req) + if err != nil { + return err + } + err = config.API.WaitForState(prn, "ACTIVE", 30, config.Retries) + + if err != nil { + return err + } + + d.SetId(prn_id) + + var ids []string + if raw, ok := d.GetOk("server_ids"); ok { + + rawIps := raw.(*schema.Set).List() + + for _, raw := range rawIps { + ids = append(ids, raw.(string)) + server, err := config.API.ShutdownServer(raw.(string), false) + if err != nil { + return err + } + err = config.API.WaitForState(server, "POWERED_OFF", 10, config.Retries) + if err != nil { + return err + } + + } + } + + prn, err = config.API.AttachPrivateNetworkServers(d.Id(), ids) + if err != nil { + return err + } + + err = config.API.WaitForState(prn, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + + for _, id := range ids { + server, err := config.API.StartServer(id) + if err != nil { + return err + } + + err = config.API.WaitForState(server, "POWERED_ON", 10, config.Retries) + if err != nil { + return err + } + } + + return resourceOneandOnePrivateNetworkRead(d, meta) +} + +func resourceOneandOnePrivateNetworkUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + if d.HasChange("name") || d.HasChange("description") || d.HasChange("network_address") || d.HasChange("subnet_mask") { + pnset := oneandone.PrivateNetworkRequest{} + + pnset.Name = d.Get("name").(string) + + pnset.Description = d.Get("description").(string) + pnset.NetworkAddress = d.Get("network_address").(string) + pnset.SubnetMask = d.Get("subnet_mask").(string) + + prn, err := config.API.UpdatePrivateNetwork(d.Id(), 
&pnset) + + if err != nil { + return err + } + + err = config.API.WaitForState(prn, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + } + + if d.HasChange("server_ids") { + o, n := d.GetChange("server_ids") + + newValues := n.(*schema.Set).List() + oldValues := o.(*schema.Set).List() + + var ids []string + for _, newV := range oldValues { + ids = append(ids, newV.(string)) + } + for _, id := range ids { + server, err := config.API.ShutdownServer(id, false) + if err != nil { + return err + } + err = config.API.WaitForState(server, "POWERED_OFF", 10, config.Retries) + if err != nil { + return err + } + + _, err = config.API.RemoveServerPrivateNetwork(id, d.Id()) + if err != nil { + return err + } + + prn, _ := config.API.GetPrivateNetwork(d.Id()) + + err = config.API.WaitForState(prn, "ACTIVE", 10, config.Retries) + if err != nil { + return err + } + + } + + var newids []string + + for _, newV := range newValues { + newids = append(newids, newV.(string)) + } + pn, err := config.API.AttachPrivateNetworkServers(d.Id(), newids) + + if err != nil { + return err + } + err = config.API.WaitForState(pn, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + + for _, id := range newids { + server, err := config.API.StartServer(id) + if err != nil { + return err + } + + err = config.API.WaitForState(server, "POWERED_ON", 10, config.Retries) + if err != nil { + return err + } + } + } + + return resourceOneandOnePrivateNetworkRead(d, meta) +} + +func resourceOneandOnePrivateNetworkRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + pn, err := config.API.GetPrivateNetwork(d.Id()) + if err != nil { + if strings.Contains(err.Error(), "404") { + d.SetId("") + return nil + } + return err + } + + d.Set("name", pn.Name) + d.Set("description", pn.Description) + d.Set("network_address", pn.NetworkAddress) + d.Set("subnet_mask", pn.SubnetMask) + d.Set("datacenter", pn.Datacenter.CountryCode) + + var toAdd []string + for _, s 
:= range pn.Servers { + toAdd = append(toAdd, s.Id) + } + d.Set("server_ids", toAdd) + return nil +} + +func resourceOneandOnePrivateNetworkDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + pn, err := config.API.GetPrivateNetwork(d.Id()) + if err != nil { + return err + } + + for _, server := range pn.Servers { + srv, err := config.API.ShutdownServer(server.Id, false) + if err != nil { + return err + } + err = config.API.WaitForState(srv, "POWERED_OFF", 10, config.Retries) + if err != nil { + return err + } + } + + pn, err = config.API.DeletePrivateNetwork(d.Id()) + if err != nil { + return err + } + + err = config.API.WaitUntilDeleted(pn) + if err != nil { + return err + } + + for _, server := range pn.Servers { + srv, err := config.API.StartServer(server.Id) + if err != nil { + return err + } + err = config.API.WaitForState(srv, "POWERED_ON", 10, config.Retries) + if err != nil { + return err + } + } + + return nil +} diff --git a/builtin/providers/oneandone/resource_oneandone_private_network_test.go b/builtin/providers/oneandone/resource_oneandone_private_network_test.go new file mode 100644 index 000000000..e91da76f2 --- /dev/null +++ b/builtin/providers/oneandone/resource_oneandone_private_network_test.go @@ -0,0 +1,160 @@ +package oneandone + +import ( + "fmt" + "testing" + + "github.com/1and1/oneandone-cloudserver-sdk-go" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "os" + "time" +) + +func TestAccOneandonePrivateNetwork_Basic(t *testing.T) { + var net oneandone.PrivateNetwork + + name := "test" + name_updated := "test1" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckOneandonePrivateNetworkDestroyCheck, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckOneandonePrivateNetwork_basic, name), + Check: resource.ComposeTestCheckFunc( + func(*terraform.State) error {
time.Sleep(10 * time.Second) + return nil + }, + testAccCheckOneandonePrivateNetworkExists("oneandone_private_network.pn", &net), + testAccCheckOneandonePrivateNetworkAttributes("oneandone_private_network.pn", name), + resource.TestCheckResourceAttr("oneandone_private_network.pn", "name", name), + ), + }, + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckOneandonePrivateNetwork_basic, name_updated), + Check: resource.ComposeTestCheckFunc( + func(*terraform.State) error { + time.Sleep(10 * time.Second) + return nil + }, + testAccCheckOneandonePrivateNetworkExists("oneandone_private_network.pn", &net), + testAccCheckOneandonePrivateNetworkAttributes("oneandone_private_network.pn", name_updated), + resource.TestCheckResourceAttr("oneandone_private_network.pn", "name", name_updated), + ), + }, + }, + }) +} + +func testAccCheckOneandonePrivateNetworkDestroyCheck(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "oneandone_private_network" { + continue + } + + api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl) + + _, err := api.GetPrivateNetwork(rs.Primary.ID) + + if err == nil { + return fmt.Errorf("PrivateNetwork still exists %s", rs.Primary.ID) + } + } + + return nil +} +func testAccCheckOneandonePrivateNetworkAttributes(n string, reverse_dns string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + if rs.Primary.Attributes["name"] != reverse_dns { + return fmt.Errorf("Bad name: expected %s : found %s ", reverse_dns, rs.Primary.Attributes["name"]) + } + + return nil + } +} + +func testAccCheckOneandonePrivateNetworkExists(n string, server *oneandone.PrivateNetwork) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return
fmt.Errorf("No Record ID is set") + } + + api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl) + + found_server, err := api.GetPrivateNetwork(rs.Primary.ID) + + if err != nil { + return fmt.Errorf("Error occured while fetching PrivateNetwork: %s", rs.Primary.ID) + } + if found_server.Id != rs.Primary.ID { + return fmt.Errorf("Record not found") + } + server = found_server + + return nil + } +} + +const testAccCheckOneandonePrivateNetwork_basic = ` +resource "oneandone_server" "server1" { + name = "server_private_net_01" + description = "ttt" + image = "CoreOS_Stable_64std" + datacenter = "US" + vcores = 1 + cores_per_processor = 1 + ram = 2 + password = "Kv40kd8PQb" + hdds = [ + { + disk_size = 60 + is_main = true + } + ] +} + +resource "oneandone_server" "server2" { + name = "server_private_net_02" + description = "ttt" + image = "CoreOS_Stable_64std" + datacenter = "US" + vcores = 1 + cores_per_processor = 1 + ram = 2 + password = "${oneandone_server.server1.password}" + hdds = [ + { + disk_size = 60 + is_main = true + } + ] +} + +resource "oneandone_private_network" "pn" { + name = "%s", + description = "new private net" + datacenter = "US" + network_address = "192.168.7.0" + subnet_mask = "255.255.255.0" + server_ids = [ + "${oneandone_server.server1.id}", + "${oneandone_server.server2.id}" + ] +} +` diff --git a/builtin/providers/oneandone/resource_oneandone_public_ip.go b/builtin/providers/oneandone/resource_oneandone_public_ip.go new file mode 100644 index 000000000..2c1bec240 --- /dev/null +++ b/builtin/providers/oneandone/resource_oneandone_public_ip.go @@ -0,0 +1,133 @@ +package oneandone + +import ( + "fmt" + "github.com/hashicorp/terraform/helper/schema" + "strings" +) + +func resourceOneandOnePublicIp() *schema.Resource { + return &schema.Resource{ + + Create: resourceOneandOnePublicIpCreate, + Read: resourceOneandOnePublicIpRead, + Update: resourceOneandOnePublicIpUpdate, + Delete: resourceOneandOnePublicIpDelete, + Schema: 
map[string]*schema.Schema{ + "ip_type": { //IPV4 or IPV6 + Type: schema.TypeString, + Required: true, + }, + "reverse_dns": { + Type: schema.TypeString, + Optional: true, + }, + "datacenter": { + Type: schema.TypeString, + Optional: true, + }, + "ip_address": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceOneandOnePublicIpCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + var reverse_dns string + var datacenter_id string + + if raw, ok := d.GetOk("reverse_dns"); ok { + reverse_dns = raw.(string) + } + + if raw, ok := d.GetOk("datacenter"); ok { + dcs, err := config.API.ListDatacenters() + + if err != nil { + return fmt.Errorf("An error occurred while fetching list of datacenters %s", err) + + } + + datacenter := raw.(string) + for _, dc := range dcs { + if strings.ToLower(dc.CountryCode) == strings.ToLower(datacenter) { + datacenter_id = dc.Id + break + } + } + + } + + ip_id, ip, err := config.API.CreatePublicIp(d.Get("ip_type").(string), reverse_dns, datacenter_id) + if err != nil { + return err + } + + err = config.API.WaitForState(ip, "ACTIVE", 10, config.Retries) + if err != nil { + return err + } + d.SetId(ip_id) + + return resourceOneandOnePublicIpRead(d, meta) +} + +func resourceOneandOnePublicIpRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + ip, err := config.API.GetPublicIp(d.Id()) + if err != nil { + if strings.Contains(err.Error(), "404") { + d.SetId("") + return nil + } + return err + } + + d.Set("ip_address", ip.IpAddress) + d.Set("reverse_dns", ip.ReverseDns) + d.Set("datacenter", ip.Datacenter.CountryCode) + d.Set("ip_type", ip.Type) + + return nil +} + +func resourceOneandOnePublicIpUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + if d.HasChange("reverse_dns") { + _, n := d.GetChange("reverse_dns") + ip, err := config.API.UpdatePublicIp(d.Id(), n.(string)) + if err != nil { + return err + } + + err =
config.API.WaitForState(ip, "ACTIVE", 10, config.Retries) + if err != nil { + return err + } + } + + return resourceOneandOnePublicIpRead(d, meta) +} + +func resourceOneandOnePublicIpDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + ip, err := config.API.DeletePublicIp(d.Id()) + if err != nil { + return err + } + + err = config.API.WaitUntilDeleted(ip) + if err != nil { + + return err + } + + return nil +} diff --git a/builtin/providers/oneandone/resource_oneandone_public_ip_test.go b/builtin/providers/oneandone/resource_oneandone_public_ip_test.go new file mode 100644 index 000000000..c797dc666 --- /dev/null +++ b/builtin/providers/oneandone/resource_oneandone_public_ip_test.go @@ -0,0 +1,119 @@ +package oneandone + +import ( + "fmt" + "testing" + + "github.com/1and1/oneandone-cloudserver-sdk-go" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "os" + "time" +) + +func TestAccOneandonePublicIp_Basic(t *testing.T) { + var public_ip oneandone.PublicIp + + reverse_dns := "example.de" + reverse_dns_updated := "example.ba" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDOneandonePublicIpDestroyCheck, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckOneandonePublicIp_basic, reverse_dns), + Check: resource.ComposeTestCheckFunc( + func(*terraform.State) error { + time.Sleep(10 * time.Second) + return nil + }, + testAccCheckOneandonePublicIpExists("oneandone_public_ip.ip", &public_ip), + testAccCheckOneandonePublicIpAttributes("oneandone_public_ip.ip", reverse_dns), + resource.TestCheckResourceAttr("oneandone_public_ip.ip", "reverse_dns", reverse_dns), + ), + }, + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckOneandonePublicIp_basic, reverse_dns_updated), + Check: resource.ComposeTestCheckFunc( + func(*terraform.State) error { + time.Sleep(10 * 
time.Second) + return nil + }, + testAccCheckOneandonePublicIpExists("oneandone_public_ip.ip", &public_ip), + testAccCheckOneandonePublicIpAttributes("oneandone_public_ip.ip", reverse_dns_updated), + resource.TestCheckResourceAttr("oneandone_public_ip.ip", "reverse_dns", reverse_dns_updated), + ), + }, + }, + }) +} + +func testAccCheckDOneandonePublicIpDestroyCheck(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "oneandone_public_ip" { + continue + } + + api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl) + + _, err := api.GetPublicIp(rs.Primary.ID) + + if err == nil { + return fmt.Errorf("Public IP still exists %s", rs.Primary.ID) + } + } + + return nil +} +func testAccCheckOneandonePublicIpAttributes(n string, reverse_dns string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + if rs.Primary.Attributes["reverse_dns"] != reverse_dns { + return fmt.Errorf("Bad reverse_dns: expected %s : found %s ", reverse_dns, rs.Primary.Attributes["reverse_dns"]) + } + + return nil + } +} + +func testAccCheckOneandonePublicIpExists(n string, public_ip *oneandone.PublicIp) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Record ID is set") + } + + api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl) + + found_public_ip, err := api.GetPublicIp(rs.Primary.ID) + + if err != nil { + return fmt.Errorf("Error occurred while fetching public IP: %s", rs.Primary.ID) + } + if found_public_ip.Id != rs.Primary.ID { + return fmt.Errorf("Record not found") + } + *public_ip = *found_public_ip + + return nil + } +} + +const testAccCheckOneandonePublicIp_basic = ` +resource "oneandone_public_ip" "ip" { + "ip_type" = "IPV4" + "reverse_dns" =
"%s" + "datacenter" = "GB" +}` diff --git a/builtin/providers/oneandone/resource_oneandone_server.go b/builtin/providers/oneandone/resource_oneandone_server.go new file mode 100644 index 000000000..930aba41a --- /dev/null +++ b/builtin/providers/oneandone/resource_oneandone_server.go @@ -0,0 +1,562 @@ +package oneandone + +import ( + "crypto/x509" + "encoding/pem" + "fmt" + "github.com/1and1/oneandone-cloudserver-sdk-go" + "github.com/hashicorp/terraform/helper/schema" + "golang.org/x/crypto/ssh" + "io/ioutil" + "log" + "strings" + + "errors" +) + +func resourceOneandOneServer() *schema.Resource { + return &schema.Resource{ + Create: resourceOneandOneServerCreate, + Read: resourceOneandOneServerRead, + Update: resourceOneandOneServerUpdate, + Delete: resourceOneandOneServerDelete, + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "image": { + Type: schema.TypeString, + Required: true, + }, + "vcores": { + Type: schema.TypeInt, + Required: true, + }, + "cores_per_processor": { + Type: schema.TypeInt, + Required: true, + }, + "ram": { + Type: schema.TypeFloat, + Required: true, + }, + "ssh_key_path": { + Type: schema.TypeString, + Optional: true, + }, + "password": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + }, + "datacenter": { + Type: schema.TypeString, + Optional: true, + }, + "ip": { + Type: schema.TypeString, + Optional: true, + }, + "ips": { + Type: schema.TypeList, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "id": { + Type: schema.TypeString, + Computed: true, + }, + "ip": { + Type: schema.TypeString, + Computed: true, + }, + "firewall_policy_id": { + Type: schema.TypeString, + Optional: true, + }, + }, + }, + Computed: true, + }, + "hdds": { + Type: schema.TypeList, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "id": { + Type: schema.TypeString, + Computed: true, + }, + 
"disk_size": { + Type: schema.TypeInt, + Required: true, + }, + "is_main": { + Type: schema.TypeBool, + Optional: true, + }, + }, + }, + Required: true, + }, + "firewall_policy_id": { + Type: schema.TypeString, + Optional: true, + }, + "monitoring_policy_id": { + Type: schema.TypeString, + Optional: true, + }, + "loadbalancer_id": { + Type: schema.TypeString, + Optional: true, + }, + }, + } +} + +func resourceOneandOneServerCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + saps, err := config.API.ListServerAppliances() + if err != nil { + return err + } + + var sa oneandone.ServerAppliance + for _, a := range saps { + + if a.Type == "IMAGE" && strings.Contains(strings.ToLower(a.Name), strings.ToLower(d.Get("image").(string))) { + sa = a + break + } + } + + var hdds []oneandone.Hdd + if raw, ok := d.GetOk("hdds"); ok { + rawhdds := raw.([]interface{}) + + var hasMainHdd bool + for _, raw := range rawhdds { + hd := raw.(map[string]interface{}) + hdd := oneandone.Hdd{ + Size: hd["disk_size"].(int), + IsMain: hd["is_main"].(bool), + } + + if hdd.IsMain { + if hdd.Size < sa.MinHddSize { + return fmt.Errorf("Minimum required disk size %d", sa.MinHddSize) + } + hasMainHdd = true + } + + hdds = append(hdds, hdd) + } + + if !hasMainHdd { + return fmt.Errorf("At least one HDD must be set as `is_main`") + } + } + + req := oneandone.ServerRequest{ + Name: d.Get("name").(string), + Description: d.Get("description").(string), + ApplianceId: sa.Id, + PowerOn: true, + Hardware: oneandone.Hardware{ + Vcores: d.Get("vcores").(int), + CoresPerProcessor: d.Get("cores_per_processor").(int), + Ram: float32(d.Get("ram").(float64)), + Hdds: hdds, + }, + } + + if raw, ok := d.GetOk("ip"); ok { + + new_ip := raw.(string) + + ips, err := config.API.ListPublicIps() + if err != nil { + return err + } + + for _, ip := range ips { + if ip.IpAddress == new_ip { + req.IpId = ip.Id + break + } + } + + log.Println("[DEBUG] req.IP", req.IpId) + } + + if raw, ok :=
d.GetOk("datacenter"); ok { + + dcs, err := config.API.ListDatacenters() + + if err != nil { + return fmt.Errorf("An error occurred while fetching list of datacenters %s", err) + + } + + datacenter := raw.(string) + for _, dc := range dcs { + if strings.ToLower(dc.CountryCode) == strings.ToLower(datacenter) { + req.DatacenterId = dc.Id + break + } + } + } + + if fwp_id, ok := d.GetOk("firewall_policy_id"); ok { + req.FirewallPolicyId = fwp_id.(string) + } + + if mp_id, ok := d.GetOk("monitoring_policy_id"); ok { + req.MonitoringPolicyId = mp_id.(string) + } + + if mp_id, ok := d.GetOk("loadbalancer_id"); ok { + req.LoadBalancerId = mp_id.(string) + } + + var privateKey string + if raw, ok := d.GetOk("ssh_key_path"); ok { + rawpath := raw.(string) + + priv, publicKey, err := getSshKey(rawpath) + privateKey = priv + if err != nil { + return err + } + + req.SSHKey = publicKey + } + + var password string + if raw, ok := d.GetOk("password"); ok { + req.Password = raw.(string) + password = req.Password + } + + server_id, server, err := config.API.CreateServer(&req) + if err != nil { + return err + } + + err = config.API.WaitForState(server, "POWERED_ON", 10, config.Retries) + if err != nil { + return err + } + + d.SetId(server_id) + server, err = config.API.GetServer(d.Id()) + if err != nil { + return err + } + + if password == "" { + password = server.FirstPassword + } + d.SetConnInfo(map[string]string{ + "type": "ssh", + "host": server.Ips[0].Ip, + "password": password, + "private_key": privateKey, + }) + + return resourceOneandOneServerRead(d, meta) +} + +func resourceOneandOneServerRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + server, err := config.API.GetServer(d.Id()) + + if err != nil { + if strings.Contains(err.Error(), "404") { + d.SetId("") + return nil + } + return err + } + + d.Set("name", server.Name) + d.Set("datacenter", server.Datacenter.CountryCode) + + d.Set("hdds", readHdds(server.Hardware)) + + d.Set("ips", readIps(server.Ips)) + + if
len(server.FirstPassword) > 0 { + d.Set("password", server.FirstPassword) + } + + return nil +} + +func resourceOneandOneServerUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + if d.HasChange("name") || d.HasChange("description") { + _, name := d.GetChange("name") + _, description := d.GetChange("description") + server, err := config.API.RenameServer(d.Id(), name.(string), description.(string)) + if err != nil { + return err + } + + err = config.API.WaitForState(server, "POWERED_ON", 10, config.Retries) + if err != nil { + return err + } + } + + if d.HasChange("hdds") { + oldV, newV := d.GetChange("hdds") + newValues := newV.([]interface{}) + oldValues := oldV.([]interface{}) + + if len(oldValues) > len(newValues) { + diff := difference(oldValues, newValues) + for _, old := range diff { + o := old.(map[string]interface{}) + old_id := o["id"].(string) + server, err := config.API.DeleteServerHdd(d.Id(), old_id) + if err != nil { + return err + } + + err = config.API.WaitForState(server, "POWERED_ON", 10, config.Retries) + if err != nil { + return err + } + } + } else { + for _, newHdd := range newValues { + n := newHdd.(map[string]interface{}) + + if n["id"].(string) == "" { + hdds := oneandone.ServerHdds{ + Hdds: []oneandone.Hdd{ + { + Size: n["disk_size"].(int), + IsMain: n["is_main"].(bool), + }, + }, + } + + server, err := config.API.AddServerHdds(d.Id(), &hdds) + + if err != nil { + return err + } + err = config.API.WaitForState(server, "POWERED_ON", 10, config.Retries) + if err != nil { + return err + } + } else { + id := n["id"].(string) + isMain := n["is_main"].(bool) + + if id != "" && !isMain { + log.Println("[DEBUG] Resizing existing HDD") + server, err := config.API.ResizeServerHdd(d.Id(), id, n["disk_size"].(int)) + if err != nil { + return err + } + err = config.API.WaitForState(server, "POWERED_ON", 10, config.Retries) + if err != nil { + return err + } + } + } + + } + } + } + + if d.HasChange("monitoring_policy_id") { + o, n := d.GetChange("monitoring_policy_id") + + if n == nil { + mp, err := config.API.RemoveMonitoringPolicyServer(o.(string), d.Id()) + + if
err != nil { + return err + } + + err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + } else { + mp, err := config.API.AttachMonitoringPolicyServers(n.(string), []string{d.Id()}) + if err != nil { + return err + } + + err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + } + } + + if d.HasChange("loadbalancer_id") { + o, n := d.GetChange("loadbalancer_id") + server, err := config.API.GetServer(d.Id()) + if err != nil { + return err + } + + if n == nil || n.(string) == "" { + log.Println("[DEBUG] Removing") + log.Println("[DEBUG] IPS:", server.Ips) + + for _, ip := range server.Ips { + mp, err := config.API.DeleteLoadBalancerServerIp(o.(string), ip.Id) + + if err != nil { + return err + } + + err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + } + } else { + log.Println("[DEBUG] Adding") + ip_ids := []string{} + for _, ip := range server.Ips { + ip_ids = append(ip_ids, ip.Id) + } + mp, err := config.API.AddLoadBalancerServerIps(n.(string), ip_ids) + if err != nil { + return err + } + + err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + + } + } + + if d.HasChange("firewall_policy_id") { + server, err := config.API.GetServer(d.Id()) + if err != nil { + return err + } + + o, n := d.GetChange("firewall_policy_id") + if n == nil { + for _, ip := range server.Ips { + mp, err := config.API.DeleteFirewallPolicyServerIp(o.(string), ip.Id) + if err != nil { + return err + } + + err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries) + if err != nil { + return err + } + } + } else { + ip_ids := []string{} + for _, ip := range server.Ips { + ip_ids = append(ip_ids, ip.Id) + } + + mp, err := config.API.AddFirewallPolicyServerIps(n.(string), ip_ids) + if err != nil { + return err + } + + err = config.API.WaitForState(mp, "ACTIVE", 30, config.Retries) + if err != nil { + return err 
+ } + } + } + + return resourceOneandOneServerRead(d, meta) +} + +func resourceOneandOneServerDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + _, ok := d.GetOk("ip") + + server, err := config.API.DeleteServer(d.Id(), ok) + if err != nil { + return err + } + + err = config.API.WaitUntilDeleted(server) + + if err != nil { + log.Println("[DEBUG] ************ ERROR While waiting ************") + return err + } + return nil +} + +func readHdds(hardware *oneandone.Hardware) []map[string]interface{} { + hdds := make([]map[string]interface{}, 0, len(hardware.Hdds)) + + for _, hd := range hardware.Hdds { + hdds = append(hdds, map[string]interface{}{ + "id": hd.Id, + "disk_size": hd.Size, + "is_main": hd.IsMain, + }) + } + + return hdds +} + +func readIps(ips []oneandone.ServerIp) []map[string]interface{} { + raw := make([]map[string]interface{}, 0, len(ips)) + for _, ip := range ips { + + toadd := map[string]interface{}{ + "ip": ip.Ip, + "id": ip.Id, + } + + if ip.Firewall != nil { + toadd["firewall_policy_id"] = ip.Firewall.Id + } + raw = append(raw, toadd) + } + + return raw +} + +func getSshKey(path string) (privatekey string, publickey string, err error) { + pemBytes, err := ioutil.ReadFile(path) + + if err != nil { + return "", "", err + } + + block, _ := pem.Decode(pemBytes) + + if block == nil { + return "", "", errors.New("File " + path + " contains nothing") + } + + priv, err := x509.ParsePKCS1PrivateKey(block.Bytes) + + if err != nil { + return "", "", err + } + + priv_blk := pem.Block{ + Type: "RSA PRIVATE KEY", + Headers: nil, + Bytes: x509.MarshalPKCS1PrivateKey(priv), + } + + pub, err := ssh.NewPublicKey(&priv.PublicKey) + if err != nil { + return "", "", err + } + publickey = string(ssh.MarshalAuthorizedKey(pub)) + privatekey = string(pem.EncodeToMemory(&priv_blk)) + + return privatekey, publickey, nil +} diff --git a/builtin/providers/oneandone/resource_oneandone_server_test.go 
b/builtin/providers/oneandone/resource_oneandone_server_test.go new file mode 100644 index 000000000..ed643abfa --- /dev/null +++ b/builtin/providers/oneandone/resource_oneandone_server_test.go @@ -0,0 +1,130 @@ +package oneandone + +import ( + "fmt" + "testing" + + "github.com/1and1/oneandone-cloudserver-sdk-go" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "os" + "time" +) + +func TestAccOneandoneServer_Basic(t *testing.T) { + var server oneandone.Server + + name := "test_server" + name_updated := "test_server_renamed" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDOneandoneServerDestroyCheck, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckOneandoneServer_basic, name, name), + Check: resource.ComposeTestCheckFunc( + func(*terraform.State) error { + time.Sleep(10 * time.Second) + return nil + }, + testAccCheckOneandoneServerExists("oneandone_server.server", &server), + testAccCheckOneandoneServerAttributes("oneandone_server.server", name), + resource.TestCheckResourceAttr("oneandone_server.server", "name", name), + ), + }, + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckOneandoneServer_basic, name_updated, name_updated), + Check: resource.ComposeTestCheckFunc( + func(*terraform.State) error { + time.Sleep(10 * time.Second) + return nil + }, + testAccCheckOneandoneServerExists("oneandone_server.server", &server), + testAccCheckOneandoneServerAttributes("oneandone_server.server", name_updated), + resource.TestCheckResourceAttr("oneandone_server.server", "name", name_updated), + ), + }, + }, + }) +} + +func testAccCheckDOneandoneServerDestroyCheck(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "oneandone_server" { + continue + } + + api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl) + + _, err := 
api.GetServer(rs.Primary.ID) + + if err == nil { + return fmt.Errorf("Server still exists %s", rs.Primary.ID) + } + } + + return nil +} +func testAccCheckOneandoneServerAttributes(n string, expected_name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + if rs.Primary.Attributes["name"] != expected_name { + return fmt.Errorf("Bad name: expected %s : found %s ", expected_name, rs.Primary.Attributes["name"]) + } + + return nil + } +} + +func testAccCheckOneandoneServerExists(n string, server *oneandone.Server) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Record ID is set") + } + + api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl) + + found_server, err := api.GetServer(rs.Primary.ID) + + if err != nil { + return fmt.Errorf("Error occurred while fetching Server: %s", rs.Primary.ID) + } + if found_server.Id != rs.Primary.ID { + return fmt.Errorf("Record not found") + } + *server = *found_server + + return nil + } +} + +const testAccCheckOneandoneServer_basic = ` +resource "oneandone_server" "server" { + name = "%s" + description = "%s" + image = "ubuntu" + datacenter = "GB" + vcores = 1 + cores_per_processor = 1 + ram = 2 + password = "Kv40kd8PQb" + hdds = [ + { + disk_size = 20 + is_main = true + } + ] +}` diff --git a/builtin/providers/oneandone/resource_oneandone_vpn.go b/builtin/providers/oneandone/resource_oneandone_vpn.go new file mode 100644 index 000000000..865c3361a --- /dev/null +++ b/builtin/providers/oneandone/resource_oneandone_vpn.go @@ -0,0 +1,217 @@ +package oneandone + +import ( + "crypto/md5" + "encoding/base64" + "fmt" + "github.com/1and1/oneandone-cloudserver-sdk-go" + "github.com/hashicorp/terraform/helper/schema" + "io" + "os" + fp
"path/filepath" + "strings" +) + +func resourceOneandOneVPN() *schema.Resource { + return &schema.Resource{ + Create: resourceOneandOneVPNCreate, + Read: resourceOneandOneVPNRead, + Update: resourceOneandOneVPNUpdate, + Delete: resourceOneandOneVPNDelete, + Schema: map[string]*schema.Schema{ + + "name": { + Type: schema.TypeString, + Required: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "download_path": { + Type: schema.TypeString, + Computed: true, + }, + "datacenter": { + Type: schema.TypeString, + Optional: true, + }, + "file_name": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceOneandOneVPNCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + var datacenter string + + if raw, ok := d.GetOk("datacenter"); ok { + dcs, err := config.API.ListDatacenters() + if err != nil { + return fmt.Errorf("An error occured while fetching list of datacenters %s", err) + } + + decenter := raw.(string) + for _, dc := range dcs { + if strings.ToLower(dc.CountryCode) == strings.ToLower(decenter) { + datacenter = dc.Id + break + } + } + } + + var description string + if raw, ok := d.GetOk("description"); ok { + description = raw.(string) + } + + vpn_id, vpn, err := config.API.CreateVPN(d.Get("name").(string), description, datacenter) + if err != nil { + return err + } + + err = config.API.WaitForState(vpn, "ACTIVE", 10, config.Retries) + if err != nil { + return err + } + + d.SetId(vpn_id) + + return resourceOneandOneVPNRead(d, meta) +} + +func resourceOneandOneVPNUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + if d.HasChange("name") || d.HasChange("description") { + + vpn, err := config.API.ModifyVPN(d.Id(), d.Get("name").(string), d.Get("description").(string)) + if err != nil { + return err + } + + err = config.API.WaitForState(vpn, "ACTIVE", 10, config.Retries) + if err != nil { + return err + } + } + + return resourceOneandOneVPNRead(d, 
meta) +} + +func resourceOneandOneVPNRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + vpn, err := config.API.GetVPN(d.Id()) + if err != nil { + if strings.Contains(err.Error(), "404") { + d.SetId("") + return nil + } + return err + } + + base64_str, err := config.API.GetVPNConfigFile(d.Id()) + if err != nil { + if strings.Contains(err.Error(), "404") { + d.SetId("") + return nil + } + return err + } + + var download_path string + if raw, ok := d.GetOk("download_path"); ok { + download_path = raw.(string) + } + + path, fileName, err := writeConfig(vpn, download_path, base64_str) + if err != nil { + return err + } + + d.Set("name", vpn.Name) + d.Set("description", vpn.Description) + d.Set("download_path", path) + d.Set("file_name", fileName) + d.Set("datacenter", vpn.Datacenter.CountryCode) + + return nil +} + +func writeConfig(vpn *oneandone.VPN, path, base64config string) (string, string, error) { + data, err := base64.StdEncoding.DecodeString(base64config) + if err != nil { + return "", "", err + } + + var fileName string + if vpn.CloudPanelId != "" { + fileName = vpn.CloudPanelId + ".zip" + } else { + fileName = "vpn_" + fmt.Sprintf("%x", md5.Sum(data)) + ".zip" + } + + if path == "" { + path, err = os.Getwd() + if err != nil { + return "", "", err + } + } + + if !fp.IsAbs(path) { + path, err = fp.Abs(path) + if err != nil { + return "", "", err + } + } + + _, err = os.Stat(path) + if err != nil { + if os.IsNotExist(err) { + // make all dirs + if err := os.MkdirAll(path, 0755); err != nil { + return "", "", err + } + } else { + return "", "", err + } + } + + fpath := fp.Join(path, fileName) + + f, err := os.OpenFile(fpath, os.O_CREATE|os.O_WRONLY, 0666) + if err != nil { + return "", "", err + } + defer f.Close() + + n, err := f.Write(data) + if err == nil && n < len(data) { + err = io.ErrShortWrite + } + + if err != nil { + return "", "", err + } + + return path, fileName, nil + +} + +func resourceOneandOneVPNDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + vpn, err := config.API.DeleteVPN(d.Id()) + if err != nil { + return err + } + + err =
config.API.WaitUntilDeleted(vpn) + if err != nil { + return err + } + + fullPath := fp.Join(d.Get("download_path").(string), d.Get("file_name").(string)) + if _, err := os.Stat(fullPath); !os.IsNotExist(err) { + os.Remove(fullPath) + } + + return nil +} diff --git a/builtin/providers/oneandone/resource_oneandone_vpn_test.go b/builtin/providers/oneandone/resource_oneandone_vpn_test.go new file mode 100644 index 000000000..94e84bb61 --- /dev/null +++ b/builtin/providers/oneandone/resource_oneandone_vpn_test.go @@ -0,0 +1,119 @@ +package oneandone + +import ( + "fmt" + "testing" + + "github.com/1and1/oneandone-cloudserver-sdk-go" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "os" + "time" +) + +func TestAccOneandoneVpn_Basic(t *testing.T) { + var server oneandone.VPN + + name := "test" + name_updated := "test1" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDOneandoneVPNDestroyCheck, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckOneandoneVPN_basic, name), + Check: resource.ComposeTestCheckFunc( + func(*terraform.State) error { + time.Sleep(10 * time.Second) + return nil + }, + testAccCheckOneandoneVPNExists("oneandone_vpn.vpn", &server), + testAccCheckOneandoneVPNAttributes("oneandone_vpn.vpn", name), + resource.TestCheckResourceAttr("oneandone_vpn.vpn", "name", name), + ), + }, + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckOneandoneVPN_basic, name_updated), + Check: resource.ComposeTestCheckFunc( + func(*terraform.State) error { + time.Sleep(10 * time.Second) + return nil + }, + testAccCheckOneandoneVPNExists("oneandone_vpn.vpn", &server), + testAccCheckOneandoneVPNAttributes("oneandone_vpn.vpn", name_updated), + resource.TestCheckResourceAttr("oneandone_vpn.vpn", "name", name_updated), + ), + }, + }, + }) +} + +func testAccCheckDOneandoneVPNDestroyCheck(s 
*terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "oneandone_vpn" { + continue + } + + api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl) + + _, err := api.GetVPN(rs.Primary.ID) + + if err == nil { + return fmt.Errorf("VPN still exists %s", rs.Primary.ID) + } + } + + return nil +} +func testAccCheckOneandoneVPNAttributes(n string, expected_name string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + if rs.Primary.Attributes["name"] != expected_name { + return fmt.Errorf("Bad name: expected %s : found %s ", expected_name, rs.Primary.Attributes["name"]) + } + + return nil + } +} + +func testAccCheckOneandoneVPNExists(n string, server *oneandone.VPN) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Record ID is set") + } + + api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl) + + found_server, err := api.GetVPN(rs.Primary.ID) + + if err != nil { + return fmt.Errorf("Error occurred while fetching VPN: %s", rs.Primary.ID) + } + if found_server.Id != rs.Primary.ID { + return fmt.Errorf("Record not found") + } + *server = *found_server + + return nil + } +} + +const testAccCheckOneandoneVPN_basic = ` +resource "oneandone_vpn" "vpn" { + datacenter = "GB" + name = "%s" + description = "test description" +}` diff --git a/builtin/providers/oneandone/resources_oneandone_shared_storage.go b/builtin/providers/oneandone/resources_oneandone_shared_storage.go new file mode 100644 index 000000000..f690e0cf6 --- /dev/null +++ b/builtin/providers/oneandone/resources_oneandone_shared_storage.go @@ -0,0 +1,256 @@ +package oneandone + +import ( + "fmt" + "github.com/1and1/oneandone-cloudserver-sdk-go" +
"github.com/hashicorp/terraform/helper/schema" + "strings" +) + +func resourceOneandOneSharedStorage() *schema.Resource { + return &schema.Resource{ + Create: resourceOneandOneSharedStorageCreate, + Read: resourceOneandOneSharedStorageRead, + Update: resourceOneandOneSharedStorageUpdate, + Delete: resourceOneandOneSharedStorageDelete, + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + }, + "size": { + Type: schema.TypeInt, + Required: true, + }, + "datacenter": { + Type: schema.TypeString, + Required: true, + }, + "storage_servers": { + Type: schema.TypeList, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "id": { + Type: schema.TypeString, + Required: true, + }, + "rights": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + Optional: true, + }, + }, + } +} + +func resourceOneandOneSharedStorageCreate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + req := oneandone.SharedStorageRequest{ + Name: d.Get("name").(string), + Size: oneandone.Int2Pointer(d.Get("size").(int)), + } + + if raw, ok := d.GetOk("description"); ok { + req.Description = raw.(string) + + } + + if raw, ok := d.GetOk("datacenter"); ok { + dcs, err := config.API.ListDatacenters() + + if err != nil { + return fmt.Errorf("An error occured while fetching list of datacenters %s", err) + + } + + decenter := raw.(string) + for _, dc := range dcs { + if strings.ToLower(dc.CountryCode) == strings.ToLower(decenter) { + req.DatacenterId = dc.Id + break + } + } + } + + ss_id, ss, err := config.API.CreateSharedStorage(&req) + if err != nil { + return err + } + + err = config.API.WaitForState(ss, "ACTIVE", 10, config.Retries) + if err != nil { + return err + } + d.SetId(ss_id) + + if raw, ok := d.GetOk("storage_servers"); ok { + + storage_servers := []oneandone.SharedStorageServer{} + + rawRights := raw.([]interface{}) + for _, raws_ss := 
range rawRights { + ss := raws_ss.(map[string]interface{}) + storage_server := oneandone.SharedStorageServer{ + Id: ss["id"].(string), + Rights: ss["rights"].(string), + } + storage_servers = append(storage_servers, storage_server) + } + + ss, err := config.API.AddSharedStorageServers(ss_id, storage_servers) + + if err != nil { + return err + } + + err = config.API.WaitForState(ss, "ACTIVE", 10, 30) + if err != nil { + return err + } + } + + return resourceOneandOneSharedStorageRead(d, meta) +} + +func resourceOneandOneSharedStorageUpdate(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + if d.HasChange("name") || d.HasChange("description") || d.HasChange("size") { + ssu := oneandone.SharedStorageRequest{} + if d.HasChange("name") { + _, n := d.GetChange("name") + ssu.Name = n.(string) + } + if d.HasChange("description") { + _, n := d.GetChange("description") + ssu.Description = n.(string) + } + if d.HasChange("size") { + _, n := d.GetChange("size") + ssu.Size = oneandone.Int2Pointer(n.(int)) + } + + ss, err := config.API.UpdateSharedStorage(d.Id(), &ssu) + + if err != nil { + return err + } + err = config.API.WaitForState(ss, "ACTIVE", 10, 30) + if err != nil { + return err + } + + } + + if d.HasChange("storage_servers") { + + o, n := d.GetChange("storage_servers") + + oldV := o.([]interface{}) + + for _, old := range oldV { + ol := old.(map[string]interface{}) + + ss, err := config.API.DeleteSharedStorageServer(d.Id(), ol["id"].(string)) + if err != nil { + return err + } + + err = config.API.WaitForState(ss, "ACTIVE", 10, config.Retries) + + if err != nil { + return err + } + + } + + newV := n.([]interface{}) + + ids := []oneandone.SharedStorageServer{} + for _, newValue := range newV { + nn := newValue.(map[string]interface{}) + ids = append(ids, oneandone.SharedStorageServer{ + Id: nn["id"].(string), + Rights: nn["rights"].(string), + }) + } + + if len(ids) > 0 { + ss, err := config.API.AddSharedStorageServers(d.Id(), ids) + if 
err != nil { + return err + } + + err = config.API.WaitForState(ss, "ACTIVE", 10, config.Retries) + + if err != nil { + return err + } + } + + //DeleteSharedStorageServer + + } + + return resourceOneandOneSharedStorageRead(d, meta) +} + +func resourceOneandOneSharedStorageRead(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + ss, err := config.API.GetSharedStorage(d.Id()) + if err != nil { + if strings.Contains(err.Error(), "404") { + d.SetId("") + return nil + } + return err + } + + d.Set("name", ss.Name) + d.Set("description", ss.Description) + d.Set("size", ss.Size) + d.Set("datacenter", ss.Datacenter.CountryCode) + d.Set("storage_servers", getStorageServers(ss.Servers)) + + return nil +} + +func getStorageServers(servers []oneandone.SharedStorageServer) []map[string]interface{} { + raw := make([]map[string]interface{}, 0, len(servers)) + + for _, server := range servers { + + toadd := map[string]interface{}{ + "id": server.Id, + "rights": server.Rights, + } + + raw = append(raw, toadd) + } + + return raw + +} +func resourceOneandOneSharedStorageDelete(d *schema.ResourceData, meta interface{}) error { + config := meta.(*Config) + + ss, err := config.API.DeleteSharedStorage(d.Id()) + if err != nil { + return err + } + err = config.API.WaitUntilDeleted(ss) + if err != nil { + return err + } + + return nil +} diff --git a/builtin/providers/oneandone/resources_oneandone_shared_storage_test.go b/builtin/providers/oneandone/resources_oneandone_shared_storage_test.go new file mode 100644 index 000000000..dcc07302a --- /dev/null +++ b/builtin/providers/oneandone/resources_oneandone_shared_storage_test.go @@ -0,0 +1,120 @@ +package oneandone + +import ( + "fmt" + "testing" + + "github.com/1and1/oneandone-cloudserver-sdk-go" + "github.com/hashicorp/terraform/helper/resource" + "github.com/hashicorp/terraform/terraform" + "os" + "time" +) + +func TestAccOneandoneSharedStorage_Basic(t *testing.T) { + var storage oneandone.SharedStorage + + 
name := "test_storage" + name_updated := "test1" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + }, + Providers: testAccProviders, + CheckDestroy: testAccCheckDOneandoneSharedStorageDestroyCheck, + Steps: []resource.TestStep{ + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckOneandoneSharedStorage_basic, name), + Check: resource.ComposeTestCheckFunc( + func(*terraform.State) error { + time.Sleep(10 * time.Second) + return nil + }, + testAccCheckOneandoneSharedStorageExists("oneandone_shared_storage.storage", &storage), + testAccCheckOneandoneSharedStorageAttributes("oneandone_shared_storage.storage", name), + resource.TestCheckResourceAttr("oneandone_shared_storage.storage", "name", name), + ), + }, + resource.TestStep{ + Config: fmt.Sprintf(testAccCheckOneandoneSharedStorage_basic, name_updated), + Check: resource.ComposeTestCheckFunc( + func(*terraform.State) error { + time.Sleep(10 * time.Second) + return nil + }, + testAccCheckOneandoneSharedStorageExists("oneandone_shared_storage.storage", &storage), + testAccCheckOneandoneSharedStorageAttributes("oneandone_shared_storage.storage", name_updated), + resource.TestCheckResourceAttr("oneandone_shared_storage.storage", "name", name_updated), + ), + }, + }, + }) +} + +func testAccCheckDOneandoneSharedStorageDestroyCheck(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "oneandone_shared_storage" { + continue + } + + api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl) + + _, err := api.GetSharedStorage(rs.Primary.ID) + + if err == nil { + return fmt.Errorf("Shared storage still exists %s", rs.Primary.ID) + } + } + + return nil +} +func testAccCheckOneandoneSharedStorageAttributes(n string, reverse_dns string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + if rs.Primary.Attributes["name"] != reverse_dns { +
return fmt.Errorf("Bad name: expected %s : found %s ", reverse_dns, rs.Primary.Attributes["name"]) + } + + return nil + } +} + +func testAccCheckOneandoneSharedStorageExists(n string, storage *oneandone.SharedStorage) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Record ID is set") + } + + api := oneandone.New(os.Getenv("ONEANDONE_TOKEN"), oneandone.BaseUrl) + + found_storage, err := api.GetSharedStorage(rs.Primary.ID) + + if err != nil { + return fmt.Errorf("Error occurred while fetching SharedStorage: %s", rs.Primary.ID) + } + if found_storage.Id != rs.Primary.ID { + return fmt.Errorf("Record not found") + } + *storage = *found_storage + + return nil + } +} + +const testAccCheckOneandoneSharedStorage_basic = ` +resource "oneandone_shared_storage" "storage" { + name = "%s" + description = "ttt" + size = 50 + datacenter = "GB" +}` diff --git a/builtin/providers/opc/resource_instance.go b/builtin/providers/opc/resource_instance.go index 686ff7b0a..7e413b13f 100644 --- a/builtin/providers/opc/resource_instance.go +++ b/builtin/providers/opc/resource_instance.go @@ -80,6 +80,7 @@ func resourceInstance() *schema.Resource { "label": { Type: schema.TypeString, Optional: true, + Computed: true, ForceNew: true, }, diff --git a/builtin/providers/opc/resource_instance_test.go b/builtin/providers/opc/resource_instance_test.go index a29f08c8d..d2371110f 100644 --- a/builtin/providers/opc/resource_instance_test.go +++ b/builtin/providers/opc/resource_instance_test.go @@ -136,6 +136,27 @@ func TestAccOPCInstance_storage(t *testing.T) { }) } +func TestAccOPCInstance_emptyLabel(t *testing.T) { + resName := "opc_compute_instance.test" + rInt := acctest.RandInt() + + resource.Test(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy:
testAccOPCCheckInstanceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceEmptyLabel(rInt), + Check: resource.ComposeTestCheckFunc( + testAccOPCCheckInstanceExists, + resource.TestCheckResourceAttr(resName, "name", fmt.Sprintf("acc-test-instance-%d", rInt)), + resource.TestCheckResourceAttrSet(resName, "label"), + ), + }, + }, + }) +} + func testAccOPCCheckInstanceExists(s *terraform.State) error { client := testAccProvider.Meta().(*compute.Client).Instances() @@ -271,3 +292,17 @@ resource "opc_compute_instance" "test" { } }`, rInt, rInt, rInt) } + +func testAccInstanceEmptyLabel(rInt int) string { + return fmt.Sprintf(` +resource "opc_compute_instance" "test" { + name = "acc-test-instance-%d" + shape = "oc3" + image_list = "/oracle/public/oel_6.7_apaas_16.4.5_1610211300" + instance_attributes = <> .bashrc +gimme 1.8 >> .bashrc mkdir ~/go -eval "$(/usr/local/bin/gimme 1.6)" +eval "$(/usr/local/bin/gimme 1.8)" echo 'export GOPATH=$HOME/go' >> .bashrc export GOPATH=$HOME/go @@ -24,3 +24,9 @@ source .bashrc go get -u github.com/kardianos/govendor go get github.com/hashicorp/terraform + +cat <<EOF > ~/rabbitmqrc +export RABBITMQ_ENDPOINT="http://127.0.0.1:15672" +export RABBITMQ_USERNAME="guest" +export RABBITMQ_PASSWORD="guest" +EOF diff --git a/builtin/providers/rabbitmq/resource_binding_test.go b/builtin/providers/rabbitmq/resource_binding_test.go index 8b710f98c..ccd9d646c 100644 --- a/builtin/providers/rabbitmq/resource_binding_test.go +++ b/builtin/providers/rabbitmq/resource_binding_test.go @@ -11,7 +11,7 @@ import ( "github.com/hashicorp/terraform/terraform" ) -func TestAccBinding(t *testing.T) { +func TestAccBinding_basic(t *testing.T) { var bindingInfo rabbithole.BindingInfo resource.Test(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -28,6 +28,23 @@ func TestAccBinding(t *testing.T) { }) } +func TestAccBinding_propertiesKey(t *testing.T) { + var bindingInfo rabbithole.BindingInfo + resource.Test(t, resource.TestCase{ + PreCheck: 
func() { testAccPreCheck(t) }, + Providers: testAccProviders, + CheckDestroy: testAccBindingCheckDestroy(bindingInfo), + Steps: []resource.TestStep{ + resource.TestStep{ + Config: testAccBindingConfig_propertiesKey, + Check: testAccBindingCheck( + "rabbitmq_binding.test", &bindingInfo, + ), + }, + }, + }) +} + func testAccBindingCheck(rn string, bindingInfo *rabbithole.BindingInfo) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[rn] @@ -119,3 +136,47 @@ resource "rabbitmq_binding" "test" { routing_key = "#" properties_key = "%23" }` + +const testAccBindingConfig_propertiesKey = ` +resource "rabbitmq_vhost" "test" { + name = "test" +} + +resource "rabbitmq_permissions" "guest" { + user = "guest" + vhost = "${rabbitmq_vhost.test.name}" + permissions { + configure = ".*" + write = ".*" + read = ".*" + } +} + +resource "rabbitmq_exchange" "test" { + name = "Test" + vhost = "${rabbitmq_permissions.guest.vhost}" + settings { + type = "topic" + durable = true + auto_delete = false + } +} + +resource "rabbitmq_queue" "test" { + name = "Test.Queue" + vhost = "${rabbitmq_permissions.guest.vhost}" + settings { + durable = true + auto_delete = false + } +} + +resource "rabbitmq_binding" "test" { + source = "${rabbitmq_exchange.test.name}" + vhost = "${rabbitmq_vhost.test.name}" + destination = "${rabbitmq_queue.test.name}" + destination_type = "queue" + routing_key = "ANYTHING.#" + properties_key = "ANYTHING.%23" +} +` diff --git a/builtin/providers/rancher/resource_rancher_stack.go b/builtin/providers/rancher/resource_rancher_stack.go index e8a1b1052..c928cd1f2 100644 --- a/builtin/providers/rancher/resource_rancher_stack.go +++ b/builtin/providers/rancher/resource_rancher_stack.go @@ -76,14 +76,12 @@ func resourceRancherStack() *schema.Resource { Optional: true, }, "rendered_docker_compose": { - Type: schema.TypeString, - Computed: true, - DiffSuppressFunc: suppressComposeDiff, + Type: schema.TypeString, + Computed: true, 
}, "rendered_rancher_compose": { - Type: schema.TypeString, - Computed: true, - DiffSuppressFunc: suppressComposeDiff, + Type: schema.TypeString, + Computed: true, }, }, } diff --git a/builtin/providers/template/datasource_cloudinit_config.go b/builtin/providers/template/datasource_cloudinit_config.go index 9fee9fb49..8a24c329a 100644 --- a/builtin/providers/template/datasource_cloudinit_config.go +++ b/builtin/providers/template/datasource_cloudinit_config.go @@ -140,7 +140,7 @@ func renderPartsToWriter(parts cloudInitParts, writer io.Writer) error { } writer.Write([]byte(fmt.Sprintf("Content-Type: multipart/mixed; boundary=\"%s\"\n", mimeWriter.Boundary()))) - writer.Write([]byte("MIME-Version: 1.0\r\n")) + writer.Write([]byte("MIME-Version: 1.0\r\n\r\n")) for _, part := range parts { header := textproto.MIMEHeader{} diff --git a/builtin/providers/template/datasource_cloudinit_config_test.go b/builtin/providers/template/datasource_cloudinit_config_test.go index 80f37348f..7ea49ca7d 100644 --- a/builtin/providers/template/datasource_cloudinit_config_test.go +++ b/builtin/providers/template/datasource_cloudinit_config_test.go @@ -22,7 +22,7 @@ func TestRender(t *testing.T) { content = "baz" } }`, - "Content-Type: multipart/mixed; boundary=\"MIMEBOUNDARY\"\nMIME-Version: 1.0\r\n--MIMEBOUNDARY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDARY--\r\n", + "Content-Type: multipart/mixed; boundary=\"MIMEBOUNDARY\"\nMIME-Version: 1.0\r\n\r\n--MIMEBOUNDARY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDARY--\r\n", }, { `data "template_cloudinit_config" "foo" { @@ -35,7 +35,7 @@ func TestRender(t *testing.T) { filename = "foobar.sh" } }`, - "Content-Type: multipart/mixed; boundary=\"MIMEBOUNDARY\"\nMIME-Version: 1.0\r\n--MIMEBOUNDARY\r\nContent-Disposition: attachment; filename=\"foobar.sh\"\r\nContent-Transfer-Encoding: 
7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDARY--\r\n", + "Content-Type: multipart/mixed; boundary=\"MIMEBOUNDARY\"\nMIME-Version: 1.0\r\n\r\n--MIMEBOUNDARY\r\nContent-Disposition: attachment; filename=\"foobar.sh\"\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDARY--\r\n", }, { `data "template_cloudinit_config" "foo" { @@ -51,7 +51,7 @@ func TestRender(t *testing.T) { content = "ffbaz" } }`, - "Content-Type: multipart/mixed; boundary=\"MIMEBOUNDARY\"\nMIME-Version: 1.0\r\n--MIMEBOUNDARY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDARY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nffbaz\r\n--MIMEBOUNDARY--\r\n", + "Content-Type: multipart/mixed; boundary=\"MIMEBOUNDARY\"\nMIME-Version: 1.0\r\n\r\n--MIMEBOUNDARY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDARY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nffbaz\r\n--MIMEBOUNDARY--\r\n", }, } diff --git a/command/init.go b/command/init.go index 6b7fbaf21..54a244cdd 100644 --- a/command/init.go +++ b/command/init.go @@ -30,6 +30,7 @@ func (c *InitCommand) Run(args []string) int { cmdFlags.BoolVar(&c.forceInitCopy, "force-copy", false, "suppress prompts about copying state data") cmdFlags.BoolVar(&c.Meta.stateLock, "lock", true, "lock state") cmdFlags.DurationVar(&c.Meta.stateLockTimeout, "lock-timeout", 0, "lock timeout") + cmdFlags.BoolVar(&c.reconfigure, "reconfigure", false, "reconfigure") cmdFlags.Usage = func() { c.Ui.Error(c.Help()) } if err := cmdFlags.Parse(args); err != nil { @@ -223,6 +224,10 @@ Options: times. The backend type must be in the configuration itself. + -force-copy Suppress prompts about copying state data. 
This is + equivalent to providing a "yes" to all confirmation + prompts. + -get=true Download any modules for this configuration. -input=true Ask for input if necessary. If false, will error if @@ -234,9 +239,7 @@ Options: -no-color If specified, output won't contain any color. - -force-copy Suppress prompts about copying state data. This is - equivalent to providing a "yes" to all confirmation - prompts. + -reconfigure Reconfigure the backend, ignoring any saved configuration. ` return strings.TrimSpace(helpText) } diff --git a/command/internal_plugin_list.go b/command/internal_plugin_list.go index 9e53c16b9..0092d8f88 100644 --- a/command/internal_plugin_list.go +++ b/command/internal_plugin_list.go @@ -47,6 +47,7 @@ import ( nomadprovider "github.com/hashicorp/terraform/builtin/providers/nomad" ns1provider "github.com/hashicorp/terraform/builtin/providers/ns1" nullprovider "github.com/hashicorp/terraform/builtin/providers/null" + oneandoneprovider "github.com/hashicorp/terraform/builtin/providers/oneandone" opcprovider "github.com/hashicorp/terraform/builtin/providers/opc" openstackprovider "github.com/hashicorp/terraform/builtin/providers/openstack" opsgenieprovider "github.com/hashicorp/terraform/builtin/providers/opsgenie" @@ -125,6 +126,7 @@ var InternalProviders = map[string]plugin.ProviderFunc{ "nomad": nomadprovider.Provider, "ns1": ns1provider.Provider, "null": nullprovider.Provider, + "oneandone": oneandoneprovider.Provider, "opc": opcprovider.Provider, "openstack": openstackprovider.Provider, "opsgenie": opsgenieprovider.Provider, diff --git a/command/meta.go b/command/meta.go index c494d9697..0b9375f72 100644 --- a/command/meta.go +++ b/command/meta.go @@ -95,6 +95,8 @@ type Meta struct { // // forceInitCopy suppresses confirmation for copying state data during // init. + // + // reconfigure forces init to ignore any stored configuration. 
statePath string stateOutPath string backupPath string @@ -104,6 +106,7 @@ stateLock bool stateLockTimeout time.Duration forceInitCopy bool + reconfigure bool } // initStatePaths is used to initialize the default values for diff --git a/command/meta_backend.go b/command/meta_backend.go index 415efa02c..6f75acc77 100644 --- a/command/meta_backend.go +++ b/command/meta_backend.go @@ -352,6 +352,13 @@ func (m *Meta) backendFromConfig(opts *BackendOpts) (backend.Backend, error) { s = terraform.NewState() } + // if we want to force reconfiguration of the backend, we set the backend + // state to nil on this copy. This will direct us through the correct + // configuration path in the switch statement below. + if m.reconfigure { + s.Backend = nil + } + // Upon return, we want to set the state we're using in-memory so that // we can access it for commands. m.backendState = nil diff --git a/command/meta_backend_test.go b/command/meta_backend_test.go index ffb6e3b61..aa4a02d2c 100644 --- a/command/meta_backend_test.go +++ b/command/meta_backend_test.go @@ -983,6 +983,59 @@ func TestMetaBackend_configuredChange(t *testing.T) { } } +// Reconfiguring with an already configured backend. +// This should ignore the existing backend config, and configure the new +// backend as if this is the first time. 
+func TestMetaBackend_reconfigureChange(t *testing.T) { + // Create a temporary working directory that is empty + td := tempDir(t) + copy.CopyDir(testFixturePath("backend-change-single-to-single"), td) + defer os.RemoveAll(td) + defer testChdir(t, td)() + + // Register the single-state backend + backendinit.Set("local-single", backendlocal.TestNewLocalSingle) + defer backendinit.Set("local-single", nil) + + // Setup the meta + m := testMetaBackend(t, nil) + + // this should not ask for input + m.input = false + + // cli flag -reconfigure + m.reconfigure = true + + // Get the backend + b, err := m.Backend(&BackendOpts{Init: true}) + if err != nil { + t.Fatalf("bad: %s", err) + } + + // Check the state + s, err := b.State(backend.DefaultStateName) + if err != nil { + t.Fatalf("bad: %s", err) + } + if err := s.RefreshState(); err != nil { + t.Fatalf("bad: %s", err) + } + newState := s.State() + if newState != nil && !newState.Empty() { + t.Fatal("state should be nil/empty after forced reconfiguration") + } + + // verify that the old state is still there + s = (&state.LocalState{Path: "local-state.tfstate"}) + if err := s.RefreshState(); err != nil { + t.Fatal(err) + } + oldState := s.State() + if oldState == nil || oldState.Empty() { + t.Fatal("original state should be untouched") + } +} + // Changing a configured backend, copying state func TestMetaBackend_configuredChangeCopy(t *testing.T) { // Create a temporary working directory that is empty diff --git a/config/interpolate_funcs.go b/config/interpolate_funcs.go index bc2c49f45..b79334718 100644 --- a/config/interpolate_funcs.go +++ b/config/interpolate_funcs.go @@ -63,12 +63,14 @@ func Funcs() map[string]ast.Function { "cidrnetmask": interpolationFuncCidrNetmask(), "cidrsubnet": interpolationFuncCidrSubnet(), "coalesce": interpolationFuncCoalesce(), + "coalescelist": interpolationFuncCoalesceList(), "compact": interpolationFuncCompact(), "concat": interpolationFuncConcat(), "dirname": interpolationFuncDirname(), 
"distinct": interpolationFuncDistinct(), "element": interpolationFuncElement(), "file": interpolationFuncFile(), + "matchkeys": interpolationFuncMatchKeys(), "floor": interpolationFuncFloor(), "format": interpolationFuncFormat(), "formatlist": interpolationFuncFormatList(), @@ -323,6 +325,30 @@ func interpolationFuncCoalesce() ast.Function { } } +// interpolationFuncCoalesceList implements the "coalescelist" function that +// returns the first non empty list from the provided input +func interpolationFuncCoalesceList() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeList}, + ReturnType: ast.TypeList, + Variadic: true, + VariadicType: ast.TypeList, + Callback: func(args []interface{}) (interface{}, error) { + if len(args) < 2 { + return nil, fmt.Errorf("must provide at least two arguments") + } + for _, arg := range args { + argument := arg.([]ast.Variable) + + if len(argument) > 0 { + return argument, nil + } + } + return make([]ast.Variable, 0), nil + }, + } +} + // interpolationFuncConcat implements the "concat" function that concatenates // multiple lists. func interpolationFuncConcat() ast.Function { @@ -668,6 +694,57 @@ func appendIfMissing(slice []string, element string) []string { return append(slice, element) } +// for two lists `keys` and `values` of equal length, returns all elements +// from `values` where the corresponding element from `keys` is in `searchset`. 
+func interpolationFuncMatchKeys() ast.Function { + return ast.Function{ + ArgTypes: []ast.Type{ast.TypeList, ast.TypeList, ast.TypeList}, + ReturnType: ast.TypeList, + Callback: func(args []interface{}) (interface{}, error) { + output := make([]ast.Variable, 0) + + values, _ := args[0].([]ast.Variable) + keys, _ := args[1].([]ast.Variable) + searchset, _ := args[2].([]ast.Variable) + + if len(keys) != len(values) { + return nil, fmt.Errorf("length of keys and values should be equal") + } + + for i, key := range keys { + for _, search := range searchset { + if res, err := compareSimpleVariables(key, search); err != nil { + return nil, err + } else if res == true { + output = append(output, values[i]) + break + } + } + } + // if searchset is empty, then output is an empty list as well. + // if we haven't matched any key, then output is an empty list. + return output, nil + }, + } +} + +// compare two variables of the same type, i.e. non complex one, such as TypeList or TypeMap +func compareSimpleVariables(a, b ast.Variable) (bool, error) { + if a.Type != b.Type { + return false, fmt.Errorf( + "won't compare items of different types %s and %s", + a.Type.Printable(), b.Type.Printable()) + } + switch a.Type { + case ast.TypeString: + return a.Value.(string) == b.Value.(string), nil + default: + return false, fmt.Errorf( + "can't compare items of type %s", + a.Type.Printable()) + } +} + // interpolationFuncJoin implements the "join" function that allows // multi-variable values to be joined by some character. 
func interpolationFuncJoin() ast.Function { diff --git a/config/interpolate_funcs_test.go b/config/interpolate_funcs_test.go index 801be6dbb..57e59afa6 100644 --- a/config/interpolate_funcs_test.go +++ b/config/interpolate_funcs_test.go @@ -684,6 +684,33 @@ func TestInterpolateFuncCoalesce(t *testing.T) { }) } +func TestInterpolateFuncCoalesceList(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + { + `${coalescelist(list("first"), list("second"), list("third"))}`, + []interface{}{"first"}, + false, + }, + { + `${coalescelist(list(), list("second"), list("third"))}`, + []interface{}{"second"}, + false, + }, + { + `${coalescelist(list(), list(), list())}`, + []interface{}{}, + false, + }, + { + `${coalescelist(list("foo"))}`, + nil, + true, + }, + }, + }) +} + func TestInterpolateFuncConcat(t *testing.T) { testFunction(t, testFunctionConfig{ Cases: []testFunctionCase{ @@ -964,6 +991,74 @@ func TestInterpolateFuncDistinct(t *testing.T) { }) } +func TestInterpolateFuncMatchKeys(t *testing.T) { + testFunction(t, testFunctionConfig{ + Cases: []testFunctionCase{ + // normal usage + { + `${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list("ref2"))}`, + []interface{}{"b"}, + false, + }, + // normal usage 2, check the order + { + `${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list("ref2", "ref1"))}`, + []interface{}{"a", "b"}, + false, + }, + // duplicate item in searchset + { + `${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list("ref2", "ref2"))}`, + []interface{}{"b"}, + false, + }, + // no matches + { + `${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list("ref4"))}`, + []interface{}{}, + false, + }, + // no matches 2 + { + `${matchkeys(list("a", "b", "c"), list("ref1", "ref2", "ref3"), list())}`, + []interface{}{}, + false, + }, + // zero case + { + `${matchkeys(list(), list(), list("nope"))}`, + []interface{}{}, + false, + }, + // complex values + { + 
`${matchkeys(list(list("a", "a")), list("a"), list("a"))}`, + []interface{}{[]interface{}{"a", "a"}}, + false, + }, + // errors + // different types + { + `${matchkeys(list("a"), list(1), list("a"))}`, + nil, + true, + }, + // different types + { + `${matchkeys(list("a"), list(list("a"), list("a")), list("a"))}`, + nil, + true, + }, + // lists of different length is an error + { + `${matchkeys(list("a"), list("a", "b"), list("a"))}`, + nil, + true, + }, + }, + }) +} + func TestInterpolateFuncFile(t *testing.T) { tf, err := ioutil.TempFile("", "tf") if err != nil { diff --git a/helper/schema/schema.go b/helper/schema/schema.go index d04f05b35..32d172139 100644 --- a/helper/schema/schema.go +++ b/helper/schema/schema.go @@ -645,6 +645,19 @@ func (m schemaMap) InternalValidate(topSchemaMap schemaMap) error { } } + // Computed-only field + if v.Computed && !v.Optional { + if v.ValidateFunc != nil { + return fmt.Errorf("%s: ValidateFunc is for validating user input, "+ + "there's nothing to validate on computed-only field", k) + } + if v.DiffSuppressFunc != nil { + return fmt.Errorf("%s: DiffSuppressFunc is for suppressing differences"+ + " between config and state representation. 
"+ + "There is no config for computed-only field, nothing to compare.", k) + } + } + if v.ValidateFunc != nil { switch v.Type { case TypeList, TypeSet: @@ -744,6 +757,7 @@ func (m schemaMap) diffList( diff.Attributes[k+".#"] = &terraform.ResourceAttrDiff{ Old: oldStr, NewComputed: true, + RequiresNew: schema.ForceNew, } return nil } diff --git a/helper/schema/schema_test.go b/helper/schema/schema_test.go index d2f667576..3f8ca2329 100644 --- a/helper/schema/schema_test.go +++ b/helper/schema/schema_test.go @@ -2777,6 +2777,52 @@ func TestSchemaMap_Diff(t *testing.T) { }, }, }, + + { + Name: "List with computed schema and ForceNew", + Schema: map[string]*Schema{ + "config": &Schema{ + Type: TypeList, + Optional: true, + ForceNew: true, + Elem: &Schema{ + Type: TypeString, + }, + }, + }, + + State: &terraform.InstanceState{ + Attributes: map[string]string{ + "config.#": "2", + "config.0": "a", + "config.1": "b", + }, + }, + + Config: map[string]interface{}{ + "config": []interface{}{"${var.a}", "${var.b}"}, + }, + + ConfigVariables: map[string]ast.Variable{ + "var.a": interfaceToVariableSwallowError( + config.UnknownVariableValue), + "var.b": interfaceToVariableSwallowError( + config.UnknownVariableValue), + }, + + Diff: &terraform.InstanceDiff{ + Attributes: map[string]*terraform.ResourceAttrDiff{ + "config.#": &terraform.ResourceAttrDiff{ + Old: "2", + New: "", + RequiresNew: true, + NewComputed: true, + }, + }, + }, + + Err: false, + }, } for i, tc := range cases { @@ -3279,16 +3325,46 @@ func TestSchemaMap_InternalValidate(t *testing.T) { }, true, }, + + "computed-only field with validateFunc": { + map[string]*Schema{ + "string": &Schema{ + Type: TypeString, + Computed: true, + ValidateFunc: func(v interface{}, k string) (ws []string, es []error) { + es = append(es, fmt.Errorf("this is not fine")) + return + }, + }, + }, + true, + }, + + "computed-only field with diffSuppressFunc": { + map[string]*Schema{ + "string": &Schema{ + Type: TypeString, + Computed: true, 
+ DiffSuppressFunc: func(k, old, new string, d *ResourceData) bool { + // Dummy implementation; InternalValidate should reject any + // DiffSuppressFunc on a computed-only field, so the return + // value is irrelevant here + return false + }, + }, + }, + true, + }, } for tn, tc := range cases { - err := schemaMap(tc.In).InternalValidate(nil) - if err != nil != tc.Err { - if tc.Err { - t.Fatalf("%q: Expected error did not occur:\n\n%#v", tn, tc.In) + t.Run(tn, func(t *testing.T) { + err := schemaMap(tc.In).InternalValidate(nil) + if err != nil != tc.Err { + if tc.Err { + t.Fatalf("%q: Expected error did not occur:\n\n%#v", tn, tc.In) + } + t.Fatalf("%q: Unexpected error occurred: %s\n\n%#v", tn, err, tc.In) } - t.Fatalf("%q: Unexpected error occurred:\n\n%#v", tn, tc.In) - } + }) } } diff --git a/terraform/context_apply_test.go b/terraform/context_apply_test.go index 7d5bbf025..f2f84a5f2 100644 --- a/terraform/context_apply_test.go +++ b/terraform/context_apply_test.go @@ -4430,6 +4430,7 @@ func TestContext2Apply_provisionerDestroyFailContinue(t *testing.T) { p.ApplyFn = testApplyFn p.DiffFn = testDiffFn + var l sync.Mutex var calls []string pr.ApplyFn = func(rs *InstanceState, c *ResourceConfig) error { val, ok := c.Config["foo"] @@ -4437,6 +4438,8 @@ func TestContext2Apply_provisionerDestroyFailContinue(t *testing.T) { t.Fatalf("bad value for foo: %v %#v", val, c) } + l.Lock() + defer l.Unlock() calls = append(calls, val.(string)) return fmt.Errorf("provisioner error") } @@ -4501,6 +4504,7 @@ func TestContext2Apply_provisionerDestroyFailContinueFail(t *testing.T) { p.ApplyFn = testApplyFn p.DiffFn = testDiffFn + var l sync.Mutex var calls []string pr.ApplyFn = func(rs *InstanceState, c *ResourceConfig) error { val, ok := c.Config["foo"] @@ -4508,6 +4512,8 @@ func TestContext2Apply_provisionerDestroyFailContinueFail(t *testing.T) { t.Fatalf("bad value for foo: %v %#v", val, c) } + l.Lock() + defer l.Unlock() calls = append(calls, val.(string)) return fmt.Errorf("provisioner error") } diff --git a/terraform/context_plan_test.go b/terraform/context_plan_test.go index 7064f6465..3e8c54190 
100644 --- a/terraform/context_plan_test.go +++ b/terraform/context_plan_test.go @@ -532,6 +532,9 @@ func TestContext2Plan_moduleProviderInherit(t *testing.T) { state *InstanceState, c *ResourceConfig) (*InstanceDiff, error) { v, _ := c.Get("from") + + l.Lock() + defer l.Unlock() calls = append(calls, v.(string)) return testDiffFn(info, state, c) } @@ -628,6 +631,9 @@ func TestContext2Plan_moduleProviderDefaults(t *testing.T) { state *InstanceState, c *ResourceConfig) (*InstanceDiff, error) { v, _ := c.Get("from") + + l.Lock() + defer l.Unlock() calls = append(calls, v.(string)) return testDiffFn(info, state, c) } @@ -677,6 +683,8 @@ func TestContext2Plan_moduleProviderDefaultsVar(t *testing.T) { buf.WriteString(v.(string) + "\n") } + l.Lock() + defer l.Unlock() calls = append(calls, buf.String()) return nil } diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/LICENSE b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/LICENSE new file mode 100644 index 000000000..9fb7e22bc --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/LICENSE @@ -0,0 +1,202 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. 
You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. 
Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright (c) 2016 1&1 Internet SE + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/README.md b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/README.md new file mode 100644 index 000000000..adb9cd19b --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/README.md @@ -0,0 +1,2573 @@ +# 1&1 Cloudserver Go SDK + +The 1&1 Go SDK is a Go library designed for interaction with the 1&1 cloud platform over the REST API. + +This guide contains instructions on getting started with the library and automating various management tasks available through the 1&1 Cloud Panel UI. 
+ +## Table of Contents + +- [Overview](#overview) +- [Getting Started](#getting-started) + - [Installation](#installation) + - [Authentication](#authentication) +- [Operations](#operations) + - [Servers](#servers) + - [Images](#images) + - [Shared Storages](#shared-storages) + - [Firewall Policies](#firewall-policies) + - [Load Balancers](#load-balancers) + - [Public IPs](#public-ips) + - [Private Networks](#private-networks) + - [VPNs](#vpns) + - [Monitoring Center](#monitoring-center) + - [Monitoring Policies](#monitoring-policies) + - [Logs](#logs) + - [Users](#users) + - [Roles](#roles) + - [Usages](#usages) + - [Server Appliances](#server-appliances) + - [DVD ISO](#dvd-iso) + - [Ping](#ping) + - [Pricing](#pricing) + - [Data Centers](#data-centers) +- [Examples](#examples) +- [Index](#index) + +## Overview + +This SDK is a wrapper for the 1&1 REST API written in Go(lang). All operations against the API are performed over SSL and authenticated using your 1&1 token key. The Go library facilitates the access to the REST API either within an instance running on 1&1 platform or directly across the Internet from any HTTPS-enabled application. + +For more information on the 1&1 Cloud Server SDK for Go, visit the [Community Portal](https://www.1and1.com/cloud-community/). + +## Getting Started + +Before you begin you will need to have signed up for a 1&1 account. The credentials you create during sign-up will be used to authenticate against the API. + +Install the Go language tools. Find the install package and instructions on the official Go website. Make sure that you have set up the `GOPATH` environment variable properly, as indicated in the instructions. + +### Installation + +The official Go library is available from the 1&1 GitHub account found here. 
+ +Use the following Go command to download oneandone-cloudserver-sdk-go to your configured GOPATH: + +`go get github.com/1and1/oneandone-cloudserver-sdk-go` + +Import the library in your Go code: + +`import "github.com/1and1/oneandone-cloudserver-sdk-go"` + +### Authentication + +Set the authentication token and create the API client: + +``` +token := oneandone.SetToken("82ee732b8d47e451be5c6ad5b7b56c81") +api := oneandone.New(token, oneandone.BaseUrl) +``` + +Refer to the [Examples](#examples) and [Operations](#operations) sections for additional information. + +## Operations + +### Servers + +**List all servers:** + +`servers, err := api.ListServers()` + +Alternatively, use the method with query parameters. + +`servers, err := api.ListServers(page, per_page, sort, query, fields)` + +To paginate the list of servers received in the response use `page` and `per_page` parameters. Set `per_page` to the number of servers that will be shown in each page. `page` indicates the current page. When set to an integer value that is less or equal to zero, the parameters are ignored by the framework. + +To receive the list of servers sorted in expected order pass a server property (e.g. `"name"`) in `sort` parameter. + +Use `query` parameter to search for a string in the response and return only the server instances that contain it. + +To retrieve a collection of servers containing only the requested fields pass a list of comma separated properties (e.g. `"id,name,description,hardware.ram"`) in `fields` parameter. + +If any of the parameters `sort`, `query` or `fields` is set to an empty string, it is ignored in the request. 
+ +**Retrieve a single server:** + +`server, err := api.GetServer(server_id)` + +**List fixed-size server templates:** + +`fiss, err := api.ListFixedInstanceSizes()` + +**Retrieve information about a fixed-size server template:** + +`fis, err := api.GetFixedInstanceSize(fis_id)` + +**Retrieve information about a server's hardware:** + +`hardware, err := api.GetServerHardware(server_id)` + +**List a server's HDDs:** + +`hdds, err := api.ListServerHdds(server_id)` + +**Retrieve a single server HDD:** + +`hdd, err := api.GetServerHdd(server_id, hdd_id)` + +**Retrieve information about a server's image:** + +`image, err := api.GetServerImage(server_id)` + +**List a server's IPs:** + +`ips, err := api.ListServerIps(server_id)` + +**Retrieve information about a single server IP:** + +`ip, err := api.GetServerIp(server_id, ip_id)` + +**Retrieve information about a server's firewall policy:** + +`firewall, err := api.GetServerIpFirewallPolicy(server_id, ip_id)` + +**List all load balancers assigned to a server IP:** + +`lbs, err := api.ListServerIpLoadBalancers(server_id, ip_id)` + +**Retrieve information about a server's status:** + +`status, err := api.GetServerStatus(server_id)` + +**Retrieve information about the DVD loaded into the virtual DVD unit of a server:** + +`dvd, err := api.GetServerDvd(server_id)` + +**List a server's private networks:** + +`pns, err := api.ListServerPrivateNetworks(server_id)` + +**Retrieve information about a server's private network:** + +`pn, err := api.GetServerPrivateNetwork(server_id, pn_id)` + +**Retrieve information about a server's snapshot:** + +`snapshot, err := api.GetServerSnapshot(server_id)` + +**Create a server:** + +``` +req := oneandone.ServerRequest { + Name: "Server Name", + Description: "Server description.", + ApplianceId: server_appliance_id, + PowerOn: true, + Hardware: oneandone.Hardware { + Vcores: 1, + CoresPerProcessor: 1, + Ram: 2, + Hdds: []oneandone.Hdd { + oneandone.Hdd { + Size: 100, + IsMain: true, + }, + 
}, + }, + } + +server_id, server, err := api.CreateServer(&req) +``` + +**Create a fixed-size server and return back the server's IP address and first password:** + +``` +req := oneandone.ServerRequest { + Name: server_name, + ApplianceId: server_appliance_id, + PowerOn: true_or_false, + Hardware: oneandone.Hardware { + FixedInsSizeId: fixed_instance_size_id, + }, + } + +ip_address, password, err := api.CreateServerEx(&req, timeout) +``` + +**Update a server:** + +`server, err := api.RenameServer(server_id, new_name, new_desc)` + +**Delete a server:** + +`server, err := api.DeleteServer(server_id, keep_ips)` + +Set `keep_ips` parameter to `true` for keeping server IPs after deleting a server. + +**Update a server's hardware:** + +``` +hardware := oneandone.Hardware { + Vcores: 2, + CoresPerProcessor: 1, + Ram: 2, + } + +server, err := api.UpdateServerHardware(server_id, &hardware) +``` + +**Add new hard disk(s) to a server:** + +``` +hdds := oneandone.ServerHdds { + Hdds: []oneandone.Hdd { + { + Size: 50, + IsMain: false, + }, + }, + } + +server, err := api.AddServerHdds(server_id, &hdds) +``` + +**Resize a server's hard disk:** + +`server, err := api.ResizeServerHdd(server_id, hdd_id, new_size)` + +**Remove a server's hard disk:** + +`server, err := api.DeleteServerHdd(server_id, hdd_id)` + +**Load a DVD into the virtual DVD unit of a server:** + +`server, err := api.LoadServerDvd(server_id, dvd_id)` + +**Unload a DVD from the virtual DVD unit of a server:** + +`server, err := api.EjectServerDvd(server_id)` + +**Reinstall a new image into a server:** + +`server, err := api.ReinstallServerImage(server_id, image_id, password, fp_id)` + +**Assign a new IP to a server:** + +`server, err := api.AssignServerIp(server_id, ip_type)` + +**Release an IP and optionally remove it from a server:** + +`server, err := api.DeleteServerIp(server_id, ip_id, keep_ip)` + +Set `keep_ip` to true for releasing the IP without removing it. 
+ +**Assign a new firewall policy to a server's IP:** + +`server, err := api.AssignServerIpFirewallPolicy(server_id, ip_id, fp_id)` + +**Remove a firewall policy from a server's IP:** + +`server, err := api.UnassignServerIpFirewallPolicy(server_id, ip_id)` + +**Assign a new load balancer to a server's IP:** + +`server, err := api.AssignServerIpLoadBalancer(server_id, ip_id, lb_id)` + +**Remove a load balancer from a server's IP:** + +`server, err := api.UnassignServerIpLoadBalancer(server_id, ip_id, lb_id)` + +**Start a server:** + +`server, err := api.StartServer(server_id)` + +**Reboot a server:** + +`server, err := api.RebootServer(server_id, is_hardware)` + +Set `is_hardware` to `true` for the HARDWARE method of rebooting, or to `false` for the SOFTWARE method. + +**Shut down a server:** + +`server, err := api.ShutdownServer(server_id, is_hardware)` + +Set `is_hardware` to `true` for the HARDWARE method of powering off, or to `false` for the SOFTWARE method. + +**Assign a private network to a server:** + +`server, err := api.AssignServerPrivateNetwork(server_id, pn_id)` + +**Remove a server's private network:** + +`server, err := api.RemoveServerPrivateNetwork(server_id, pn_id)` + +**Create a new server snapshot:** + +`server, err := api.CreateServerSnapshot(server_id)` + +**Restore a server's snapshot:** + +`server, err := api.RestoreServerSnapshot(server_id, snapshot_id)` + +**Remove a server's snapshot:** + +`server, err := api.DeleteServerSnapshot(server_id, snapshot_id)` + +**Clone a server:** + +`server, err := api.CloneServer(server_id, new_name)` + + +### Images + +**List all images:** + +`images, err = api.ListImages()` + +Alternatively, use the method with query parameters. + +`images, err = api.ListImages(page, per_page, sort, query, fields)` + +To paginate the list of images received in the response use `page` and `per_page` parameters. Set `per_page` to the number of images that will be shown in each page.
`page` indicates the current page. When set to an integer value that is less than or equal to zero, the parameters are ignored by the framework. + +To receive the list of images sorted in expected order pass an image property (e.g. `"name"`) in `sort` parameter. Prefix the sorting attribute with `-` sign for sorting in descending order. + +Use `query` parameter to search for a string in the response and return only the elements that contain it. + +To retrieve a collection of images containing only the requested fields pass a list of comma separated properties (e.g. `"id,name,creation_date"`) in `fields` parameter. + +If any of the parameters `sort`, `query` or `fields` is set to an empty string, it is ignored in the request. + +**Retrieve a single image:** + +`image, err = api.GetImage(image_id)` + + +**Create an image:** + +``` +request := oneandone.ImageConfig { + Name: image_name, + Description: image_description, + ServerId: server_id, + Frequency: image_frequency, + NumImages: number_of_images, + } + +image_id, image, err = api.CreateImage(&request) +``` +All fields except `Description` are required. `Frequency` may be set to `"ONCE"`, `"DAILY"` or `"WEEKLY"`. + +**Update an image:** + +`image, err = api.UpdateImage(image_id, new_name, new_description, new_frequency)` + +If any of the parameters `new_name`, `new_description` or `new_frequency` is set to an empty string, it is ignored in the request. `Frequency` may be set to `"ONCE"`, `"DAILY"` or `"WEEKLY"`. + +**Delete an image:** + +`image, err = api.DeleteImage(image_id)` + +### Shared Storages + +**List all shared storages:** + +`ss, err := api.ListSharedStorages()` + +Alternatively, use the method with query parameters. + +`ss, err := api.ListSharedStorages(page, per_page, sort, query, fields)` + +To paginate the list of shared storages received in the response use `page` and `per_page` parameters. Set `per_page` to the number of volumes that will be shown in each page. `page` indicates the current page.
When set to an integer value that is less than or equal to zero, the parameters are ignored by the framework. + +To receive the list of shared storages sorted in expected order pass a volume property (e.g. `"name"`) in `sort` parameter. Prefix the sorting attribute with `-` sign for sorting in descending order. + +Use `query` parameter to search for a string in the response and return only the volume instances that contain it. + +To retrieve a collection of shared storages containing only the requested fields pass a list of comma separated properties (e.g. `"id,name,size,size_used"`) in `fields` parameter. + +If any of the parameters `sort`, `query` or `fields` is set to an empty string, it is ignored in the request. + +**Retrieve a shared storage:** + +`ss, err := api.GetSharedStorage(ss_id)` + + +**Create a shared storage:** + +``` +request := oneandone.SharedStorageRequest { + Name: test_ss_name, + Description: test_ss_desc, + Size: oneandone.Int2Pointer(size), + } + +ss_id, ss, err := api.CreateSharedStorage(&request) +``` +`Description` is an optional parameter. + + +**Update a shared storage:** + +``` +request := oneandone.SharedStorageRequest { + Name: new_name, + Description: new_desc, + Size: oneandone.Int2Pointer(new_size), + } + +ss, err := api.UpdateSharedStorage(ss_id, &request) +``` +All request parameters are optional. + + +**Remove a shared storage:** + +`ss, err := api.DeleteSharedStorage(ss_id)` + + +**List a shared storage's servers:** + +`ss_servers, err := api.ListSharedStorageServers(ss_id)` + + +**Retrieve a shared storage server:** + +`ss_server, err := api.GetSharedStorageServer(ss_id, server_id)` + + +**Add servers to a shared storage:** + +``` +servers := []oneandone.SharedStorageServer { + { + Id: server_id, + Rights: permissions, + }, + } + +ss, err := api.AddSharedStorageServers(ss_id, servers) +``` +`Rights` may be set to `"R"` or `"RW"`.
+ + +**Remove a server from a shared storage:** + +`ss, err := api.DeleteSharedStorageServer(ss_id, server_id)` + + +**Retrieve the credentials for accessing the shared storages:** + +`ss_credentials, err := api.GetSharedStorageCredentials()` + + +**Change the password for accessing the shared storages:** + +`ss_credentials, err := api.UpdateSharedStorageCredentials(new_password)` + + +### Firewall Policies + +**List firewall policies:** + +`firewalls, err := api.ListFirewallPolicies()` + +Alternatively, use the method with query parameters. + +`firewalls, err := api.ListFirewallPolicies(page, per_page, sort, query, fields)` + +To paginate the list of firewall policies received in the response use `page` and `per_page` parameters. Set `per_page` to the number of firewall policies that will be shown in each page. `page` indicates the current page. When set to an integer value that is less or equal to zero, the parameters are ignored by the framework. + +To receive the list of firewall policies sorted in expected order pass a firewall policy property (e.g. `"name"`) in `sort` parameter. Prefix the sorting attribute with `-` sign for sorting in descending order. + +Use `query` parameter to search for a string in the response and return only the firewall policy instances that contain it. + +To retrieve a collection of firewall policies containing only the requested fields pass a list of comma separated properties (e.g. `"id,name,creation_date"`) in `fields` parameter. + +If any of the parameters `sort`, `query` or `fields` is set to an empty string, it is ignored in the request. 
+ +**Retrieve a single firewall policy:** + +`firewall, err := api.GetFirewallPolicy(fp_id)` + + +**Create a firewall policy:** + +``` +request := oneandone.FirewallPolicyRequest { + Name: fp_name, + Description: fp_desc, + Rules: []oneandone.FirewallPolicyRule { + { + Protocol: protocol, + PortFrom: oneandone.Int2Pointer(port_from), + PortTo: oneandone.Int2Pointer(port_to), + SourceIp: source_ip, + }, + }, + } + +firewall_id, firewall, err := api.CreateFirewallPolicy(&request) +``` +`SourceIp` and `Description` are optional parameters. + + +**Update a firewall policy:** + +`firewall, err := api.UpdateFirewallPolicy(fp_id, fp_new_name, fp_new_description)` + +Passing an empty string in `fp_new_name` or `fp_new_description` skips updating the firewall policy name or description respectively. + + +**Delete a firewall policy:** + +`firewall, err := api.DeleteFirewallPolicy(fp_id)` + + +**List servers/IPs attached to a firewall policy:** + +`server_ips, err := api.ListFirewallPolicyServerIps(fp_id)` + + +**Retrieve information about a server/IP assigned to a firewall policy:** + +`server_ip, err := api.GetFirewallPolicyServerIp(fp_id, ip_id)` + + +**Add servers/IPs to a firewall policy:** + +`firewall, err := api.AddFirewallPolicyServerIps(fp_id, ip_ids)` + +`ip_ids` is a slice of IP ID's. 
+ + +**Remove a server/IP from a firewall policy:** + +`firewall, err := api.DeleteFirewallPolicyServerIp(fp_id, ip_id)` + + +**List rules of a firewall policy:** + +`fp_rules, err := api.ListFirewallPolicyRules(fp_id)` + + +**Retrieve information about a rule of a firewall policy:** + +`fp_rule, err := api.GetFirewallPolicyRule(fp_id, rule_id)` + + +**Adds new rules to a firewall policy:** + +``` +fp_rules := []oneandone.FirewallPolicyRule { + { + Protocol: protocol1, + PortFrom: oneandone.Int2Pointer(port_from1), + PortTo: oneandone.Int2Pointer(port_to1), + SourceIp: source_ip, + }, + { + Protocol: protocol2, + PortFrom: oneandone.Int2Pointer(port_from2), + PortTo: oneandone.Int2Pointer(port_to2), + }, + } + +firewall, err := api.AddFirewallPolicyRules(fp_id, fp_rules) +``` + +**Remove a rule from a firewall policy:** + +`firewall, err := api.DeleteFirewallPolicyRule(fp_id, rule_id)` + + +### Load Balancers + +**List load balancers:** + +`loadbalancers, err := api.ListLoadBalancers()` + +Alternatively, use the method with query parameters. + +`loadbalancers, err := api.ListLoadBalancers(page, per_page, sort, query, fields)` + +To paginate the list of load balancers received in the response use `page` and `per_page` parameters. Set `per_page` to the number of load balancers that will be shown in each page. `page` indicates the current page. When set to an integer value that is less or equal to zero, the parameters are ignored by the framework. + +To receive the list of load balancers sorted in expected order pass a load balancer property (e.g. `"name"`) in `sort` parameter. Prefix the sorting attribute with `-` sign for sorting in descending order. + +Use `query` parameter to search for a string in the response and return only the load balancer instances that contain it. + +To retrieve a collection of load balancers containing only the requested fields pass a list of comma separated properties (e.g. `"ip,name,method"`) in `fields` parameter. 
+ +If any of the parameters `sort`, `query` or `fields` is set to an empty string, it is ignored in the request. + +**Retrieve a single load balancer:** + +`loadbalancer, err := api.GetLoadBalancer(lb_id)` + + +**Create a load balancer:** + +``` +request := oneandone.LoadBalancerRequest { + Name: lb_name, + Description: lb_description, + Method: lb_method, + Persistence: oneandone.Bool2Pointer(true_or_false), + PersistenceTime: oneandone.Int2Pointer(seconds1), + HealthCheckTest: protocol1, + HealthCheckInterval: oneandone.Int2Pointer(seconds2), + HealthCheckPath: health_check_path, + HealthCheckPathParser: health_check_path_parser, + Rules: []oneandone.LoadBalancerRule { + { + Protocol: protocol1, + PortBalancer: lb_port, + PortServer: server_port, + Source: source_ip, + }, + }, + } + +loadbalancer_id, loadbalancer, err := api.CreateLoadBalancer(&request) +``` +Optional parameters are `HealthCheckPath`, `HealthCheckPathParser`, `Source` and `Description`. Load balancer `Method` must be set to `"ROUND_ROBIN"` or `"LEAST_CONNECTIONS"`. + +**Update a load balancer:** +``` +request := oneandone.LoadBalancerRequest { + Name: new_name, + Description: new_description, + Persistence: oneandone.Bool2Pointer(true_or_false), + PersistenceTime: oneandone.Int2Pointer(new_seconds1), + HealthCheckTest: new_protocol, + HealthCheckInterval: oneandone.Int2Pointer(new_seconds2), + HealthCheckPath: new_path, + HealthCheckPathParser: new_parser, + Method: new_lb_method, + } + +loadbalancer, err := api.UpdateLoadBalancer(lb_id, &request) +``` +All updatable fields are optional. 
+ + +**Delete a load balancer:** + +`loadbalancer, err := api.DeleteLoadBalancer(lb_id)` + + +**List servers/IPs attached to a load balancer:** + +`server_ips, err := api.ListLoadBalancerServerIps(lb_id)` + + +**Retrieve information about a server/IP assigned to a load balancer:** + +`server_ip, err := api.GetLoadBalancerServerIp(lb_id, ip_id)` + + +**Add servers/IPs to a load balancer:** + +`loadbalancer, err := api.AddLoadBalancerServerIps(lb_id, ip_ids)` + +`ip_ids` is a slice of IP ID's. + + +**Remove a server/IP from a load balancer:** + +`loadbalancer, err := api.DeleteLoadBalancerServerIp(lb_id, ip_id)` + + +**List rules of a load balancer:** + +`lb_rules, err := api.ListLoadBalancerRules(lb_id)` + + +**Retrieve information about a rule of a load balancer:** + +`lb_rule, err := api.GetLoadBalancerRule(lb_id, rule_id)` + + +**Adds new rules to a load balancer:** + +``` +lb_rules := []oneandone.LoadBalancerRule { + { + Protocol: protocol1, + PortBalancer: lb_port1, + PortServer: server_port1, + Source: source_ip, + }, + { + Protocol: protocol2, + PortBalancer: lb_port2, + PortServer: server_port2, + }, + } + +loadbalancer, err := api.AddLoadBalancerRules(lb_id, lb_rules) +``` + +**Remove a rule from a load balancer:** + +`loadbalancer, err := api.DeleteLoadBalancerRule(lb_id, rule_id)` + + +### Public IPs + +**Retrieve a list of your public IPs:** + +`public_ips, err := api.ListPublicIps()` + +Alternatively, use the method with query parameters. + +`public_ips, err := api.ListPublicIps(page, per_page, sort, query, fields)` + +To paginate the list of public IPs received in the response use `page` and `per_page` parameters. Set `per_page` to the number of public IPs that will be shown in each page. `page` indicates the current page. When set to an integer value that is less or equal to zero, the parameters are ignored by the framework. + +To receive the list of public IPs sorted in expected order pass a public IP property (e.g. `"ip"`) in `sort` parameter. 
Prefix the sorting attribute with `-` sign for sorting in descending order. + +Use `query` parameter to search for a string in the response and return only the public IP instances that contain it. + +To retrieve a collection of public IPs containing only the requested fields pass a list of comma separated properties (e.g. `"id,ip,reverse_dns"`) in `fields` parameter. + +If any of the parameters `sort`, `query` or `fields` is set to an empty string, it is ignored in the request. + + +**Retrieve a single public IP:** + +`public_ip, err := api.GetPublicIp(ip_id)` + + +**Create a public IP:** + +`ip_id, public_ip, err := api.CreatePublicIp(ip_type, reverse_dns)` + +Both parameters are optional and may be left blank. `ip_type` may be set to `"IPV4"` or `"IPV6"`. Presently, only IPV4 is supported. + +**Update the reverse DNS of a public IP:** + +`public_ip, err := api.UpdatePublicIp(ip_id, reverse_dns)` + +If an empty string is passed in `reverse_dns,` it removes previous reverse dns of the public IP. + +**Remove a public IP:** + +`public_ip, err := api.DeletePublicIp(ip_id)` + + +### Private Networks + +**List all private networks:** + +`private_nets, err := api.ListPrivateNetworks()` + +Alternatively, use the method with query parameters. + +`private_nets, err := api.ListPrivateNetworks(page, per_page, sort, query, fields)` + +To paginate the list of private networks received in the response use `page` and `per_page` parameters. Set `per_page` to the number of private networks that will be shown in each page. `page` indicates the current page. When set to an integer value that is less or equal to zero, the parameters are ignored by the framework. + +To receive the list of private networks sorted in expected order pass a private network property (e.g. `"-creation_date"`) in `sort` parameter. Prefix the sorting attribute with `-` sign for sorting in descending order. 
+ +Use `query` parameter to search for a string in the response and return only the private network instances that contain it. + +To retrieve a collection of private networks containing only the requested fields pass a list of comma separated properties (e.g. `"id,name,creation_date"`) in `fields` parameter. + +If any of the parameters `sort`, `query` or `fields` is blank, it is ignored in the request. + +**Retrieve information about a private network:** + +`private_net, err := api.GetPrivateNetwork(pn_id)` + +**Create a new private network:** + +``` +request := oneandone.PrivateNetworkRequest { + Name: pn_name, + Description: pn_description, + NetworkAddress: network_address, + SubnetMask: subnet_mask, + } + +pnet_id, private_net, err := api.CreatePrivateNetwork(&request) +``` +Private network `Name` is a required parameter. + + +**Modify a private network:** + +``` +request := oneandone.PrivateNetworkRequest { + Name: new_pn_name, + Description: new_pn_description, + NetworkAddress: new_network_address, + SubnetMask: new_subnet_mask, + } + +private_net, err := api.UpdatePrivateNetwork(pn_id, &request) +``` +All parameters in the request are optional. + + +**Delete a private network:** + +`private_net, err := api.DeletePrivateNetwork(pn_id)` + + +**List all servers attached to a private network:** + +`servers, err := api.ListPrivateNetworkServers(pn_id)` + + +**Retrieve a server attached to a private network:** + +`server, err := api.GetPrivateNetworkServer(pn_id, server_id)` + + +**Attach servers to a private network:** + +`private_net, err := api.AttachPrivateNetworkServers(pn_id, server_ids)` + +`server_ids` is a slice of server IDs. + +*Note:* Servers cannot be attached to a private network if they currently have a snapshot.
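`NetworkAddress` and `SubnetMask` together describe the private network's address range (e.g. `192.168.1.0` with `255.255.255.0`). Go's standard `net` package can derive both fields from CIDR notation, which can be handy when building the request; a sketch:

```go
package main

import (
	"fmt"
	"net"
)

// cidrToAddressAndMask splits CIDR notation into the network address and
// dotted-decimal subnet mask strings a PrivateNetworkRequest expects.
func cidrToAddressAndMask(cidr string) (string, string, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return "", "", err
	}
	// An IPv4 mask converted to net.IP renders as dotted decimal.
	mask := net.IP(ipnet.Mask)
	return ipnet.IP.String(), mask.String(), nil
}

func main() {
	addr, mask, err := cidrToAddressAndMask("192.168.1.0/24")
	fmt.Println(addr, mask, err) // 192.168.1.0 255.255.255.0 <nil>
}
```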
+ + +**Remove a server from a private network:** + +`private_net, err := api.DetachPrivateNetworkServer(pn_id, server_id)` + +*Note:* The server cannot be removed from a private network if it currently has a snapshot or it is powered on. + + +### VPNs + +**List all VPNs:** + +`vpns, err := api.ListVPNs()` + +Alternatively, use the method with query parameters. + +`vpns, err := api.ListVPNs(page, per_page, sort, query, fields)` + +To paginate the list of VPNs received in the response use `page` and `per_page` parameters. Set ` per_page` to the number of VPNs that will be shown in each page. `page` indicates the current page. When set to an integer value that is less or equal to zero, the parameters are ignored by the framework. + +To receive the list of VPNs sorted in expected order pass a VPN property (e.g. `"name"`) in `sort` parameter. Prefix the sorting attribute with `-` sign for sorting in descending order. + +Use `query` parameter to search for a string in the response and return only the VPN instances that contain it. + +To retrieve a collection of VPNs containing only the requested fields pass a list of comma separated properties (e.g. `"id,name,creation_date"`) in `fields` parameter. + +If any of the parameters `sort`, `query` or `fields` is set to an empty string, it is ignored in the request. + +**Retrieve information about a VPN:** + +`vpn, err := api.GetVPN(vpn_id)` + +**Create a VPN:** + +`vpn, err := api.CreateVPN(vpn_name, vpn_description, datacenter_id)` + +**Modify a VPN:** + +`vpn, err := api.ModifyVPN(vpn_id, new_name, new_description)` + +**Delete a VPN:** + +`vpn, err := api.DeleteVPN(vpn_id)` + +**Retrieve a VPN's configuration file:** + +`base64_encoded_string, err := api.GetVPNConfigFile(vpn_id)` + + +### Monitoring Center + +**List all usages and alerts of monitoring servers:** + +`server_usages, err := api.ListMonitoringServersUsages()` + +Alternatively, use the method with query parameters. 
+ +`server_usages, err := api.ListMonitoringServersUsages(page, per_page, sort, query, fields)` + +To paginate the list of server usages received in the response use `page` and `per_page` parameters. Set `per_page` to the number of server usages that will be shown in each page. `page` indicates the current page. When set to an integer value that is less than or equal to zero, the parameters are ignored by the framework. + +To receive the list of server usages sorted in expected order pass a server usage property (e.g. `"name"`) in `sort` parameter. Prefix the sorting attribute with `-` sign for sorting in descending order. + +Use `query` parameter to search for a string in the response and return only the usage instances that contain it. + +To retrieve a collection of server usages containing only the requested fields pass a list of comma separated properties (e.g. `"id,name,status.state"`) in `fields` parameter. + +If any of the parameters `sort`, `query` or `fields` is blank, it is ignored in the request. + +**Retrieve the usages and alerts for a monitoring server:** + +`server_usage, err := api.GetMonitoringServerUsage(server_id, period)` + +`period` may be set to `"LAST_HOUR"`, `"LAST_24H"`, `"LAST_7D"`, `"LAST_30D"`, `"LAST_365D"` or `"CUSTOM"`. If `period` is set to `"CUSTOM"`, the `start_date` and `end_date` parameters are required to be set in **RFC 3339** date/time format (e.g. `2015-12-13T00:01:00Z`). + +`server_usage, err := api.GetMonitoringServerUsage(server_id, period, start_date, end_date)` + +### Monitoring Policies + +**List all monitoring policies:** + +`mon_policies, err := api.ListMonitoringPolicies()` + +Alternatively, use the method with query parameters. + +`mon_policies, err := api.ListMonitoringPolicies(page, per_page, sort, query, fields)` + +To paginate the list of monitoring policies received in the response use `page` and `per_page` parameters. Set `per_page` to the number of monitoring policies that will be shown in each page.
`page` indicates the current page. When set to an integer value that is less than or equal to zero, the parameters are ignored by the framework. + +To receive the list of monitoring policies sorted in the expected order, pass a monitoring policy property (e.g. `"name"`) in the `sort` parameter. Prefix the sorting attribute with a `-` sign to sort in descending order. + +Use the `query` parameter to search for a string in the response and return only the monitoring policy instances that contain it. + +To retrieve a collection of monitoring policies containing only the requested fields, pass a list of comma-separated properties (e.g. `"id,name,creation_date"`) in the `fields` parameter. + +If any of the parameters `sort`, `query` or `fields` is set to an empty string, it is ignored in the request. + +**Retrieve a single monitoring policy:** + +`mon_policy, err := api.GetMonitoringPolicy(mp_id)` + + +**Create a monitoring policy:** + +``` +request := oneandone.MonitoringPolicy { + Name: mp_name, + Description: mp_desc, + Email: mp_mail, + Agent: true_or_false, + Thresholds: &oneandone.MonitoringThreshold { + Cpu: &oneandone.MonitoringLevel { + Warning: &oneandone.MonitoringValue { + Value: threshold_value, + Alert: true_or_false, + }, + Critical: &oneandone.MonitoringValue { + Value: threshold_value, + Alert: true_or_false, + }, + }, + Ram: &oneandone.MonitoringLevel { + Warning: &oneandone.MonitoringValue { + Value: threshold_value, + Alert: true_or_false, + }, + Critical: &oneandone.MonitoringValue { + Value: threshold_value, + Alert: true_or_false, + }, + }, + Disk: &oneandone.MonitoringLevel { + Warning: &oneandone.MonitoringValue { + Value: threshold_value, + Alert: true_or_false, + }, + Critical: &oneandone.MonitoringValue { + Value: threshold_value, + Alert: true_or_false, + }, + }, + Transfer: &oneandone.MonitoringLevel { + Warning: &oneandone.MonitoringValue { + Value: threshold_value, + Alert: true_or_false, + }, + Critical: &oneandone.MonitoringValue { + Value: threshold_value, 
+ Alert: true_or_false, + }, + }, + InternalPing: &oneandone.MonitoringLevel { + Warning: &oneandone.MonitoringValue { + Value: threshold_value, + Alert: true_or_false, + }, + Critical: &oneandone.MonitoringValue { + Value: threshold_value, + Alert: true_or_false, + }, + }, + }, + Ports: []oneandone.MonitoringPort { + { + Protocol: protocol, + Port: port, + AlertIf: responding_or_not_responding, + EmailNotification: true_or_false, + }, + }, + Processes: []oneandone.MonitoringProcess { + { + Process: process_name, + AlertIf: running_or_not_running, + EmailNotification: true_or_false, + }, + }, + } + +mpolicy_id, mon_policy, err := api.CreateMonitoringPolicy(&request) +``` +All fields, except `Description`, are required. `AlertIf` property accepts values `"RESPONDING"`/`"NOT_RESPONDING"` for ports, and `"RUNNING"`/`"NOT_RUNNING"` for processes. + + +**Update a monitoring policy:** + +``` +request := oneandone.MonitoringPolicy { + Name: new_mp_name, + Description: new_mp_desc, + Email: new_mp_mail, + Thresholds: &oneandone.MonitoringThreshold { + Cpu: &oneandone.MonitoringLevel { + Warning: &oneandone.MonitoringValue { + Value: new_threshold_value, + Alert: true_or_false, + }, + Critical: &oneandone.MonitoringValue { + Value: new_threshold_value, + Alert: true_or_false, + }, + }, + Ram: &oneandone.MonitoringLevel { + Warning: &oneandone.MonitoringValue { + Value: new_threshold_value, + Alert: true_or_false, + }, + Critical: &oneandone.MonitoringValue { + Value: new_threshold_value, + Alert: true_or_false, + }, + }, + Disk: &oneandone.MonitoringLevel { + Warning: &oneandone.MonitoringValue { + Value: new_threshold_value, + Alert: true_or_false, + }, + Critical: &oneandone.MonitoringValue { + Value: new_threshold_value, + Alert: true_or_false, + }, + }, + Transfer: &oneandone.MonitoringLevel { + Warning: &oneandone.MonitoringValue { + Value: new_threshold_value, + Alert: true_or_false, + }, + Critical: &oneandone.MonitoringValue { + Value: new_threshold_value, + Alert: 
true_or_false, + }, + }, + InternalPing: &oneandone.MonitoringLevel { + Warning: &oneandone.MonitoringValue { + Value: new_threshold_value, + Alert: true_or_false, + }, + Critical: &oneandone.MonitoringValue { + Value: new_threshold_value, + Alert: true_or_false, + }, + }, + }, + } + +mon_policy, err := api.UpdateMonitoringPolicy(mp_id, &request) +``` +All fields of the request are optional. When a threshold is specified in the request, the threshold fields are required. + +**Delete a monitoring policy:** + +`mon_policy, err := api.DeleteMonitoringPolicy(mp_id)` + + +**List all ports of a monitoring policy:** + +`mp_ports, err := api.ListMonitoringPolicyPorts(mp_id)` + + +**Retrieve information about a port of a monitoring policy:** + +`mp_port, err := api.GetMonitoringPolicyPort(mp_id, port_id)` + + +**Add new ports to a monitoring policy:** + +``` +mp_ports := []oneandone.MonitoringPort { + { + Protocol: protocol1, + Port: port1, + AlertIf: responding_or_not_responding, + EmailNotification: true_or_false, + }, + { + Protocol: protocol2, + Port: port2, + AlertIf: responding_or_not_responding, + EmailNotification: true_or_false, + }, + } + +mon_policy, err := api.AddMonitoringPolicyPorts(mp_id, mp_ports) +``` +Port properties are mandatory. + + +**Modify a port of a monitoring policy:** + +``` +mp_port := oneandone.MonitoringPort { + Protocol: protocol, + Port: port, + AlertIf: responding_or_not_responding, + EmailNotification: true_or_false, + } + +mon_policy, err := api.ModifyMonitoringPolicyPort(mp_id, port_id, &mp_port) +``` +*Note:* `Protocol` and `Port` cannot be changed. 
+ + +**Remove a port from a monitoring policy:** + +`mon_policy, err := api.DeleteMonitoringPolicyPort(mp_id, port_id)` + + +**List the processes of a monitoring policy:** + +`mp_processes, err := api.ListMonitoringPolicyProcesses(mp_id)` + + +**Retrieve information about a process of a monitoring policy:** + +`mp_process, err := api.GetMonitoringPolicyProcess(mp_id, process_id)` + + +**Add new processes to a monitoring policy:** + +``` +processes := []oneandone.MonitoringProcess { + { + Process: process_name1, + AlertIf: running_or_not_running, + EmailNotification: true_or_false, + }, + { + Process: process_name2, + AlertIf: running_or_not_running, + EmailNotification: true_or_false, + }, + } + +mon_policy, err := api.AddMonitoringPolicyProcesses(mp_id, processes) +``` +All properties of the `MonitoringProcess` instance are required. + + +**Modify a process of a monitoring policy:** + +``` +process := oneandone.MonitoringProcess { + Process: process_name, + AlertIf: running_or_not_running, + EmailNotification: true_or_false, + } + +mon_policy, err := api.ModifyMonitoringPolicyProcess(mp_id, process_id, &process) +``` + +*Note:* The process name cannot be changed. + +**Remove a process from a monitoring policy:** + +`mon_policy, err := api.DeleteMonitoringPolicyProcess(mp_id, process_id)` + +**List all servers attached to a monitoring policy:** + +`mp_servers, err := api.ListMonitoringPolicyServers(mp_id)` + +**Retrieve information about a server attached to a monitoring policy:** + +`mp_server, err := api.GetMonitoringPolicyServer(mp_id, server_id)` + +**Attach servers to a monitoring policy:** + +`mon_policy, err := api.AttachMonitoringPolicyServers(mp_id, server_ids)` + +`server_ids` is a slice of server IDs. 
+ +**Remove a server from a monitoring policy:** + +`mon_policy, err := api.RemoveMonitoringPolicyServer(mp_id, server_id)` + + +### Logs + +**List all logs:** + +`logs, err := api.ListLogs(period, nil, nil)` + +`period` can be set to `"LAST_HOUR"`, `"LAST_24H"`, `"LAST_7D"`, `"LAST_30D"`, `"LAST_365D"` or `"CUSTOM"`. If `period` is set to `"CUSTOM"`, the `start_date` and `end_date` parameters are required to be set in **RFC 3339** date/time format (e.g. `2015-03-12T00:01:00Z`). + +`logs, err := api.ListLogs(period, start_date, end_date)` + +Additional query parameters can be used. + +`logs, err := api.ListLogs(period, start_date, end_date, page, per_page, sort, query, fields)` + +To paginate the list of logs received in the response, use the `page` and `per_page` parameters. Set `per_page` to the number of logs that will be shown on each page. `page` indicates the current page. When set to an integer value that is less than or equal to zero, the parameters are ignored by the framework. + +To receive the list of logs sorted in the expected order, pass a log property (e.g. `"action"`) in the `sort` parameter. Prefix the sorting attribute with a `-` sign to sort in descending order. + +Use the `query` parameter to search for a string in the response and return only the log instances that contain it. + +To retrieve a collection of logs containing only the requested fields, pass a list of comma-separated properties (e.g. `"id,action,type"`) in the `fields` parameter. + +If any of the parameters `sort`, `query` or `fields` is set to an empty string, it is ignored in the request. + +**Retrieve a single log:** + +`log, err := api.GetLog(log_id)` + + +### Users + +**List all users:** + +`users, err := api.ListUsers()` + +Alternatively, use the method with query parameters. + +`users, err := api.ListUsers(page, per_page, sort, query, fields)` + +To paginate the list of users received in the response, use the `page` and `per_page` parameters. 
Set `per_page` to the number of users that will be shown on each page. `page` indicates the current page. When set to an integer value that is less than or equal to zero, the parameters are ignored by the framework. + +To receive the list of users sorted in the expected order, pass a user property (e.g. `"name"`) in the `sort` parameter. Prefix the sorting attribute with a `-` sign to sort in descending order. + +Use the `query` parameter to search for a string in the response and return only the user instances that contain it. + +To retrieve a collection of users containing only the requested fields, pass a list of comma-separated properties (e.g. `"id,name,creation_date,email"`) in the `fields` parameter. + +If any of the parameters `sort`, `query` or `fields` is set to an empty string, it is ignored in the request. + +**Retrieve information about a user:** + +`user, err := api.GetUser(user_id)` + +**Create a user:** + +``` +request := oneandone.UserRequest { + Name: username, + Description: user_description, + Password: password, + Email: user_email, + } + +user_id, user, err := api.CreateUser(&request) +``` + +`Name` and `Password` are required parameters. The password must contain at least 8 characters, using uppercase letters, numbers, and special symbols. + +**Modify a user:** + +``` +request := oneandone.UserRequest { + Description: new_desc, + Email: new_mail, + Password: new_pass, + State: state, + } + +user, err := api.ModifyUser(user_id, &request) +``` + +All listed fields in the request are optional. `State` can be set to `"ACTIVE"` or `"DISABLED"`. 
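The password rule above is only described informally. A hypothetical client-side pre-check for the stated constraints (at least 8 characters, an uppercase letter, a digit, a special symbol) might look like the following; the API remains the authority on what it actually accepts:

```go
package main

import (
	"fmt"
	"unicode"
)

// validPassword is an illustrative pre-check for the password rule quoted
// above; it is not part of the SDK and may be stricter or looser than the
// server-side validation.
func validPassword(p string) bool {
	if len(p) < 8 {
		return false
	}
	var upper, digit, special bool
	for _, r := range p {
		switch {
		case unicode.IsUpper(r):
			upper = true
		case unicode.IsDigit(r):
			digit = true
		case !unicode.IsLetter(r) && !unicode.IsDigit(r):
			special = true
		}
	}
	return upper && digit && special
}

func main() {
	fmt.Println(validPassword("Secr3t!pass")) // true
	fmt.Println(validPassword("short"))       // false
}
```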
+ +**Delete a user:** + +`user, err := api.DeleteUser(user_id)` + +**Retrieve information about a user's API privileges:** + +`api_info, err := api.GetUserApi(user_id)` + +**Retrieve a user's API key:** + +`api_key, err := api.GetUserApiKey(user_id)` + +**List IPs from which API access is allowed for a user:** + +`allowed_ips, err := api.ListUserApiAllowedIps(user_id)` + +**Add new IPs to a user:** + +``` +user_ips := []string{ my_public_ip, "192.168.7.77", "10.81.12.101" } +user, err := api.AddUserApiAlowedIps(user_id, user_ips) +``` + +**Remove an IP and forbid API access from it:** + +`user, err := api.RemoveUserApiAllowedIp(user_id, ip)` + +**Modify a user's API privileges:** + +`user, err := api.ModifyUserApi(user_id, is_active)` + +**Renew a user's API key:** + +`user, err := api.RenewUserApiKey(user_id)` + +**Retrieve current user permissions:** + +`permissions, err := api.GetCurrentUserPermissions()` + + +### Roles + +**List all roles:** + +`roles, err := api.ListRoles()` + +Alternatively, use the method with query parameters. + +`roles, err := api.ListRoles(page, per_page, sort, query, fields)` + +To paginate the list of roles received in the response, use the `page` and `per_page` parameters. Set `per_page` to the number of roles that will be shown on each page. `page` indicates the current page. When set to an integer value that is less than or equal to zero, the parameters are ignored by the framework. + +To receive the list of roles sorted in the expected order, pass a role property (e.g. `"name"`) in the `sort` parameter. Prefix the sorting attribute with a `-` sign to sort in descending order. + +Use the `query` parameter to search for a string in the response and return only the role instances that contain it. + +To retrieve a collection of roles containing only the requested fields, pass a list of comma-separated properties (e.g. `"id,name,creation_date"`) in the `fields` parameter. 
+ +If any of the parameters `sort`, `query` or `fields` is set to an empty string, it is ignored in the request. + +**Retrieve information about a role:** + +`role, err := api.GetRole(role_id)` + +**Create a role:** + +`role_id, role, err := api.CreateRole(role_name)` + +**Clone a role:** + +`role, err := api.CloneRole(role_id, new_role_name)` + +**Modify a role:** + +`role, err := api.ModifyRole(role_id, new_name, new_description, new_state)` + +`ACTIVE` and `DISABLE` are valid values for the state. + +**Delete a role:** + +`role, err := api.DeleteRole(role_id)` + +**Retrieve information about a role's permissions:** + +`permissions, err := api.GetRolePermissions(role_id)` + +**Modify a role's permissions:** + +`role, err := api.ModifyRolePermissions(role_id, permissions)` + +**Assign users to a role:** + +`role, err := api.AssignRoleUsers(role_id, user_ids)` + +`user_ids` is a slice of user IDs. + +**List a role's users:** + +`users, err := api.ListRoleUsers(role_id)` + +**Retrieve information about a role's user:** + +`user, err := api.GetRoleUser(role_id, user_id)` + +**Remove a role's user:** + +`role, err := api.RemoveRoleUser(role_id, user_id)` + + +### Usages + +**List your usages:** + +`usages, err := api.ListUsages(period, nil, nil)` + +`period` can be set to `"LAST_HOUR"`, `"LAST_24H"`, `"LAST_7D"`, `"LAST_30D"`, `"LAST_365D"` or `"CUSTOM"`. If `period` is set to `"CUSTOM"`, the `start_date` and `end_date` parameters are required to be set in **RFC 3339** date/time format (e.g. `2015-03-12T00:01:00Z`). + +`usages, err := api.ListUsages(period, start_date, end_date)` + +Additional query parameters can be used. + +`usages, err := api.ListUsages(period, start_date, end_date, page, per_page, sort, query, fields)` + +To paginate the list of usages received in the response, use the `page` and `per_page` parameters. Set `per_page` to the number of usages that will be shown on each page. `page` indicates the current page. 
When set to an integer value that is less than or equal to zero, the parameters are ignored by the framework. + +To receive the list of usages sorted in the expected order, pass a usage property (e.g. `"name"`) in the `sort` parameter. Prefix the sorting attribute with a `-` sign to sort in descending order. + +Use the `query` parameter to search for a string in the response and return only the usage instances that contain it. + +To retrieve a collection of usages containing only the requested fields, pass a list of comma-separated properties (e.g. `"id,name"`) in the `fields` parameter. + +If any of the parameters `sort`, `query` or `fields` is set to an empty string, it is ignored in the request. + + +### Server Appliances + +**List all the appliances that you can use to create a server:** + +`server_appliances, err := api.ListServerAppliances()` + +Alternatively, use the method with query parameters. + +`server_appliances, err := api.ListServerAppliances(page, per_page, sort, query, fields)` + +To paginate the list of server appliances received in the response, use the `page` and `per_page` parameters. Set `per_page` to the number of server appliances that will be shown on each page. `page` indicates the current page. When set to an integer value that is less than or equal to zero, the parameters are ignored by the framework. + +To receive the list of server appliances sorted in the expected order, pass a server appliance property (e.g. `"os"`) in the `sort` parameter. Prefix the sorting attribute with a `-` sign to sort in descending order. + +Use the `query` parameter to search for a string in the response and return only the server appliance instances that contain it. + +To retrieve a collection of server appliances containing only the requested fields, pass a list of comma-separated properties (e.g. `"id,os,architecture"`) in the `fields` parameter. + +If any of the parameters `sort`, `query` or `fields` is set to an empty string, it is ignored in the request. 
+ +**Retrieve information about a specific appliance:** + +`server_appliance, err := api.GetServerAppliance(appliance_id)` + + +### DVD ISO + +**List all operating systems and tools that you can load into your virtual DVD unit:** + +`dvd_isos, err := api.ListDvdIsos()` + +Alternatively, use the method with query parameters. + +`dvd_isos, err := api.ListDvdIsos(page, per_page, sort, query, fields)` + +To paginate the list of ISO DVDs received in the response, use the `page` and `per_page` parameters. Set `per_page` to the number of ISO DVDs that will be shown on each page. `page` indicates the current page. When set to an integer value that is less than or equal to zero, the parameters are ignored by the framework. + +To receive the list of ISO DVDs sorted in the expected order, pass an ISO DVD property (e.g. `"type"`) in the `sort` parameter. Prefix the sorting attribute with a `-` sign to sort in descending order. + +Use the `query` parameter to search for a string in the response and return only the ISO DVD instances that contain it. + +To retrieve a collection of ISO DVDs containing only the requested fields, pass a list of comma-separated properties (e.g. `"id,name,type"`) in the `fields` parameter. + +If any of the parameters `sort`, `query` or `fields` is set to an empty string, it is ignored in the request. + +**Retrieve a specific ISO image:** + +`dvd_iso, err := api.GetDvdIso(dvd_id)` + + +### Ping + +**Check if the 1&1 REST API is running:** + +`response, err := api.Ping()` + +If the API is running, the response is a single-element slice `["PONG"]`. + +**Validate that the 1&1 REST API is running and the authorization token is valid:** + +`response, err := api.PingAuth()` + +The response should be a single-element slice `["PONG"]` if the API is running and the token is valid. 
+ + +### Pricing + +**Show prices for all available resources in the Cloud Panel:** + +`pricing, err := api.GetPricing()` + + +### Data Centers + +**List all 1&1 Cloud Server data centers:** + +`datacenters, err := api.ListDatacenters()` + +Here is another example of an alternative form of the list function that includes query parameters. + +`datacenters, err := api.ListDatacenters(0, 0, "country_code", "DE", "id,country_code")` + +**Retrieve a specific data center:** + +`datacenter, err := api.GetDatacenter(datacenter_id)` + + +## Examples + +```Go +package main + +import ( + "fmt" + "github.com/1and1/oneandone-cloudserver-sdk-go" + "time" +) + +func main() { + //Set an authentication token + token := oneandone.SetToken("82ee732b8d47e451be5c6ad5b7b56c81") + //Create an API client + api := oneandone.New(token, oneandone.BaseUrl) + + // List server appliances + saps, err := api.ListServerAppliances() + + var sa oneandone.ServerAppliance + for _, a := range saps { + if a.Type == "IMAGE" { + sa = a + } + } + + // Create a server + req := oneandone.ServerRequest{ + Name: "Example Server", + Description: "Example server description.", + ApplianceId: sa.Id, + PowerOn: true, + Hardware: oneandone.Hardware{ + Vcores: 1, + CoresPerProcessor: 1, + Ram: 2, + Hdds: []oneandone.Hdd { + oneandone.Hdd { + Size: sa.MinHddSize, + IsMain: true, + }, + }, + }, + } + + server_id, server, err := api.CreateServer(&req) + + if err == nil { + // Wait until server is created and powered on for at most 60 x 10 seconds + err = api.WaitForState(server, "POWERED_ON", 10, 60) + } + + // Get the server + server, err = api.GetServer(server_id) + + // Create a load balancer + lbr := oneandone.LoadBalancerRequest { + Name: "Load Balancer Example", + Description: "API created load balancer.", + Method: "ROUND_ROBIN", + Persistence: oneandone.Bool2Pointer(true), + PersistenceTime: oneandone.Int2Pointer(1200), + HealthCheckTest: "TCP", + HealthCheckInterval: oneandone.Int2Pointer(40), + Rules: 
[]oneandone.LoadBalancerRule { + { + Protocol: "TCP", + PortBalancer: 80, + PortServer: 80, + Source: "0.0.0.0", + }, + }, + } + + var lb *oneandone.LoadBalancer + var lb_id string + + lb_id, lb, err = api.CreateLoadBalancer(&lbr) + if err == nil { + api.WaitForState(lb, "ACTIVE", 10, 30) + } + + // Get the load balancer + lb, err = api.GetLoadBalancer(lb.Id) + + // Assign the load balancer to the server's IP + server, err = api.AssignServerIpLoadBalancer(server.Id, server.Ips[0].Id, lb_id) + + // Create a firewall policy + fpr := oneandone.FirewallPolicyRequest{ + Name: "Firewall Policy Example", + Description: "API created firewall policy.", + Rules: []oneandone.FirewallPolicyRule { + { + Protocol: "TCP", + PortFrom: oneandone.Int2Pointer(80), + PortTo: oneandone.Int2Pointer(80), + }, + }, + } + + var fp *oneandone.FirewallPolicy + var fp_id string + + fp_id, fp, err = api.CreateFirewallPolicy(&fpr) + if err == nil { + api.WaitForState(fp, "ACTIVE", 10, 30) + } + + // Get the firewall policy + fp, err = api.GetFirewallPolicy(fp_id) + + // Add the server's IPs to the firewall policy. 
+ ips := []string{ server.Ips[0].Id } + + fp, err = api.AddFirewallPolicyServerIps(fp.Id, ips) + if err == nil { + api.WaitForState(fp, "ACTIVE", 10, 60) + } + + // Shut down the server using the 'SOFTWARE' method + server, err = api.ShutdownServer(server.Id, false) + if err == nil { + err = api.WaitForState(server, "POWERED_OFF", 5, 20) + } + + // Delete the load balancer + lb, err = api.DeleteLoadBalancer(lb.Id) + if err == nil { + err = api.WaitUntilDeleted(lb) + } + + // Delete the firewall policy + fp, err = api.DeleteFirewallPolicy(fp.Id) + if err == nil { + err = api.WaitUntilDeleted(fp) + } + + // List usages in last 24h + var usages *oneandone.Usages + usages, err = api.ListUsages("LAST_24H", nil, nil) + + fmt.Println(usages.Servers) + + // List usages in last 5 hours + n := time.Now() + ed := time.Date(n.Year(), n.Month(), n.Day(), n.Hour(), n.Minute(), n.Second(), 0, time.UTC) + sd := ed.Add(-(time.Hour * 5)) + + usages, err = api.ListUsages("CUSTOM", &sd, &ed) + + // Create a shared storage + ssr := oneandone.SharedStorageRequest { + Name: "Shared Storage Example", + Description: "API allocated 100 GB disk.", + Size: oneandone.Int2Pointer(100), + } + + var ss *oneandone.SharedStorage + var ss_id string + + ss_id, ss, err = api.CreateSharedStorage(&ssr) + if err == nil { + api.WaitForState(ss, "ACTIVE", 10, 30) + } + + // List shared storages on page 1, 5 results per page and sort by 'name' field. + // Include only 'name', 'size' and 'minimum_size_allowed' fields in the result. 
+ var shs []oneandone.SharedStorage + shs, err = api.ListSharedStorages(1, 5, "name", "", "name,size,minimum_size_allowed") + + // List all shared storages that contain 'example' string + shs, err = api.ListSharedStorages(0, 0, "", "example", "") + + // Delete the shared storage + ss, err = api.DeleteSharedStorage(ss_id) + if err == nil { + err = api.WaitUntilDeleted(ss) + } + + // Delete the server + server, err = api.DeleteServer(server.Id, false) + if err == nil { + err = api.WaitUntilDeleted(server) + } +} + +``` +The next example illustrates how to create a `TYPO3` application server of a fixed size with an initial password and a firewall policy that has just been created. + +```Go +package main + +import "github.com/1and1/oneandone-cloudserver-sdk-go" + +func main() { + token := oneandone.SetToken("bde36026df9d548f699ea97e75a7e87f") + client := oneandone.New(token, oneandone.BaseUrl) + + // Create a new firewall policy + fpr := oneandone.FirewallPolicyRequest{ + Name: "HTTPS Traffic Policy", + Rules: []oneandone.FirewallPolicyRule{ + { + Protocol: "TCP", + PortFrom: oneandone.Int2Pointer(443), + PortTo: oneandone.Int2Pointer(443), + }, + }, + } + + _, fp, err := client.CreateFirewallPolicy(&fpr) + if fp != nil && err == nil { + client.WaitForState(fp, "ACTIVE", 5, 60) + + // Look for the TYPO3 application appliance + saps, _ := client.ListServerAppliances(0, 0, "", "typo3", "") + + var sa oneandone.ServerAppliance + for _, a := range saps { + if a.Type == "APPLICATION" { + sa = a + break + } + } + + var fixed_flavours []oneandone.FixedInstanceInfo + var fixed_size_id string + + fixed_flavours, err = client.ListFixedInstanceSizes() + for _, fl := range fixed_flavours { + //look for 'M' size + if fl.Name == "M" { + fixed_size_id = fl.Id + break + } + } + + req := oneandone.ServerRequest{ + Name: "TYPO3 Server", + ApplianceId: sa.Id, + PowerOn: true, + Password: "ucr_kXW8,.2SdMU", + Hardware: oneandone.Hardware{ + FixedInsSizeId: fixed_size_id, + }, + 
FirewallPolicyId: fp.Id, + } + _, server, _ := client.CreateServer(&req) + if server != nil { + client.WaitForState(server, "POWERED_ON", 10, 90) + } + } +} +``` + + +## Index + +```Go +func New(token string, url string) *API +``` + +```Go +func (api *API) AddFirewallPolicyRules(fp_id string, fp_rules []FirewallPolicyRule) (*FirewallPolicy, error) +``` + +```Go +func (api *API) AddFirewallPolicyServerIps(fp_id string, ip_ids []string) (*FirewallPolicy, error) +``` + +```Go +func (api *API) AddLoadBalancerRules(lb_id string, lb_rules []LoadBalancerRule) (*LoadBalancer, error) +``` + +```Go +func (api *API) AddLoadBalancerServerIps(lb_id string, ip_ids []string) (*LoadBalancer, error) +``` + +```Go +func (api *API) AddMonitoringPolicyPorts(mp_id string, mp_ports []MonitoringPort) (*MonitoringPolicy, error) +``` + +```Go +func (api *API) AddMonitoringPolicyProcesses(mp_id string, mp_procs []MonitoringProcess) (*MonitoringPolicy, error) +``` + +```Go +func (api *API) AddServerHdds(server_id string, hdds *ServerHdds) (*Server, error) +``` + +```Go +func (api *API) AddSharedStorageServers(st_id string, servers []SharedStorageServer) (*SharedStorage, error) +``` + +```Go +func (api *API) AddUserApiAlowedIps(user_id string, ips []string) (*User, error) +``` + +```Go +func (api *API) AssignRoleUsers(role_id string, user_ids []string) (*Role, error) +``` + +```Go +func (api *API) AssignServerIp(server_id string, ip_type string) (*Server, error) +``` + +```Go +func (api *API) AssignServerIpFirewallPolicy(server_id string, ip_id string, fp_id string) (*Server, error) +``` + +```Go +func (api *API) AssignServerIpLoadBalancer(server_id string, ip_id string, lb_id string) (*Server, error) +``` + +```Go +func (api *API) AssignServerPrivateNetwork(server_id string, pn_id string) (*Server, error) +``` + +```Go +func (api *API) AttachMonitoringPolicyServers(mp_id string, sids []string) (*MonitoringPolicy, error) +``` + +```Go +func (api *API) AttachPrivateNetworkServers(pn_id string, 
sids []string) (*PrivateNetwork, error) +``` + +```Go +func (api *API) CloneRole(role_id string, name string) (*Role, error) +``` + +```Go +func (api *API) CloneServer(server_id string, new_name string, datacenter_id string) (*Server, error) +``` + +```Go +func (api *API) CreateFirewallPolicy(fp_data *FirewallPolicyRequest) (string, *FirewallPolicy, error) +``` + +```Go +func (api *API) CreateImage(request *ImageConfig) (string, *Image, error) +``` + +```Go +func (api *API) CreateLoadBalancer(request *LoadBalancerRequest) (string, *LoadBalancer, error) +``` + +```Go +func (api *API) CreateMonitoringPolicy(mp *MonitoringPolicy) (string, *MonitoringPolicy, error) +``` + +```Go +func (api *API) CreatePrivateNetwork(request *PrivateNetworkRequest) (string, *PrivateNetwork, error) +``` + +```Go +func (api *API) CreatePublicIp(ip_type string, reverse_dns string, datacenter_id string) (string, *PublicIp, error) +``` + +```Go +func (api *API) CreateRole(name string) (string, *Role, error) +``` + +```Go +func (api *API) CreateServer(request *ServerRequest) (string, *Server, error) +``` + +```Go +func (api *API) CreateServerEx(request *ServerRequest, timeout int) (string, string, error) +``` + +```Go +func (api *API) CreateServerSnapshot(server_id string) (*Server, error) +``` + +```Go +func (api *API) CreateSharedStorage(request *SharedStorageRequest) (string, *SharedStorage, error) +``` + +```Go +func (api *API) CreateUser(user *UserRequest) (string, *User, error) +``` + +```Go +func (api *API) CreateVPN(name string, description string, datacenter_id string) (string, *VPN, error) +``` + +```Go +func (api *API) DeleteFirewallPolicy(fp_id string) (*FirewallPolicy, error) +``` + +```Go +func (api *API) DeleteFirewallPolicyRule(fp_id string, rule_id string) (*FirewallPolicy, error) +``` + +```Go +func (api *API) DeleteFirewallPolicyServerIp(fp_id string, ip_id string) (*FirewallPolicy, error) +``` + +```Go +func (api *API) DeleteImage(img_id string) (*Image, error) +``` + 
+```Go +func (api *API) DeleteLoadBalancer(lb_id string) (*LoadBalancer, error) +``` + +```Go +func (api *API) DeleteLoadBalancerRule(lb_id string, rule_id string) (*LoadBalancer, error) +``` + +```Go +func (api *API) DeleteLoadBalancerServerIp(lb_id string, ip_id string) (*LoadBalancer, error) +``` + +```Go +func (api *API) DeleteMonitoringPolicy(mp_id string) (*MonitoringPolicy, error) +``` + +```Go +func (api *API) DeleteMonitoringPolicyPort(mp_id string, port_id string) (*MonitoringPolicy, error) +``` + +```Go +func (api *API) DeleteMonitoringPolicyProcess(mp_id string, proc_id string) (*MonitoringPolicy, error) +``` + +```Go +func (api *API) DeletePrivateNetwork(pn_id string) (*PrivateNetwork, error) +``` + +```Go +func (api *API) DeletePublicIp(ip_id string) (*PublicIp, error) +``` + +```Go +func (api *API) DeleteRole(role_id string) (*Role, error) +``` + +```Go +func (api *API) DeleteServer(server_id string, keep_ips bool) (*Server, error) +``` + +```Go +func (api *API) DeleteServerHdd(server_id string, hdd_id string) (*Server, error) +``` + +```Go +func (api *API) DeleteServerIp(server_id string, ip_id string, keep_ip bool) (*Server, error) +``` + +```Go +func (api *API) DeleteServerSnapshot(server_id string, snapshot_id string) (*Server, error) +``` + +```Go +func (api *API) DeleteSharedStorage(ss_id string) (*SharedStorage, error) +``` + +```Go +func (api *API) DeleteSharedStorageServer(st_id string, ser_id string) (*SharedStorage, error) +``` + +```Go +func (api *API) DeleteUser(user_id string) (*User, error) +``` + +```Go +func (api *API) DeleteVPN(vpn_id string) (*VPN, error) +``` + +```Go +func (api *API) DetachPrivateNetworkServer(pn_id string, pns_id string) (*PrivateNetwork, error) +``` + +```Go +func (api *API) EjectServerDvd(server_id string) (*Server, error) +``` + +```Go +func (api *API) GetCurrentUserPermissions() (*Permissions, error) +``` + +```Go +func (api *API) GetDatacenter(dc_id string) (*Datacenter, error) +``` + +```Go +func (api 
*API) GetDvdIso(dvd_id string) (*DvdIso, error) +``` + +```Go +func (api *API) GetFirewallPolicy(fp_id string) (*FirewallPolicy, error) +``` + +```Go +func (api *API) GetFirewallPolicyRule(fp_id string, rule_id string) (*FirewallPolicyRule, error) +``` + +```Go +func (api *API) GetFirewallPolicyServerIp(fp_id string, ip_id string) (*ServerIpInfo, error) +``` + +```Go +func (api *API) GetFixedInstanceSize(fis_id string) (*FixedInstanceInfo, error) +``` + +```Go +func (api *API) GetImage(img_id string) (*Image, error) +``` + +```Go +func (api *API) GetLoadBalancer(lb_id string) (*LoadBalancer, error) +``` + +```Go +func (api *API) GetLoadBalancerRule(lb_id string, rule_id string) (*LoadBalancerRule, error) +``` + +```Go +func (api *API) GetLoadBalancerServerIp(lb_id string, ip_id string) (*ServerIpInfo, error) +``` + +```Go +func (api *API) GetLog(log_id string) (*Log, error) +``` + +```Go +func (api *API) GetMonitoringPolicy(mp_id string) (*MonitoringPolicy, error) +``` + +```Go +func (api *API) GetMonitoringPolicyPort(mp_id string, port_id string) (*MonitoringPort, error) +``` + +```Go +func (api *API) GetMonitoringPolicyProcess(mp_id string, proc_id string) (*MonitoringProcess, error) +``` + +```Go +func (api *API) GetMonitoringPolicyServer(mp_id string, ser_id string) (*Identity, error) +``` + +```Go +func (api *API) GetMonitoringServerUsage(ser_id string, period string, dates ...time.Time) (*MonServerUsageDetails, error) +``` + +```Go +func (api *API) GetPricing() (*Pricing, error) +``` + +```Go +func (api *API) GetPrivateNetwork(pn_id string) (*PrivateNetwork, error) +``` + +```Go +func (api *API) GetPrivateNetworkServer(pn_id string, server_id string) (*Identity, error) +``` + +```Go +func (api *API) GetPublicIp(ip_id string) (*PublicIp, error) +``` + +```Go +func (api *API) GetRole(role_id string) (*Role, error) +``` + +```Go +func (api *API) GetRolePermissions(role_id string) (*Permissions, error) +``` + +```Go +func (api *API) GetRoleUser(role_id string, 
user_id string) (*Identity, error) +``` + +```Go +func (api *API) GetServer(server_id string) (*Server, error) +``` + +```Go +func (api *API) GetServerAppliance(sa_id string) (*ServerAppliance, error) +``` + +```Go +func (api *API) GetServerDvd(server_id string) (*Identity, error) +``` + +```Go +func (api *API) GetServerHardware(server_id string) (*Hardware, error) +``` + +```Go +func (api *API) GetServerHdd(server_id string, hdd_id string) (*Hdd, error) +``` + +```Go +func (api *API) GetServerImage(server_id string) (*Identity, error) +``` + +```Go +func (api *API) GetServerIp(server_id string, ip_id string) (*ServerIp, error) +``` + +```Go +func (api *API) GetServerIpFirewallPolicy(server_id string, ip_id string) (*Identity, error) +``` + +```Go +func (api *API) GetServerPrivateNetwork(server_id string, pn_id string) (*PrivateNetwork, error) +``` + +```Go +func (api *API) GetServerSnapshot(server_id string) (*ServerSnapshot, error) +``` + +```Go +func (api *API) GetServerStatus(server_id string) (*Status, error) +``` + +```Go +func (api *API) GetSharedStorage(ss_id string) (*SharedStorage, error) +``` + +```Go +func (api *API) GetSharedStorageCredentials() ([]SharedStorageAccess, error) +``` + +```Go +func (api *API) GetSharedStorageServer(st_id string, ser_id string) (*SharedStorageServer, error) +``` + +```Go +func (api *API) GetUser(user_id string) (*User, error) +``` + +```Go +func (api *API) GetUserApi(user_id string) (*UserApi, error) +``` + +```Go +func (api *API) GetUserApiKey(user_id string) (*UserApiKey, error) +``` + +```Go +func (api *API) GetVPN(vpn_id string) (*VPN, error) +``` + +```Go +func (api *API) GetVPNConfigFile(vpn_id string) (string, error) +``` + +```Go +func (api *API) ListDatacenters(args ...interface{}) ([]Datacenter, error) +``` + +```Go +func (api *API) ListDvdIsos(args ...interface{}) ([]DvdIso, error) +``` + +```Go +func (api *API) ListFirewallPolicies(args ...interface{}) ([]FirewallPolicy, error) +``` + +```Go +func (api *API) 
ListFirewallPolicyRules(fp_id string) ([]FirewallPolicyRule, error) +``` + +```Go +func (api *API) ListFirewallPolicyServerIps(fp_id string) ([]ServerIpInfo, error) +``` + +```Go +func (api *API) ListFixedInstanceSizes() ([]FixedInstanceInfo, error) +``` + +```Go +func (api *API) ListImages(args ...interface{}) ([]Image, error) +``` + +```Go +func (api *API) ListLoadBalancerRules(lb_id string) ([]LoadBalancerRule, error) +``` + +```Go +func (api *API) ListLoadBalancerServerIps(lb_id string) ([]ServerIpInfo, error) +``` + +```Go +func (api *API) ListLoadBalancers(args ...interface{}) ([]LoadBalancer, error) +``` + +```Go +func (api *API) ListLogs(period string, sd *time.Time, ed *time.Time, args ...interface{}) ([]Log, error) +``` + +```Go +func (api *API) ListMonitoringPolicies(args ...interface{}) ([]MonitoringPolicy, error) +``` + +```Go +func (api *API) ListMonitoringPolicyPorts(mp_id string) ([]MonitoringPort, error) +``` + +```Go +func (api *API) ListMonitoringPolicyProcesses(mp_id string) ([]MonitoringProcess, error) +``` + +```Go +func (api *API) ListMonitoringPolicyServers(mp_id string) ([]Identity, error) +``` + +```Go +func (api *API) ListMonitoringServersUsages(args ...interface{}) ([]MonServerUsageSummary, error) +``` + +```Go +func (api *API) ListPrivateNetworkServers(pn_id string) ([]Identity, error) +``` + +```Go +func (api *API) ListPrivateNetworks(args ...interface{}) ([]PrivateNetwork, error) +``` + +```Go +func (api *API) ListPublicIps(args ...interface{}) ([]PublicIp, error) +``` + +```Go +func (api *API) ListRoleUsers(role_id string) ([]Identity, error) +``` + +```Go +func (api *API) ListRoles(args ...interface{}) ([]Role, error) +``` + +```Go +func (api *API) ListServerAppliances(args ...interface{}) ([]ServerAppliance, error) +``` + +```Go +func (api *API) ListServerHdds(server_id string) ([]Hdd, error) +``` + +```Go +func (api *API) ListServerIpLoadBalancers(server_id string, ip_id string) ([]Identity, error) +``` + +```Go +func (api *API) 
ListServerIps(server_id string) ([]ServerIp, error) +``` + +```Go +func (api *API) ListServerPrivateNetworks(server_id string) ([]Identity, error) +``` + +```Go +func (api *API) ListServers(args ...interface{}) ([]Server, error) +``` + +```Go +func (api *API) ListSharedStorageServers(st_id string) ([]SharedStorageServer, error) +``` + +```Go +func (api *API) ListSharedStorages(args ...interface{}) ([]SharedStorage, error) +``` + +```Go +func (api *API) ListUsages(period string, sd *time.Time, ed *time.Time, args ...interface{}) (*Usages, error) +``` + +```Go +func (api *API) ListUserApiAllowedIps(user_id string) ([]string, error) +``` + +```Go +func (api *API) ListUsers(args ...interface{}) ([]User, error) +``` + +```Go +func (api *API) ListVPNs(args ...interface{}) ([]VPN, error) +``` + +```Go +func (api *API) LoadServerDvd(server_id string, dvd_id string) (*Server, error) +``` + +```Go +func (api *API) ModifyMonitoringPolicyPort(mp_id string, port_id string, mp_port *MonitoringPort) (*MonitoringPolicy, error) +``` + +```Go +func (api *API) ModifyMonitoringPolicyProcess(mp_id string, proc_id string, mp_proc *MonitoringProcess) (*MonitoringPolicy, error) +``` + +```Go +func (api *API) ModifyRole(role_id string, name string, description string, state string) (*Role, error) +``` + +```Go +func (api *API) ModifyRolePermissions(role_id string, perm *Permissions) (*Role, error) +``` + +```Go +func (api *API) ModifyUser(user_id string, user *UserRequest) (*User, error) +``` + +```Go +func (api *API) ModifyUserApi(user_id string, active bool) (*User, error) +``` + +```Go +func (api *API) ModifyVPN(vpn_id string, name string, description string) (*VPN, error) +``` + +```Go +func (api *API) Ping() ([]string, error) +``` + +```Go +func (api *API) PingAuth() ([]string, error) +``` + +```Go +func (api *API) RebootServer(server_id string, is_hardware bool) (*Server, error) +``` + +```Go +func (api *API) ReinstallServerImage(server_id string, image_id string, password string, 
fp_id string) (*Server, error) +``` + +```Go +func (api *API) RemoveMonitoringPolicyServer(mp_id string, ser_id string) (*MonitoringPolicy, error) +``` + +```Go +func (api *API) RemoveRoleUser(role_id string, user_id string) (*Role, error) +``` + +```Go +func (api *API) RemoveServerPrivateNetwork(server_id string, pn_id string) (*Server, error) +``` + +```Go +func (api *API) RemoveUserApiAllowedIp(user_id string, ip string) (*User, error) +``` + +```Go +func (api *API) RenameServer(server_id string, new_name string, new_desc string) (*Server, error) +``` + +```Go +func (api *API) RenewUserApiKey(user_id string) (*User, error) +``` + +```Go +func (api *API) ResizeServerHdd(server_id string, hdd_id string, new_size int) (*Server, error) +``` + +```Go +func (api *API) RestoreServerSnapshot(server_id string, snapshot_id string) (*Server, error) +``` + +```Go +func (api *API) ShutdownServer(server_id string, is_hardware bool) (*Server, error) +``` + +```Go +func (api *API) StartServer(server_id string) (*Server, error) +``` + +```Go +func (api *API) UnassignServerIpFirewallPolicy(server_id string, ip_id string) (*Server, error) +``` + +```Go +func (api *API) UnassignServerIpLoadBalancer(server_id string, ip_id string, lb_id string) (*Server, error) +``` + +```Go +func (api *API) UpdateFirewallPolicy(fp_id string, fp_new_name string, fp_new_desc string) (*FirewallPolicy, error) +``` + +```Go +func (api *API) UpdateImage(img_id string, new_name string, new_desc string, new_freq string) (*Image, error) +``` + +```Go +func (api *API) UpdateLoadBalancer(lb_id string, request *LoadBalancerRequest) (*LoadBalancer, error) +``` + +```Go +func (api *API) UpdateMonitoringPolicy(mp_id string, mp *MonitoringPolicy) (*MonitoringPolicy, error) +``` + +```Go +func (api *API) UpdatePrivateNetwork(pn_id string, request *PrivateNetworkRequest) (*PrivateNetwork, error) +``` + +```Go +func (api *API) UpdatePublicIp(ip_id string, reverse_dns string) (*PublicIp, error) +``` + +```Go +func 
(api *API) UpdateServerHardware(server_id string, hardware *Hardware) (*Server, error) +``` + +```Go +func (api *API) UpdateSharedStorage(ss_id string, request *SharedStorageRequest) (*SharedStorage, error) +``` + +```Go +func (api *API) UpdateSharedStorageCredentials(new_pass string) ([]SharedStorageAccess, error) +``` + +```Go +func (api *API) WaitForState(in ApiInstance, state string, sec time.Duration, count int) error +``` + +```Go +func (api *API) WaitUntilDeleted(in ApiInstance) error +``` + +```Go +func (fp *FirewallPolicy) GetState() (string, error) +``` + +```Go +func (im *Image) GetState() (string, error) +``` + +```Go +func (lb *LoadBalancer) GetState() (string, error) +``` + +```Go +func (mp *MonitoringPolicy) GetState() (string, error) +``` + +```Go +func (pn *PrivateNetwork) GetState() (string, error) +``` + +```Go +func (ip *PublicIp) GetState() (string, error) +``` + +```Go +func (role *Role) GetState() (string, error) +``` + +```Go +func (s *Server) GetState() (string, error) +``` + +```Go +func (ss *SharedStorage) GetState() (string, error) +``` + +```Go +func (u *User) GetState() (string, error) +``` + +```Go +func (vpn *VPN) GetState() (string, error) +``` + +```Go +func Bool2Pointer(input bool) *bool +``` + +```Go +func Int2Pointer(input int) *int +``` + +```Go +func (bp *BackupPerm) SetAll(value bool) +``` + +```Go +func (fp *FirewallPerm) SetAll(value bool) +``` + +```Go +func (imp *ImagePerm) SetAll(value bool) +``` + +```Go +func (inp *InvoicePerm) SetAll(value bool) +``` + +```Go +func (ipp *IPPerm) SetAll(value bool) +``` + +```Go +func (lbp *LoadBalancerPerm) SetAll(value bool) +``` + +```Go +func (lp *LogPerm) SetAll(value bool) +``` + +```Go +func (mcp *MonitorCenterPerm) SetAll(value bool) +``` + +```Go +func (mpp *MonitorPolicyPerm) SetAll(value bool) +``` + +```Go +func (p *Permissions) SetAll(v bool) +``` + +```Go +func (pnp *PrivateNetworkPerm) SetAll(value bool) +``` + 
+```Go +func (rp *RolePerm) SetAll(value bool) +``` + +```Go +func (sp *ServerPerm) SetAll(value bool) +``` + +```Go +func (ssp *SharedStoragePerm) SetAll(value bool) +``` + +```Go +func (up *UsagePerm) SetAll(value bool) +``` + +```Go +func (up *UserPerm) SetAll(value bool) +``` + +```Go +func (vpnp *VPNPerm) SetAll(value bool) +``` + +```Go +func SetBaseUrl(newbaseurl string) string +``` + +```Go +func SetToken(newtoken string) string +``` + diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/datacenters.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/datacenters.go new file mode 100644 index 000000000..cf193fb88 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/datacenters.go @@ -0,0 +1,36 @@ +package oneandone + +import "net/http" + +type Datacenter struct { + idField + CountryCode string `json:"country_code,omitempty"` + Location string `json:"location,omitempty"` +} + +// GET /datacenters +func (api *API) ListDatacenters(args ...interface{}) ([]Datacenter, error) { + url, err := processQueryParams(createUrl(api, datacenterPathSegment), args...) 
+ if err != nil { + return nil, err + } + result := []Datacenter{} + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + + return result, nil +} + +// GET /datacenters/{datacenter_id} +func (api *API) GetDatacenter(dc_id string) (*Datacenter, error) { + result := new(Datacenter) + url := createUrl(api, datacenterPathSegment, dc_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + + return result, nil +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/dvdisos.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/dvdisos.go new file mode 100644 index 000000000..ba54c3f7f --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/dvdisos.go @@ -0,0 +1,48 @@ +package oneandone + +import "net/http" + +// Struct to describe an ISO image that can be used to boot a server. +// +// Values of this type describe ISO images that can be inserted into the server's virtual DVD drive. +// +// +type DvdIso struct { + Identity + OsFamily string `json:"os_family,omitempty"` + Os string `json:"os,omitempty"` + OsVersion string `json:"os_version,omitempty"` + Type string `json:"type,omitempty"` + AvailableDatacenters []string `json:"available_datacenters,omitempty"` + Architecture interface{} `json:"os_architecture,omitempty"` + ApiPtr +} + +// GET /dvd_isos +func (api *API) ListDvdIsos(args ...interface{}) ([]DvdIso, error) { + url, err := processQueryParams(createUrl(api, dvdIsoPathSegment), args...) 
+ if err != nil { + return nil, err + } + result := []DvdIso{} + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// GET /dvd_isos/{id} +func (api *API) GetDvdIso(dvd_id string) (*DvdIso, error) { + result := new(DvdIso) + url := createUrl(api, dvdIsoPathSegment, dvd_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/errors.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/errors.go new file mode 100644 index 000000000..08cc9c250 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/errors.go @@ -0,0 +1,27 @@ +package oneandone + +import ( + "fmt" +) + +type errorResponse struct { + Type string `json:"type"` + Message string `json:"message"` +} + +type apiError struct { + httpStatusCode int + message string +} + +func (e apiError) Error() string { + return fmt.Sprintf("%d - %s", e.httpStatusCode, e.message) +} + +func (e *apiError) HttpStatusCode() int { + return e.httpStatusCode +} + +func (e *apiError) Message() string { + return e.message +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/firewallpolicies.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/firewallpolicies.go new file mode 100644 index 000000000..3e89c9b17 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/firewallpolicies.go @@ -0,0 +1,208 @@ +package oneandone + +import ( + "net/http" +) + +type FirewallPolicy struct { + Identity + descField + DefaultPolicy uint8 `json:"default"` + CloudpanelId string `json:"cloudpanel_id,omitempty"` + CreationDate string `json:"creation_date,omitempty"` + State string `json:"state,omitempty"` + Rules []FirewallPolicyRule `json:"rules,omitempty"` + ServerIps []ServerIpInfo `json:"server_ips,omitempty"` + 
ApiPtr +} + +type FirewallPolicyRule struct { + idField + Protocol string `json:"protocol,omitempty"` + PortFrom *int `json:"port_from,omitempty"` + PortTo *int `json:"port_to,omitempty"` + SourceIp string `json:"source,omitempty"` +} + +type FirewallPolicyRequest struct { + Name string `json:"name,omitempty"` + Description string `json:"description,omitempty"` + Rules []FirewallPolicyRule `json:"rules,omitempty"` +} + +// GET /firewall_policies +func (api *API) ListFirewallPolicies(args ...interface{}) ([]FirewallPolicy, error) { + url, err := processQueryParams(createUrl(api, firewallPolicyPathSegment), args...) + if err != nil { + return nil, err + } + result := []FirewallPolicy{} + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// POST /firewall_policies +func (api *API) CreateFirewallPolicy(fp_data *FirewallPolicyRequest) (string, *FirewallPolicy, error) { + result := new(FirewallPolicy) + url := createUrl(api, firewallPolicyPathSegment) + err := api.Client.Post(url, &fp_data, &result, http.StatusAccepted) + if err != nil { + return "", nil, err + } + result.api = api + return result.Id, result, nil +} + +// GET /firewall_policies/{id} +func (api *API) GetFirewallPolicy(fp_id string) (*FirewallPolicy, error) { + result := new(FirewallPolicy) + url := createUrl(api, firewallPolicyPathSegment, fp_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil + +} + +// DELETE /firewall_policies/{id} +func (api *API) DeleteFirewallPolicy(fp_id string) (*FirewallPolicy, error) { + result := new(FirewallPolicy) + url := createUrl(api, firewallPolicyPathSegment, fp_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// PUT /firewall_policies/{id} +func (api 
*API) UpdateFirewallPolicy(fp_id string, fp_new_name string, fp_new_desc string) (*FirewallPolicy, error) { + result := new(FirewallPolicy) + data := FirewallPolicyRequest{ + Name: fp_new_name, + Description: fp_new_desc, + } + url := createUrl(api, firewallPolicyPathSegment, fp_id) + err := api.Client.Put(url, &data, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /firewall_policies/{id}/server_ips +func (api *API) ListFirewallPolicyServerIps(fp_id string) ([]ServerIpInfo, error) { + result := []ServerIpInfo{} + url := createUrl(api, firewallPolicyPathSegment, fp_id, "server_ips") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// GET /firewall_policies/{id}/server_ips/{id} +func (api *API) GetFirewallPolicyServerIp(fp_id string, ip_id string) (*ServerIpInfo, error) { + result := new(ServerIpInfo) + url := createUrl(api, firewallPolicyPathSegment, fp_id, "server_ips", ip_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// POST /firewall_policies/{id}/server_ips +func (api *API) AddFirewallPolicyServerIps(fp_id string, ip_ids []string) (*FirewallPolicy, error) { + result := new(FirewallPolicy) + request := serverIps{ + ServerIps: ip_ids, + } + + url := createUrl(api, firewallPolicyPathSegment, fp_id, "server_ips") + err := api.Client.Post(url, &request, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /firewall_policies/{id}/server_ips/{id} +func (api *API) DeleteFirewallPolicyServerIp(fp_id string, ip_id string) (*FirewallPolicy, error) { + result := new(FirewallPolicy) + url := createUrl(api, firewallPolicyPathSegment, fp_id, "server_ips", ip_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + 
return result, nil +} + +// GET /firewall_policies/{id}/rules +func (api *API) ListFirewallPolicyRules(fp_id string) ([]FirewallPolicyRule, error) { + result := []FirewallPolicyRule{} + url := createUrl(api, firewallPolicyPathSegment, fp_id, "rules") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// POST /firewall_policies/{id}/rules +func (api *API) AddFirewallPolicyRules(fp_id string, fp_rules []FirewallPolicyRule) (*FirewallPolicy, error) { + result := new(FirewallPolicy) + data := struct { + Rules []FirewallPolicyRule `json:"rules"` + }{fp_rules} + url := createUrl(api, firewallPolicyPathSegment, fp_id, "rules") + err := api.Client.Post(url, &data, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /firewall_policies/{id}/rules/{id} +func (api *API) GetFirewallPolicyRule(fp_id string, rule_id string) (*FirewallPolicyRule, error) { + result := new(FirewallPolicyRule) + url := createUrl(api, firewallPolicyPathSegment, fp_id, "rules", rule_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// DELETE /firewall_policies/{id}/rules/{id} +func (api *API) DeleteFirewallPolicyRule(fp_id string, rule_id string) (*FirewallPolicy, error) { + result := new(FirewallPolicy) + url := createUrl(api, firewallPolicyPathSegment, fp_id, "rules", rule_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +func (fp *FirewallPolicy) GetState() (string, error) { + in, err := fp.api.GetFirewallPolicy(fp.Id) + if in == nil { + return "", err + } + return in.State, err +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/images.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/images.go new file mode 100644 index 000000000..a3551cef7 --- /dev/null +++ 
b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/images.go @@ -0,0 +1,110 @@ +package oneandone + +import ( + "net/http" +) + +type Image struct { + idField + ImageConfig + MinHddSize int `json:"min_hdd_size"` + Architecture *int `json:"os_architecture"` + CloudPanelId string `json:"cloudpanel_id,omitempty"` + CreationDate string `json:"creation_date,omitempty"` + State string `json:"state,omitempty"` + OsImageType string `json:"os_image_type,omitempty"` + OsFamily string `json:"os_family,omitempty"` + Os string `json:"os,omitempty"` + OsVersion string `json:"os_version,omitempty"` + Type string `json:"type,omitempty"` + Licenses []License `json:"licenses,omitempty"` + Hdds []Hdd `json:"hdds,omitempty"` + Datacenter *Datacenter `json:"datacenter,omitempty"` + ApiPtr +} + +type ImageConfig struct { + Name string `json:"name,omitempty"` + Description string `json:"description,omitempty"` + Frequency string `json:"frequency,omitempty"` + ServerId string `json:"server_id,omitempty"` + NumImages int `json:"num_images"` +} + +// GET /images +func (api *API) ListImages(args ...interface{}) ([]Image, error) { + url, err := processQueryParams(createUrl(api, imagePathSegment), args...) 
+ if err != nil { + return nil, err + } + result := []Image{} + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// POST /images +func (api *API) CreateImage(request *ImageConfig) (string, *Image, error) { + res := new(Image) + url := createUrl(api, imagePathSegment) + err := api.Client.Post(url, &request, &res, http.StatusAccepted) + if err != nil { + return "", nil, err + } + res.api = api + return res.Id, res, nil +} + +// GET /images/{id} +func (api *API) GetImage(img_id string) (*Image, error) { + result := new(Image) + url := createUrl(api, imagePathSegment, img_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /images/{id} +func (api *API) DeleteImage(img_id string) (*Image, error) { + result := new(Image) + url := createUrl(api, imagePathSegment, img_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// PUT /images/{id} +func (api *API) UpdateImage(img_id string, new_name string, new_desc string, new_freq string) (*Image, error) { + result := new(Image) + req := struct { + Name string `json:"name,omitempty"` + Description string `json:"description,omitempty"` + Frequency string `json:"frequency,omitempty"` + }{Name: new_name, Description: new_desc, Frequency: new_freq} + url := createUrl(api, imagePathSegment, img_id) + err := api.Client.Put(url, &req, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +func (im *Image) GetState() (string, error) { + in, err := im.api.GetImage(im.Id) + if in == nil { + return "", err + } + return in.State, err +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/loadbalancers.go 
b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/loadbalancers.go new file mode 100644 index 000000000..c965a25a8 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/loadbalancers.go @@ -0,0 +1,219 @@ +package oneandone + +import ( + "net/http" +) + +type LoadBalancer struct { + ApiPtr + idField + Name string `json:"name,omitempty"` + Description string `json:"description,omitempty"` + State string `json:"state,omitempty"` + CreationDate string `json:"creation_date,omitempty"` + Ip string `json:"ip,omitempty"` + HealthCheckTest string `json:"health_check_test,omitempty"` + HealthCheckInterval int `json:"health_check_interval"` + HealthCheckPath string `json:"health_check_path,omitempty"` + HealthCheckPathParser string `json:"health_check_path_parser,omitempty"` + Persistence bool `json:"persistence"` + PersistenceTime int `json:"persistence_time"` + Method string `json:"method,omitempty"` + Rules []LoadBalancerRule `json:"rules,omitempty"` + ServerIps []ServerIpInfo `json:"server_ips,omitempty"` + Datacenter *Datacenter `json:"datacenter,omitempty"` + CloudPanelId string `json:"cloudpanel_id,omitempty"` +} + +type LoadBalancerRule struct { + idField + Protocol string `json:"protocol,omitempty"` + PortBalancer uint16 `json:"port_balancer"` + PortServer uint16 `json:"port_server"` + Source string `json:"source,omitempty"` +} + +type LoadBalancerRequest struct { + Name string `json:"name,omitempty"` + Description string `json:"description,omitempty"` + DatacenterId string `json:"datacenter_id,omitempty"` + HealthCheckTest string `json:"health_check_test,omitempty"` + HealthCheckInterval *int `json:"health_check_interval"` + HealthCheckPath string `json:"health_check_path,omitempty"` + HealthCheckPathParser string `json:"health_check_path_parser,omitempty"` + Persistence *bool `json:"persistence"` + PersistenceTime *int `json:"persistence_time"` + Method string `json:"method,omitempty"` + Rules []LoadBalancerRule `json:"rules,omitempty"` +} + 
+// GET /load_balancers +func (api *API) ListLoadBalancers(args ...interface{}) ([]LoadBalancer, error) { + url, err := processQueryParams(createUrl(api, loadBalancerPathSegment), args...) + if err != nil { + return nil, err + } + result := []LoadBalancer{} + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// POST /load_balancers +func (api *API) CreateLoadBalancer(request *LoadBalancerRequest) (string, *LoadBalancer, error) { + url := createUrl(api, loadBalancerPathSegment) + result := new(LoadBalancer) + err := api.Client.Post(url, &request, &result, http.StatusAccepted) + if err != nil { + return "", nil, err + } + result.api = api + return result.Id, result, nil +} + +// GET /load_balancers/{id} +func (api *API) GetLoadBalancer(lb_id string) (*LoadBalancer, error) { + url := createUrl(api, loadBalancerPathSegment, lb_id) + result := new(LoadBalancer) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /load_balancers/{id} +func (api *API) DeleteLoadBalancer(lb_id string) (*LoadBalancer, error) { + url := createUrl(api, loadBalancerPathSegment, lb_id) + result := new(LoadBalancer) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// PUT /load_balancers/{id} +func (api *API) UpdateLoadBalancer(lb_id string, request *LoadBalancerRequest) (*LoadBalancer, error) { + url := createUrl(api, loadBalancerPathSegment, lb_id) + result := new(LoadBalancer) + err := api.Client.Put(url, &request, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /load_balancers/{id}/server_ips +func (api *API) ListLoadBalancerServerIps(lb_id string) ([]ServerIpInfo, error) { + result := 
[]ServerIpInfo{} + url := createUrl(api, loadBalancerPathSegment, lb_id, "server_ips") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// GET /load_balancers/{id}/server_ips/{id} +func (api *API) GetLoadBalancerServerIp(lb_id string, ip_id string) (*ServerIpInfo, error) { + result := new(ServerIpInfo) + url := createUrl(api, loadBalancerPathSegment, lb_id, "server_ips", ip_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// POST /load_balancers/{id}/server_ips +func (api *API) AddLoadBalancerServerIps(lb_id string, ip_ids []string) (*LoadBalancer, error) { + result := new(LoadBalancer) + request := serverIps{ + ServerIps: ip_ids, + } + url := createUrl(api, loadBalancerPathSegment, lb_id, "server_ips") + err := api.Client.Post(url, &request, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /load_balancers/{id}/server_ips/{id} +func (api *API) DeleteLoadBalancerServerIp(lb_id string, ip_id string) (*LoadBalancer, error) { + result := new(LoadBalancer) + url := createUrl(api, loadBalancerPathSegment, lb_id, "server_ips", ip_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /load_balancers/{load_balancer_id}/rules +func (api *API) ListLoadBalancerRules(lb_id string) ([]LoadBalancerRule, error) { + result := []LoadBalancerRule{} + url := createUrl(api, loadBalancerPathSegment, lb_id, "rules") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// POST /load_balancers/{load_balancer_id}/rules +func (api *API) AddLoadBalancerRules(lb_id string, lb_rules []LoadBalancerRule) (*LoadBalancer, error) { + result := new(LoadBalancer) + data := struct { + Rules []LoadBalancerRule 
`json:"rules"` + }{lb_rules} + url := createUrl(api, loadBalancerPathSegment, lb_id, "rules") + err := api.Client.Post(url, &data, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /load_balancers/{load_balancer_id}/rules/{rule_id} +func (api *API) GetLoadBalancerRule(lb_id string, rule_id string) (*LoadBalancerRule, error) { + result := new(LoadBalancerRule) + url := createUrl(api, loadBalancerPathSegment, lb_id, "rules", rule_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// DELETE /load_balancers/{load_balancer_id}/rules/{rule_id} +func (api *API) DeleteLoadBalancerRule(lb_id string, rule_id string) (*LoadBalancer, error) { + result := new(LoadBalancer) + url := createUrl(api, loadBalancerPathSegment, lb_id, "rules", rule_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +func (lb *LoadBalancer) GetState() (string, error) { + in, err := lb.api.GetLoadBalancer(lb.Id) + if in == nil { + return "", err + } + return in.State, err +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/logs.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/logs.go new file mode 100644 index 000000000..b16ef31d5 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/logs.go @@ -0,0 +1,50 @@ +package oneandone + +import ( + "net/http" + "time" +) + +type Log struct { + ApiPtr + idField + typeField + CloudPanelId string `json:"cloudpanel_id,omitempty"` + SiteId string `json:"site_id,omitempty"` + StartDate string `json:"start_date,omitempty"` + EndDate string `json:"end_date,omitempty"` + Action string `json:"action,omitempty"` + Duration int `json:"duration"` + Status *Status `json:"Status,omitempty"` + Resource *Identity `json:"resource,omitempty"` + User *Identity `json:"user,omitempty"` +} + 
+// GET /logs +func (api *API) ListLogs(period string, sd *time.Time, ed *time.Time, args ...interface{}) ([]Log, error) { + result := []Log{} + url, err := processQueryParamsExt(createUrl(api, logPathSegment), period, sd, ed, args...) + if err != nil { + return nil, err + } + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// GET /logs/{id} +func (api *API) GetLog(log_id string) (*Log, error) { + result := new(Log) + url := createUrl(api, logPathSegment, log_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/monitoringcenter.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/monitoringcenter.go new file mode 100644 index 000000000..86e899889 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/monitoringcenter.go @@ -0,0 +1,158 @@ +package oneandone + +import ( + "errors" + "net/http" + "time" +) + +type MonServerUsageSummary struct { + Identity + Agent *monitoringAgent `json:"agent,omitempty"` + Alerts *monitoringAlerts `json:"alerts,omitempty"` + Status *monitoringStatus `json:"status,omitempty"` + ApiPtr +} + +type MonServerUsageDetails struct { + Identity + Status *statusState `json:"status,omitempty"` + Agent *monitoringAgent `json:"agent,omitempty"` + Alerts *monitoringAlerts `json:"alerts,omitempty"` + CpuStatus *utilizationStatus `json:"cpu,omitempty"` + DiskStatus *utilizationStatus `json:"disk,omitempty"` + RamStatus *utilizationStatus `json:"ram,omitempty"` + PingStatus *pingStatus `json:"internal_ping,omitempty"` + TransferStatus *transferStatus `json:"transfer,omitempty"` + ApiPtr +} + +type monitoringStatus struct { + State string `json:"state,omitempty"` + Cpu *statusState `json:"cpu,omitempty"` + Disk *statusState `json:"disk,omitempty"` + 
InternalPing *statusState `json:"internal_ping,omitempty"` + Ram *statusState `json:"ram,omitempty"` + Transfer *statusState `json:"transfer,omitempty"` +} + +type utilizationStatus struct { + CriticalThreshold int `json:"critical,omitempty"` + WarningThreshold int `json:"warning,omitempty"` + Status string `json:"status,omitempty"` + Data []usageData `json:"data,omitempty"` + Unit *usageUnit `json:"unit,omitempty"` +} + +type pingStatus struct { + CriticalThreshold int `json:"critical,omitempty"` + WarningThreshold int `json:"warning,omitempty"` + Status string `json:"status,omitempty"` + Data []pingData `json:"data,omitempty"` + Unit *pingUnit `json:"unit,omitempty"` +} + +type transferStatus struct { + CriticalThreshold int `json:"critical,omitempty"` + WarningThreshold int `json:"warning,omitempty"` + Status string `json:"status,omitempty"` + Data []transferData `json:"data,omitempty"` + Unit *transferUnit `json:"unit,omitempty"` +} + +type monitoringAgent struct { + AgentInstalled bool `json:"agent_installed"` + MissingAgentAlert bool `json:"missing_agent_alert"` + MonitoringNeedsAgent bool `json:"monitoring_needs_agent"` +} + +type monitoringAlerts struct { + Ports *monitoringAlertInfo `json:"ports,omitempty"` + Process *monitoringAlertInfo `json:"process,omitempty"` + Resources *monitoringAlertInfo `json:"resources,omitempty"` +} + +type monitoringAlertInfo struct { + Ok int `json:"ok"` + Warning int `json:"warning"` + Critical int `json:"critical"` +} + +type usageData struct { + Date string `json:"date,omitempty"` + UsedPercent float32 `json:"used_percent"` +} + +type usageUnit struct { + UsedPercent string `json:"used_percent,omitempty"` +} + +type pingUnit struct { + PackagesLost string `json:"pl,omitempty"` + AccessTime string `json:"rta,omitempty"` +} + +type pingData struct { + Date string `json:"date,omitempty"` + PackagesLost int `json:"pl"` + AccessTime float32 `json:"rta"` +} + +type transferUnit struct { + Downstream string 
`json:"downstream,omitempty"` + Upstream string `json:"upstream,omitempty"` +} + +type transferData struct { + Date string `json:"date,omitempty"` + Downstream int `json:"downstream"` + Upstream int `json:"upstream"` +} + +// GET /monitoring_center +func (api *API) ListMonitoringServersUsages(args ...interface{}) ([]MonServerUsageSummary, error) { + url, err := processQueryParams(createUrl(api, monitorCenterPathSegment), args...) + if err != nil { + return nil, err + } + result := []MonServerUsageSummary{} + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// GET /monitoring_center/{server_id} +func (api *API) GetMonitoringServerUsage(ser_id string, period string, dates ...time.Time) (*MonServerUsageDetails, error) { + if period == "" { + return nil, errors.New("Time period must be provided.") + } + + params := make(map[string]interface{}, len(dates)+1) + params["period"] = period + + if len(dates) == 2 { + if dates[0].After(dates[1]) { + return nil, errors.New("Start date cannot be after end date.") + } + + params["start_date"] = dates[0].Format(time.RFC3339) + params["end_date"] = dates[1].Format(time.RFC3339) + + } else if len(dates) > 0 { + return nil, errors.New("Start and end dates must be provided.") + } + url := createUrl(api, monitorCenterPathSegment, ser_id) + url = appendQueryParams(url, params) + result := new(MonServerUsageDetails) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/monitoringpolicies.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/monitoringpolicies.go new file mode 100644 index 000000000..4272461b6 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/monitoringpolicies.go @@ -0,0 +1,305 @@ +package oneandone + +import ( + 
"net/http" +) + +type MonitoringPolicy struct { + ApiPtr + idField + Name string `json:"name,omitempty"` + Description string `json:"description,omitempty"` + State string `json:"state,omitempty"` + Default *int `json:"default,omitempty"` + CreationDate string `json:"creation_date,omitempty"` + Email string `json:"email,omitempty"` + Agent bool `json:"agent"` + Servers []Identity `json:"servers,omitempty"` + Thresholds *MonitoringThreshold `json:"thresholds,omitempty"` + Ports []MonitoringPort `json:"ports,omitempty"` + Processes []MonitoringProcess `json:"processes,omitempty"` + CloudPanelId string `json:"cloudpanel_id,omitempty"` +} + +type MonitoringThreshold struct { + Cpu *MonitoringLevel `json:"cpu,omitempty"` + Ram *MonitoringLevel `json:"ram,omitempty"` + Disk *MonitoringLevel `json:"disk,omitempty"` + Transfer *MonitoringLevel `json:"transfer,omitempty"` + InternalPing *MonitoringLevel `json:"internal_ping,omitempty"` +} + +type MonitoringLevel struct { + Warning *MonitoringValue `json:"warning,omitempty"` + Critical *MonitoringValue `json:"critical,omitempty"` +} + +type MonitoringValue struct { + Value int `json:"value"` + Alert bool `json:"alert"` +} + +type MonitoringPort struct { + idField + Protocol string `json:"protocol,omitempty"` + Port int `json:"port"` + AlertIf string `json:"alert_if,omitempty"` + EmailNotification bool `json:"email_notification"` +} + +type MonitoringProcess struct { + idField + Process string `json:"process,omitempty"` + AlertIf string `json:"alert_if,omitempty"` + EmailNotification bool `json:"email_notification"` +} + +// GET /monitoring_policies +func (api *API) ListMonitoringPolicies(args ...interface{}) ([]MonitoringPolicy, error) { + url, err := processQueryParams(createUrl(api, monitorPolicyPathSegment), args...) 
+ if err != nil { + return nil, err + } + result := []MonitoringPolicy{} + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// POST /monitoring_policies +func (api *API) CreateMonitoringPolicy(mp *MonitoringPolicy) (string, *MonitoringPolicy, error) { + result := new(MonitoringPolicy) + url := createUrl(api, monitorPolicyPathSegment) + err := api.Client.Post(url, &mp, &result, http.StatusCreated) + if err != nil { + return "", nil, err + } + result.api = api + return result.Id, result, nil +} + +// GET /monitoring_policies/{id} +func (api *API) GetMonitoringPolicy(mp_id string) (*MonitoringPolicy, error) { + result := new(MonitoringPolicy) + url := createUrl(api, monitorPolicyPathSegment, mp_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /monitoring_policies/{id} +func (api *API) DeleteMonitoringPolicy(mp_id string) (*MonitoringPolicy, error) { + result := new(MonitoringPolicy) + url := createUrl(api, monitorPolicyPathSegment, mp_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// PUT /monitoring_policies/{id} +func (api *API) UpdateMonitoringPolicy(mp_id string, mp *MonitoringPolicy) (*MonitoringPolicy, error) { + url := createUrl(api, monitorPolicyPathSegment, mp_id) + result := new(MonitoringPolicy) + err := api.Client.Put(url, &mp, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /monitoring_policies/{id}/ports +func (api *API) ListMonitoringPolicyPorts(mp_id string) ([]MonitoringPort, error) { + result := []MonitoringPort{} + url := createUrl(api, monitorPolicyPathSegment, mp_id, "ports") + err := api.Client.Get(url, &result, http.StatusOK) + if err 
!= nil { + return nil, err + } + return result, nil +} + +// POST /monitoring_policies/{id}/ports +func (api *API) AddMonitoringPolicyPorts(mp_id string, mp_ports []MonitoringPort) (*MonitoringPolicy, error) { + result := new(MonitoringPolicy) + data := struct { + Ports []MonitoringPort `json:"ports"` + }{mp_ports} + url := createUrl(api, monitorPolicyPathSegment, mp_id, "ports") + err := api.Client.Post(url, &data, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /monitoring_policies/{id}/ports/{id} +func (api *API) GetMonitoringPolicyPort(mp_id string, port_id string) (*MonitoringPort, error) { + result := new(MonitoringPort) + url := createUrl(api, monitorPolicyPathSegment, mp_id, "ports", port_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// DELETE /monitoring_policies/{id}/ports/{id} +func (api *API) DeleteMonitoringPolicyPort(mp_id string, port_id string) (*MonitoringPolicy, error) { + result := new(MonitoringPolicy) + url := createUrl(api, monitorPolicyPathSegment, mp_id, "ports", port_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// PUT /monitoring_policies/{id}/ports/{id} +func (api *API) ModifyMonitoringPolicyPort(mp_id string, port_id string, mp_port *MonitoringPort) (*MonitoringPolicy, error) { + url := createUrl(api, monitorPolicyPathSegment, mp_id, "ports", port_id) + result := new(MonitoringPolicy) + req := struct { + Ports *MonitoringPort `json:"ports"` + }{mp_port} + err := api.Client.Put(url, &req, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /monitoring_policies/{id}/processes +func (api *API) ListMonitoringPolicyProcesses(mp_id string) ([]MonitoringProcess, error) { + result := []MonitoringProcess{} + url := 
createUrl(api, monitorPolicyPathSegment, mp_id, "processes") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// POST /monitoring_policies/{id}/processes +func (api *API) AddMonitoringPolicyProcesses(mp_id string, mp_procs []MonitoringProcess) (*MonitoringPolicy, error) { + result := new(MonitoringPolicy) + request := struct { + Processes []MonitoringProcess `json:"processes"` + }{mp_procs} + url := createUrl(api, monitorPolicyPathSegment, mp_id, "processes") + err := api.Client.Post(url, &request, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /monitoring_policies/{id}/processes/{id} +func (api *API) GetMonitoringPolicyProcess(mp_id string, proc_id string) (*MonitoringProcess, error) { + result := new(MonitoringProcess) + url := createUrl(api, monitorPolicyPathSegment, mp_id, "processes", proc_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// DELETE /monitoring_policies/{id}/processes/{id} +func (api *API) DeleteMonitoringPolicyProcess(mp_id string, proc_id string) (*MonitoringPolicy, error) { + result := new(MonitoringPolicy) + url := createUrl(api, monitorPolicyPathSegment, mp_id, "processes", proc_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// PUT /monitoring_policies/{id}/processes/{id} +func (api *API) ModifyMonitoringPolicyProcess(mp_id string, proc_id string, mp_proc *MonitoringProcess) (*MonitoringPolicy, error) { + url := createUrl(api, monitorPolicyPathSegment, mp_id, "processes", proc_id) + result := new(MonitoringPolicy) + req := struct { + Processes *MonitoringProcess `json:"processes"` + }{mp_proc} + err := api.Client.Put(url, &req, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api 
+ return result, nil +} + +// GET /monitoring_policies/{id}/servers +func (api *API) ListMonitoringPolicyServers(mp_id string) ([]Identity, error) { + result := []Identity{} + url := createUrl(api, monitorPolicyPathSegment, mp_id, "servers") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// POST /monitoring_policies/{id}/servers +func (api *API) AttachMonitoringPolicyServers(mp_id string, sids []string) (*MonitoringPolicy, error) { + result := new(MonitoringPolicy) + request := servers{ + Servers: sids, + } + url := createUrl(api, monitorPolicyPathSegment, mp_id, "servers") + err := api.Client.Post(url, &request, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /monitoring_policies/{id}/servers/{id} +func (api *API) GetMonitoringPolicyServer(mp_id string, ser_id string) (*Identity, error) { + result := new(Identity) + url := createUrl(api, monitorPolicyPathSegment, mp_id, "servers", ser_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// DELETE /monitoring_policies/{id}/servers/{id} +func (api *API) RemoveMonitoringPolicyServer(mp_id string, ser_id string) (*MonitoringPolicy, error) { + result := new(MonitoringPolicy) + url := createUrl(api, monitorPolicyPathSegment, mp_id, "servers", ser_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +func (mp *MonitoringPolicy) GetState() (string, error) { + in, err := mp.api.GetMonitoringPolicy(mp.Id) + if in == nil { + return "", err + } + return in.State, err +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/oneandone.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/oneandone.go new file mode 100644 index 000000000..e007fcb22 --- /dev/null +++ 
b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/oneandone.go @@ -0,0 +1,163 @@ +package oneandone + +import ( + "errors" + "net/http" + "reflect" + "time" +) + +// Struct to hold the required information for accessing the API. +// +// Instances of this type contain the URL of the endpoint to access the API as well as the API access token to be used. +// They also offer all methods for accessing the various objects that are returned by top-level resources of +// the API. +type API struct { + Endpoint string + Client *restClient +} + +type ApiPtr struct { + api *API +} + +type idField struct { + Id string `json:"id,omitempty"` +} + +type typeField struct { + Type string `json:"type,omitempty"` +} + +type nameField struct { + Name string `json:"name,omitempty"` +} + +type descField struct { + Description string `json:"description,omitempty"` +} + +type countField struct { + Count int `json:"count,omitempty"` +} + +type serverIps struct { + ServerIps []string `json:"server_ips"` +} + +type servers struct { + Servers []string `json:"servers"` +} + +type ApiInstance interface { + GetState() (string, error) +} + +const ( + datacenterPathSegment = "datacenters" + dvdIsoPathSegment = "dvd_isos" + firewallPolicyPathSegment = "firewall_policies" + imagePathSegment = "images" + loadBalancerPathSegment = "load_balancers" + logPathSegment = "logs" + monitorCenterPathSegment = "monitoring_center" + monitorPolicyPathSegment = "monitoring_policies" + pingPathSegment = "ping" + pingAuthPathSegment = "ping_auth" + pricingPathSegment = "pricing" + privateNetworkPathSegment = "private_networks" + publicIpPathSegment = "public_ips" + rolePathSegment = "roles" + serverPathSegment = "servers" + serverAppliancePathSegment = "server_appliances" + sharedStoragePathSegment = "shared_storages" + usagePathSegment = "usages" + userPathSegment = "users" + vpnPathSegment = "vpns" +) + +// Struct to hold the status of an API object.
+// +// Values of this type are used to represent the status of API objects like servers, firewall policies and the like. +// +// The value of the "State" field can represent fixed states like "ACTIVE" or "POWERED_ON" but also transitional +// states like "POWERING_ON" or "CONFIGURING". +// +// For fixed states the "Percent" field is empty, whereas for transitional states it contains the progress of the +// transition in percent. +type Status struct { + State string `json:"state"` + Percent int `json:"percent"` +} + +type statusState struct { + State string `json:"state,omitempty"` +} + +type Identity struct { + idField + nameField +} + +type License struct { + nameField +} + +// Creates a new API instance. +// +// Explanations about the given token and url information can be found online under the following url TODO add url! +func New(token string, url string) *API { + api := new(API) + api.Endpoint = url + api.Client = newRestClient(token) + return api +} + +// Converts a given integer value into a pointer of the same type. +func Int2Pointer(input int) *int { + result := new(int) + *result = input + return result +} + +// Converts a given boolean value into a pointer of the same type. +func Bool2Pointer(input bool) *bool { + result := new(bool) + *result = input + return result +} + +// Performs busy-waiting for types that implement ApiInstance interface. +func (api *API) WaitForState(in ApiInstance, state string, sec time.Duration, count int) error { + if in != nil { + for i := 0; i < count; i++ { + s, err := in.GetState() + if err != nil { + return err + } + if s == state { + return nil + } + time.Sleep(sec * time.Second) + } + return errors.New(reflect.ValueOf(in).Type().String() + " operation timeout.") + } + return nil +} + +// Waits until instance is deleted for types that implement ApiInstance interface.
+func (api *API) WaitUntilDeleted(in ApiInstance) error { + var err error + for in != nil { + _, err = in.GetState() + if err != nil { + if apiError, ok := err.(apiError); ok && apiError.httpStatusCode == http.StatusNotFound { + return nil + } else { + return err + } + } + time.Sleep(5 * time.Second) + } + return nil +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/ping.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/ping.go new file mode 100644 index 000000000..255608885 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/ping.go @@ -0,0 +1,29 @@ +package oneandone + +import "net/http" + +// GET /ping +// Returns "PONG" if API is running +func (api *API) Ping() ([]string, error) { + url := createUrl(api, pingPathSegment) + result := []string{} + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + + return result, nil +} + +// GET /ping_auth +// Returns "PONG" if the API is running and the authentication token is valid +func (api *API) PingAuth() ([]string, error) { + url := createUrl(api, pingAuthPathSegment) + result := []string{} + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + + return result, nil +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/pricing.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/pricing.go new file mode 100644 index 000000000..90eb2abd9 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/pricing.go @@ -0,0 +1,40 @@ +package oneandone + +import "net/http" + +type Pricing struct { + Currency string `json:"currency,omitempty"` + Plan *pricingPlan `json:"pricing_plans,omitempty"` +} + +type pricingPlan struct { + Image *pricingItem `json:"image,omitempty"` + PublicIPs []pricingItem `json:"public_ips,omitempty"` + Servers *serverPricing `json:"servers,omitempty"` + SharedStorage *pricingItem `json:"shared_storage,omitempty"` + SoftwareLicenses []pricingItem 
`json:"software_licences,omitempty"` +} + +type serverPricing struct { + FixedServers []pricingItem `json:"fixed_servers,omitempty"` + FlexServers []pricingItem `json:"flexible_server,omitempty"` +} + +type pricingItem struct { + Name string `json:"name,omitempty"` + GrossPrice string `json:"price_gross,omitempty"` + NetPrice string `json:"price_net,omitempty"` + Unit string `json:"unit,omitempty"` +} + +// GET /pricing +func (api *API) GetPricing() (*Pricing, error) { + result := new(Pricing) + url := createUrl(api, pricingPathSegment) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + + return result, nil +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/privatenetworks.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/privatenetworks.go new file mode 100644 index 000000000..667494e04 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/privatenetworks.go @@ -0,0 +1,149 @@ +package oneandone + +import ( + "net/http" +) + +type PrivateNetwork struct { + Identity + descField + CloudpanelId string `json:"cloudpanel_id,omitempty"` + NetworkAddress string `json:"network_address,omitempty"` + SubnetMask string `json:"subnet_mask,omitempty"` + State string `json:"state,omitempty"` + SiteId string `json:"site_id,omitempty"` + CreationDate string `json:"creation_date,omitempty"` + Servers []Identity `json:"servers,omitempty"` + Datacenter *Datacenter `json:"datacenter,omitempty"` + ApiPtr +} + +type PrivateNetworkRequest struct { + Name string `json:"name,omitempty"` + Description string `json:"description,omitempty"` + DatacenterId string `json:"datacenter_id,omitempty"` + NetworkAddress string `json:"network_address,omitempty"` + SubnetMask string `json:"subnet_mask,omitempty"` +} + +// GET /private_networks +func (api *API) ListPrivateNetworks(args ...interface{}) ([]PrivateNetwork, error) { + url, err := processQueryParams(createUrl(api, privateNetworkPathSegment), args...) 
+ if err != nil { + return nil, err + } + result := []PrivateNetwork{} + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// POST /private_networks +func (api *API) CreatePrivateNetwork(request *PrivateNetworkRequest) (string, *PrivateNetwork, error) { + result := new(PrivateNetwork) + url := createUrl(api, privateNetworkPathSegment) + err := api.Client.Post(url, &request, &result, http.StatusAccepted) + if err != nil { + return "", nil, err + } + result.api = api + return result.Id, result, nil +} + +// GET /private_networks/{id} +func (api *API) GetPrivateNetwork(pn_id string) (*PrivateNetwork, error) { + result := new(PrivateNetwork) + url := createUrl(api, privateNetworkPathSegment, pn_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// PUT /private_networks/{id} +func (api *API) UpdatePrivateNetwork(pn_id string, request *PrivateNetworkRequest) (*PrivateNetwork, error) { + result := new(PrivateNetwork) + url := createUrl(api, privateNetworkPathSegment, pn_id) + err := api.Client.Put(url, &request, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /private_networks/{id} +func (api *API) DeletePrivateNetwork(pn_id string) (*PrivateNetwork, error) { + result := new(PrivateNetwork) + url := createUrl(api, privateNetworkPathSegment, pn_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /private_networks/{id}/servers +func (api *API) ListPrivateNetworkServers(pn_id string) ([]Identity, error) { + result := []Identity{} + url := createUrl(api, privateNetworkPathSegment, pn_id, "servers") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return 
nil, err + } + return result, nil +} + +// POST /private_networks/{id}/servers +func (api *API) AttachPrivateNetworkServers(pn_id string, sids []string) (*PrivateNetwork, error) { + result := new(PrivateNetwork) + req := servers{ + Servers: sids, + } + url := createUrl(api, privateNetworkPathSegment, pn_id, "servers") + err := api.Client.Post(url, &req, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /private_networks/{id}/servers/{id} +func (api *API) GetPrivateNetworkServer(pn_id string, server_id string) (*Identity, error) { + result := new(Identity) + url := createUrl(api, privateNetworkPathSegment, pn_id, "servers", server_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// DELETE /private_networks/{id}/servers/{id} +func (api *API) DetachPrivateNetworkServer(pn_id string, pns_id string) (*PrivateNetwork, error) { + result := new(PrivateNetwork) + url := createUrl(api, privateNetworkPathSegment, pn_id, "servers", pns_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +func (pn *PrivateNetwork) GetState() (string, error) { + in, err := pn.api.GetPrivateNetwork(pn.Id) + if in == nil { + return "", err + } + return in.State, err +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/publicips.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/publicips.go new file mode 100644 index 000000000..b0b6bd6ed --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/publicips.go @@ -0,0 +1,108 @@ +package oneandone + +import "net/http" + +type PublicIp struct { + idField + typeField + IpAddress string `json:"ip,omitempty"` + AssignedTo *assignedTo `json:"assigned_to,omitempty"` + ReverseDns string `json:"reverse_dns,omitempty"` + IsDhcp *bool `json:"is_dhcp,omitempty"` + State string 
`json:"state,omitempty"` + SiteId string `json:"site_id,omitempty"` + CreationDate string `json:"creation_date,omitempty"` + Datacenter *Datacenter `json:"datacenter,omitempty"` + ApiPtr +} + +type assignedTo struct { + Identity + typeField +} + +const ( + IpTypeV4 = "IPV4" + IpTypeV6 = "IPV6" +) + +// GET /public_ips +func (api *API) ListPublicIps(args ...interface{}) ([]PublicIp, error) { + url, err := processQueryParams(createUrl(api, publicIpPathSegment), args...) + if err != nil { + return nil, err + } + result := []PublicIp{} + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// POST /public_ips +func (api *API) CreatePublicIp(ip_type string, reverse_dns string, datacenter_id string) (string, *PublicIp, error) { + res := new(PublicIp) + url := createUrl(api, publicIpPathSegment) + req := struct { + DatacenterId string `json:"datacenter_id,omitempty"` + ReverseDns string `json:"reverse_dns,omitempty"` + Type string `json:"type,omitempty"` + }{DatacenterId: datacenter_id, ReverseDns: reverse_dns, Type: ip_type} + err := api.Client.Post(url, &req, &res, http.StatusCreated) + if err != nil { + return "", nil, err + } + res.api = api + return res.Id, res, nil +} + +// GET /public_ips/{id} +func (api *API) GetPublicIp(ip_id string) (*PublicIp, error) { + result := new(PublicIp) + url := createUrl(api, publicIpPathSegment, ip_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /public_ips/{id} +func (api *API) DeletePublicIp(ip_id string) (*PublicIp, error) { + result := new(PublicIp) + url := createUrl(api, publicIpPathSegment, ip_id) + err := api.Client.Delete(url, nil, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// PUT /public_ips/{id} +func (api *API) 
UpdatePublicIp(ip_id string, reverse_dns string) (*PublicIp, error) { + result := new(PublicIp) + url := createUrl(api, publicIpPathSegment, ip_id) + req := struct { + ReverseDns string `json:"reverse_dns,omitempty"` + }{reverse_dns} + err := api.Client.Put(url, &req, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +func (ip *PublicIp) GetState() (string, error) { + in, err := ip.api.GetPublicIp(ip.Id) + if in == nil { + return "", err + } + return in.State, err +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/restclient.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/restclient.go new file mode 100644 index 000000000..b200a1089 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/restclient.go @@ -0,0 +1,213 @@ +package oneandone + +import ( + "bytes" + "encoding/json" + "errors" + "fmt" + "io" + "io/ioutil" + "net/http" + p_url "net/url" + "time" +) + +type restClient struct { + token string +} + +func newRestClient(token string) *restClient { + restClient := new(restClient) + restClient.token = token + return restClient +} + +func (c *restClient) Get(url string, result interface{}, expectedStatus int) error { + return c.doRequest(url, "GET", nil, result, expectedStatus) +} + +func (c *restClient) Delete(url string, requestBody interface{}, result interface{}, expectedStatus int) error { + return c.doRequest(url, "DELETE", requestBody, result, expectedStatus) +} + +func (c *restClient) Post(url string, requestBody interface{}, result interface{}, expectedStatus int) error { + return c.doRequest(url, "POST", requestBody, result, expectedStatus) +} + +func (c *restClient) Put(url string, requestBody interface{}, result interface{}, expectedStatus int) error { + return c.doRequest(url, "PUT", requestBody, result, expectedStatus) +} + +func (c *restClient) doRequest(url string, method string, requestBody interface{}, result interface{}, expectedStatus int) 
error { + var bodyData io.Reader + if requestBody != nil { + data, _ := json.Marshal(requestBody) + bodyData = bytes.NewBuffer(data) + } + + request, err := http.NewRequest(method, url, bodyData) + if err != nil { + return err + } + + request.Header.Add("X-Token", c.token) + request.Header.Add("Content-Type", "application/json") + client := http.Client{} + response, err := client.Do(request) + if err = isError(response, expectedStatus, err); err != nil { + return err + } + + defer response.Body.Close() + body, err := ioutil.ReadAll(response.Body) + if err != nil { + return err + } + return c.unmarshal(body, result) +} + +func (c *restClient) unmarshal(data []byte, result interface{}) error { + err := json.Unmarshal(data, result) + if err != nil { + // handle the case when the result is an empty array instead of an object + switch err.(type) { + case *json.UnmarshalTypeError: + var ra []interface{} + e := json.Unmarshal(data, &ra) + if e != nil { + return e + } else if len(ra) > 0 { + return err + } + return nil + default: + return err + } + } + + return nil +} + +func isError(response *http.Response, expectedStatus int, err error) error { + if err != nil { + return err + } + if response != nil { + if response.StatusCode == expectedStatus { + // we got a response with the expected HTTP status code, hence no error + return nil + } + body, _ := ioutil.ReadAll(response.Body) + // extract the API's error message to be returned later + er_resp := new(errorResponse) + err = json.Unmarshal(body, er_resp) + if err != nil { + return err + } + + return apiError{response.StatusCode, fmt.Sprintf("Type: %s; Message: %s", er_resp.Type, er_resp.Message)} + } + return errors.New("Generic error - no response from the REST API service.") +} + +func createUrl(api *API, sections ...interface{}) string { + url := api.Endpoint + for _, section := range sections { + url += "/" + fmt.Sprint(section) + } + return url +} + +func makeParameterMap(args ...interface{}) (map[string]interface{}, 
error) { + qps := make(map[string]interface{}, len(args)) + var is_true bool + var page, per_page int + var sort, query, fields string + + for i, p := range args { + switch i { + case 0: + page, is_true = p.(int) + if !is_true { + return nil, errors.New("1st parameter must be a page number (integer).") + } else if page > 0 { + qps["page"] = page + } + case 1: + per_page, is_true = p.(int) + if !is_true { + return nil, errors.New("2nd parameter must be a per_page number (integer).") + } else if per_page > 0 { + qps["per_page"] = per_page + } + case 2: + sort, is_true = p.(string) + if !is_true { + return nil, errors.New("3rd parameter must be a sorting property string (e.g. 'name' or '-name').") + } else if sort != "" { + qps["sort"] = sort + } + case 3: + query, is_true = p.(string) + if !is_true { + return nil, errors.New("4th parameter must be a query string to look for the response.") + } else if query != "" { + qps["q"] = query + } + case 4: + fields, is_true = p.(string) + if !is_true { + return nil, errors.New("5th parameter must be fields properties string (e.g. 'id,name').") + } else if fields != "" { + qps["fields"] = fields + } + default: + return nil, errors.New("Wrong number of parameters.") + } + } + return qps, nil +} + +func processQueryParams(url string, args ...interface{}) (string, error) { + if len(args) > 0 { + params, err := makeParameterMap(args...) + if err != nil { + return "", err + } + url = appendQueryParams(url, params) + } + return url, nil +} + +func processQueryParamsExt(url string, period string, sd *time.Time, ed *time.Time, args ...interface{}) (string, error) { + var qm map[string]interface{} + var err error + if len(args) > 0 { + qm, err = makeParameterMap(args...) 
+ if err != nil { + return "", err + } + } else { + qm = make(map[string]interface{}, 3) + } + qm["period"] = period + if sd != nil && ed != nil { + if sd.After(*ed) { + return "", errors.New("Start date cannot be after end date.") + } + qm["start_date"] = sd.Format(time.RFC3339) + qm["end_date"] = ed.Format(time.RFC3339) + } + url = appendQueryParams(url, qm) + return url, nil +} + +func appendQueryParams(url string, params map[string]interface{}) string { + queryUrl, _ := p_url.Parse(url) + parameters := p_url.Values{} + for key, value := range params { + parameters.Add(key, fmt.Sprintf("%v", value)) + } + queryUrl.RawQuery = parameters.Encode() + return queryUrl.String() +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/roles.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/roles.go new file mode 100644 index 000000000..e8aa44fee --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/roles.go @@ -0,0 +1,595 @@ +package oneandone + +import "net/http" + +type Role struct { + Identity + descField + CreationDate string `json:"creation_date,omitempty"` + State string `json:"state,omitempty"` + Default *int `json:"default,omitempty"` + Permissions *Permissions `json:"permissions,omitempty"` + Users []Identity `json:"users,omitempty"` + ApiPtr +} + +type Permissions struct { + Backups *BackupPerm `json:"backups,omitempty"` + Firewalls *FirewallPerm `json:"firewall_policies,omitempty"` + Images *ImagePerm `json:"images,omitempty"` + Invoice *InvoicePerm `json:"interactive_invoices,omitempty"` + IPs *IPPerm `json:"public_ips,omitempty"` + LoadBalancers *LoadBalancerPerm `json:"load_balancers,omitempty"` + Logs *LogPerm `json:"logs,omitempty"` + MonitorCenter *MonitorCenterPerm `json:"monitoring_center,omitempty"` + MonitorPolicies *MonitorPolicyPerm `json:"monitoring_policies,omitempty"` + PrivateNetworks *PrivateNetworkPerm `json:"private_networks,omitempty"` + Roles *RolePerm `json:"roles,omitempty"` + Servers *ServerPerm 
`json:"servers,omitempty"` + SharedStorage *SharedStoragePerm `json:"shared_storages,omitempty"` + Usages *UsagePerm `json:"usages,omitempty"` + Users *UserPerm `json:"users,omitempty"` + VPNs *VPNPerm `json:"vpn,omitempty"` +} + +type BackupPerm struct { + Create bool `json:"create"` + Delete bool `json:"delete"` + Show bool `json:"show"` +} + +type FirewallPerm struct { + Clone bool `json:"clone"` + Create bool `json:"create"` + Delete bool `json:"delete"` + ManageAttachedServerIPs bool `json:"manage_attached_server_ips"` + ManageRules bool `json:"manage_rules"` + SetDescription bool `json:"set_description"` + SetName bool `json:"set_name"` + Show bool `json:"show"` +} + +type ImagePerm struct { + Create bool `json:"create"` + Delete bool `json:"delete"` + DisableAutoCreate bool `json:"disable_automatic_creation"` + SetDescription bool `json:"set_description"` + SetName bool `json:"set_name"` + Show bool `json:"show"` +} + +type InvoicePerm struct { + Show bool `json:"show"` +} + +type IPPerm struct { + Create bool `json:"create"` + Delete bool `json:"delete"` + Release bool `json:"release"` + SetReverseDNS bool `json:"set_reverse_dns"` + Show bool `json:"show"` +} + +type LoadBalancerPerm struct { + Create bool `json:"create"` + Delete bool `json:"delete"` + ManageAttachedServerIPs bool `json:"manage_attached_server_ips"` + ManageRules bool `json:"manage_rules"` + Modify bool `json:"modify"` + SetDescription bool `json:"set_description"` + SetName bool `json:"set_name"` + Show bool `json:"show"` +} + +type LogPerm struct { + Show bool `json:"show"` +} + +type MonitorCenterPerm struct { + Show bool `json:"show"` +} + +type MonitorPolicyPerm struct { + Clone bool `json:"clone"` + Create bool `json:"create"` + Delete bool `json:"delete"` + ManageAttachedServers bool `json:"manage_attached_servers"` + ManagePorts bool `json:"manage_ports"` + ManageProcesses bool `json:"manage_processes"` + ModifyResources bool `json:"modify_resources"` + SetDescription bool 
`json:"set_description"` + SetEmail bool `json:"set_email"` + SetName bool `json:"set_name"` + Show bool `json:"show"` +} + +type PrivateNetworkPerm struct { + Create bool `json:"create"` + Delete bool `json:"delete"` + ManageAttachedServers bool `json:"manage_attached_servers"` + SetDescription bool `json:"set_description"` + SetName bool `json:"set_name"` + SetNetworkInfo bool `json:"set_network_info"` + Show bool `json:"show"` +} + +type RolePerm struct { + Clone bool `json:"clone"` + Create bool `json:"create"` + Delete bool `json:"delete"` + ManageUsers bool `json:"manage_users"` + Modify bool `json:"modify"` + SetDescription bool `json:"set_description"` + SetName bool `json:"set_name"` + Show bool `json:"show"` +} + +type ServerPerm struct { + AccessKVMConsole bool `json:"access_kvm_console"` + AssignIP bool `json:"assign_ip"` + Clone bool `json:"clone"` + Create bool `json:"create"` + Delete bool `json:"delete"` + ManageDVD bool `json:"manage_dvd"` + ManageSnapshot bool `json:"manage_snapshot"` + Reinstall bool `json:"reinstall"` + Resize bool `json:"resize"` + Restart bool `json:"restart"` + SetDescription bool `json:"set_description"` + SetName bool `json:"set_name"` + Show bool `json:"show"` + Shutdown bool `json:"shutdown"` + Start bool `json:"start"` +} + +type SharedStoragePerm struct { + Access bool `json:"access"` + Create bool `json:"create"` + Delete bool `json:"delete"` + ManageAttachedServers bool `json:"manage_attached_servers"` + Resize bool `json:"resize"` + SetDescription bool `json:"set_description"` + SetName bool `json:"set_name"` + Show bool `json:"show"` +} + +type UsagePerm struct { + Show bool `json:"show"` +} + +type UserPerm struct { + ChangeRole bool `json:"change_role"` + Create bool `json:"create"` + Delete bool `json:"delete"` + Disable bool `json:"disable"` + Enable bool `json:"enable"` + ManageAPI bool `json:"manage_api"` + SetDescription bool `json:"set_description"` + SetEmail bool `json:"set_email"` + SetPassword bool 
`json:"set_password"` + Show bool `json:"show"` +} + +type VPNPerm struct { + Create bool `json:"create"` + Delete bool `json:"delete"` + DownloadFile bool `json:"download_file"` + SetDescription bool `json:"set_description"` + SetName bool `json:"set_name"` + Show bool `json:"show"` +} + +// GET /roles +func (api *API) ListRoles(args ...interface{}) ([]Role, error) { + url, err := processQueryParams(createUrl(api, rolePathSegment), args...) + if err != nil { + return nil, err + } + result := []Role{} + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + // range by index so the api pointer is set on the slice elements, not on copies + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// POST /roles +func (api *API) CreateRole(name string) (string, *Role, error) { + result := new(Role) + url := createUrl(api, rolePathSegment) + req := struct { + Name string `json:"name"` + }{name} + err := api.Client.Post(url, &req, &result, http.StatusCreated) + if err != nil { + return "", nil, err + } + result.api = api + return result.Id, result, nil +} + +// GET /roles/{role_id} +func (api *API) GetRole(role_id string) (*Role, error) { + result := new(Role) + url := createUrl(api, rolePathSegment, role_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// PUT /roles/{role_id} +func (api *API) ModifyRole(role_id string, name string, description string, state string) (*Role, error) { + result := new(Role) + url := createUrl(api, rolePathSegment, role_id) + req := struct { + Name string `json:"name,omitempty"` + Description string `json:"description,omitempty"` + State string `json:"state,omitempty"` + }{Name: name, Description: description, State: state} + err := api.Client.Put(url, &req, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /roles/{role_id} +func (api *API) DeleteRole(role_id string) (*Role, error) { + result := 
new(Role) + url := createUrl(api, rolePathSegment, role_id) + err := api.Client.Delete(url, nil, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /roles/{role_id}/permissions +func (api *API) GetRolePermissions(role_id string) (*Permissions, error) { + result := new(Permissions) + url := createUrl(api, rolePathSegment, role_id, "permissions") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// PUT /roles/{role_id}/permissions +func (api *API) ModifyRolePermissions(role_id string, perm *Permissions) (*Role, error) { + result := new(Role) + url := createUrl(api, rolePathSegment, role_id, "permissions") + err := api.Client.Put(url, &perm, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /roles/{role_id}/users +func (api *API) ListRoleUsers(role_id string) ([]Identity, error) { + result := []Identity{} + url := createUrl(api, rolePathSegment, role_id, "users") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// POST /roles/{role_id}/users +func (api *API) AssignRoleUsers(role_id string, user_ids []string) (*Role, error) { + result := new(Role) + url := createUrl(api, rolePathSegment, role_id, "users") + req := struct { + Users []string `json:"users"` + }{user_ids} + err := api.Client.Post(url, &req, &result, http.StatusCreated) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /roles/{role_id}/users/{user_id} +func (api *API) GetRoleUser(role_id string, user_id string) (*Identity, error) { + result := new(Identity) + url := createUrl(api, rolePathSegment, role_id, "users", user_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// DELETE /roles/{role_id}/users/{user_id} +func (api 
*API) RemoveRoleUser(role_id string, user_id string) (*Role, error) { + result := new(Role) + url := createUrl(api, rolePathSegment, role_id, "users", user_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// POST /roles/{role_id}/clone +func (api *API) CloneRole(role_id string, name string) (*Role, error) { + result := new(Role) + url := createUrl(api, rolePathSegment, role_id, "clone") + req := struct { + Name string `json:"name"` + }{name} + err := api.Client.Post(url, &req, &result, http.StatusCreated) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +func (role *Role) GetState() (string, error) { + in, err := role.api.GetRole(role.Id) + if in == nil { + return "", err + } + return in.State, err +} + +// Sets all backups' permissions +func (bp *BackupPerm) SetAll(value bool) { + bp.Create = value + bp.Delete = value + bp.Show = value +} + +// Sets all firewall policies' permissions +func (fp *FirewallPerm) SetAll(value bool) { + fp.Clone = value + fp.Create = value + fp.Delete = value + fp.ManageAttachedServerIPs = value + fp.ManageRules = value + fp.SetDescription = value + fp.SetName = value + fp.Show = value +} + +// Sets all images' permissions +func (imp *ImagePerm) SetAll(value bool) { + imp.Create = value + imp.Delete = value + imp.DisableAutoCreate = value + imp.SetDescription = value + imp.SetName = value + imp.Show = value +} + +// Sets all invoice's permissions +func (inp *InvoicePerm) SetAll(value bool) { + inp.Show = value +} + +// Sets all IPs' permissions +func (ipp *IPPerm) SetAll(value bool) { + ipp.Create = value + ipp.Delete = value + ipp.Release = value + ipp.SetReverseDNS = value + ipp.Show = value +} + +// Sets all load balancers' permissions +func (lbp *LoadBalancerPerm) SetAll(value bool) { + lbp.Create = value + lbp.Delete = value + lbp.ManageAttachedServerIPs = value + lbp.ManageRules = value + 
lbp.Modify = value + lbp.SetDescription = value + lbp.SetName = value + lbp.Show = value +} + +// Sets all logs' permissions +func (lp *LogPerm) SetAll(value bool) { + lp.Show = value +} + +// Sets all monitoring center's permissions +func (mcp *MonitorCenterPerm) SetAll(value bool) { + mcp.Show = value +} + +// Sets all monitoring policies' permissions +func (mpp *MonitorPolicyPerm) SetAll(value bool) { + mpp.Clone = value + mpp.Create = value + mpp.Delete = value + mpp.ManageAttachedServers = value + mpp.ManagePorts = value + mpp.ManageProcesses = value + mpp.ModifyResources = value + mpp.SetDescription = value + mpp.SetEmail = value + mpp.SetName = value + mpp.Show = value +} + +// Sets all private networks' permissions +func (pnp *PrivateNetworkPerm) SetAll(value bool) { + pnp.Create = value + pnp.Delete = value + pnp.ManageAttachedServers = value + pnp.SetDescription = value + pnp.SetName = value + pnp.SetNetworkInfo = value + pnp.Show = value +} + +// Sets all roles' permissions +func (rp *RolePerm) SetAll(value bool) { + rp.Clone = value + rp.Create = value + rp.Delete = value + rp.ManageUsers = value + rp.Modify = value + rp.SetDescription = value + rp.SetName = value + rp.Show = value +} + +// Sets all servers' permissions +func (sp *ServerPerm) SetAll(value bool) { + sp.AccessKVMConsole = value + sp.AssignIP = value + sp.Clone = value + sp.Create = value + sp.Delete = value + sp.ManageDVD = value + sp.ManageSnapshot = value + sp.Reinstall = value + sp.Resize = value + sp.Restart = value + sp.SetDescription = value + sp.SetName = value + sp.Show = value + sp.Shutdown = value + sp.Start = value +} + +// Sets all shared storages' permissions +func (ssp *SharedStoragePerm) SetAll(value bool) { + ssp.Access = value + ssp.Create = value + ssp.Delete = value + ssp.ManageAttachedServers = value + ssp.Resize = value + ssp.SetDescription = value + ssp.SetName = value + ssp.Show = value +} + +// Sets all usages' permissions +func (up *UsagePerm) SetAll(value bool) { 
+ up.Show = value +} + +// Sets all users' permissions +func (up *UserPerm) SetAll(value bool) { + up.ChangeRole = value + up.Create = value + up.Delete = value + up.Disable = value + up.Enable = value + up.ManageAPI = value + up.SetDescription = value + up.SetEmail = value + up.SetPassword = value + up.Show = value +} + +// Sets all VPNs' permissions +func (vpnp *VPNPerm) SetAll(value bool) { + vpnp.Create = value + vpnp.Delete = value + vpnp.DownloadFile = value + vpnp.SetDescription = value + vpnp.SetName = value + vpnp.Show = value +} + +// Sets all available permissions +func (p *Permissions) SetAll(v bool) { + if p.Backups == nil { + p.Backups = &BackupPerm{v, v, v} + } else { + p.Backups.SetAll(v) + } + if p.Firewalls == nil { + p.Firewalls = &FirewallPerm{v, v, v, v, v, v, v, v} + } else { + p.Firewalls.SetAll(v) + } + if p.Images == nil { + p.Images = &ImagePerm{v, v, v, v, v, v} + } else { + p.Images.SetAll(v) + } + if p.Invoice == nil { + p.Invoice = &InvoicePerm{v} + } else { + p.Invoice.SetAll(v) + } + if p.IPs == nil { + p.IPs = &IPPerm{v, v, v, v, v} + } else { + p.IPs.SetAll(v) + } + if p.LoadBalancers == nil { + p.LoadBalancers = &LoadBalancerPerm{v, v, v, v, v, v, v, v} + } else { + p.LoadBalancers.SetAll(v) + } + if p.Logs == nil { + p.Logs = &LogPerm{v} + } else { + p.Logs.SetAll(v) + } + if p.MonitorCenter == nil { + p.MonitorCenter = &MonitorCenterPerm{v} + } else { + p.MonitorCenter.SetAll(v) + } + if p.MonitorPolicies == nil { + p.MonitorPolicies = &MonitorPolicyPerm{v, v, v, v, v, v, v, v, v, v, v} + } else { + p.MonitorPolicies.SetAll(v) + } + if p.PrivateNetworks == nil { + p.PrivateNetworks = &PrivateNetworkPerm{v, v, v, v, v, v, v} + } else { + p.PrivateNetworks.SetAll(v) + } + if p.Roles == nil { + p.Roles = &RolePerm{v, v, v, v, v, v, v, v} + } else { + p.Roles.SetAll(v) + } + if p.Servers == nil { + p.Servers = &ServerPerm{v, v, v, v, v, v, v, v, v, v, v, v, v, v, v} + } else { + p.Servers.SetAll(v) + } + if p.SharedStorage == nil { 
+ p.SharedStorage = &SharedStoragePerm{v, v, v, v, v, v, v, v} + } else { + p.SharedStorage.SetAll(v) + } + if p.Usages == nil { + p.Usages = &UsagePerm{v} + } else { + p.Usages.SetAll(v) + } + if p.Users == nil { + p.Users = &UserPerm{v, v, v, v, v, v, v, v, v, v} + } else { + p.Users.SetAll(v) + } + if p.VPNs == nil { + p.VPNs = &VPNPerm{v, v, v, v, v, v} + } else { + p.VPNs.SetAll(v) + } +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/serverappliances.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/serverappliances.go new file mode 100644 index 000000000..03c45f3d8 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/serverappliances.go @@ -0,0 +1,48 @@ +package oneandone + +import "net/http" + +type ServerAppliance struct { + Identity + typeField + OsInstallBase string `json:"os_installation_base,omitempty"` + OsFamily string `json:"os_family,omitempty"` + Os string `json:"os,omitempty"` + OsVersion string `json:"os_version,omitempty"` + Version string `json:"version,omitempty"` + MinHddSize int `json:"min_hdd_size"` + Architecture interface{} `json:"os_architecture"` + Licenses interface{} `json:"licenses,omitempty"` + Categories []string `json:"categories,omitempty"` + // AvailableDatacenters []string `json:"available_datacenters,omitempty"` + ApiPtr +} + +// GET /server_appliances +func (api *API) ListServerAppliances(args ...interface{}) ([]ServerAppliance, error) { + url, err := processQueryParams(createUrl(api, serverAppliancePathSegment), args...) 
+ if err != nil { + return nil, err + } + res := []ServerAppliance{} + err = api.Client.Get(url, &res, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range res { + res[index].api = api + } + return res, nil +} + +// GET /server_appliances/{id} +func (api *API) GetServerAppliance(sa_id string) (*ServerAppliance, error) { + res := new(ServerAppliance) + url := createUrl(api, serverAppliancePathSegment, sa_id) + err := api.Client.Get(url, &res, http.StatusOK) + if err != nil { + return nil, err + } + // res.api = api + return res, nil +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/servers.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/servers.go new file mode 100644 index 000000000..18fad51a2 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/servers.go @@ -0,0 +1,808 @@ +package oneandone + +import ( + "encoding/json" + "errors" + "math/big" + "net/http" +) + +type Server struct { + ApiPtr + Identity + descField + CloudPanelId string `json:"cloudpanel_id,omitempty"` + CreationDate string `json:"creation_date,omitempty"` + FirstPassword string `json:"first_password,omitempty"` + Datacenter *Datacenter `json:"datacenter,omitempty"` + Status *Status `json:"status,omitempty"` + Hardware *Hardware `json:"hardware,omitempty"` + Image *Identity `json:"image,omitempty"` + Dvd *Identity `json:"dvd,omitempty"` + MonPolicy *Identity `json:"monitoring_policy,omitempty"` + Snapshot *ServerSnapshot `json:"snapshot,omitempty"` + Ips []ServerIp `json:"ips,omitempty"` + PrivateNets []Identity `json:"private_networks,omitempty"` + Alerts *ServerAlerts `json:"-"` + AlertsRaw *json.RawMessage `json:"alerts,omitempty"` +} + +type Hardware struct { + Vcores int `json:"vcore,omitempty"` + CoresPerProcessor int `json:"cores_per_processor"` + Ram float32 `json:"ram"` + Hdds []Hdd `json:"hdds,omitempty"` + FixedInsSizeId string `json:"fixed_instance_size_id,omitempty"` + ApiPtr +} + +type ServerHdds struct { + 
Hdds []Hdd `json:"hdds,omitempty"` +} + +type Hdd struct { + idField + Size int `json:"size,omitempty"` + IsMain bool `json:"is_main,omitempty"` + ApiPtr +} + +type serverDeployImage struct { + idField + Password string `json:"password,omitempty"` + Firewall *Identity `json:"firewall_policy,omitempty"` +} + +type ServerIp struct { + idField + typeField + Ip string `json:"ip,omitempty"` + ReverseDns string `json:"reverse_dns,omitempty"` + Firewall *Identity `json:"firewall_policy,omitempty"` + LoadBalancers []Identity `json:"load_balancers,omitempty"` + ApiPtr +} + +type ServerIpInfo struct { + idField // IP id + Ip string `json:"ip,omitempty"` + ServerName string `json:"server_name,omitempty"` +} + +type ServerSnapshot struct { + idField + CreationDate string `json:"creation_date,omitempty"` + DeletionDate string `json:"deletion_date,omitempty"` +} + +type ServerAlerts struct { + AlertSummary []serverAlertSummary + AlertDetails *serverAlertDetails +} + +type serverAlertSummary struct { + countField + typeField +} + +type serverAlertDetails struct { + Criticals []ServerAlert `json:"critical,omitempty"` + Warnings []ServerAlert `json:"warning,omitempty"` +} + +type ServerAlert struct { + typeField + descField + Date string `json:"date"` +} + +type ServerRequest struct { + Name string `json:"name,omitempty"` + Description string `json:"description,omitempty"` + Hardware Hardware `json:"hardware"` + ApplianceId string `json:"appliance_id,omitempty"` + Password string `json:"password,omitempty"` + PowerOn bool `json:"power_on"` + FirewallPolicyId string `json:"firewall_policy_id,omitempty"` + IpId string `json:"ip_id,omitempty"` + LoadBalancerId string `json:"load_balancer_id,omitempty"` + MonitoringPolicyId string `json:"monitoring_policy_id,omitempty"` + DatacenterId string `json:"datacenter_id,omitempty"` + SSHKey string `json:"rsa_key,omitempty"` +} + +type ServerAction struct { + Action string `json:"action,omitempty"` + Method string `json:"method,omitempty"` +} + 
+type FixedInstanceInfo struct { + Identity + Hardware *Hardware `json:"hardware,omitempty"` + ApiPtr +} + +// GET /servers +func (api *API) ListServers(args ...interface{}) ([]Server, error) { + url, err := processQueryParams(createUrl(api, serverPathSegment), args...) + if err != nil { + return nil, err + } + result := []Server{} + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + // range by index so the api pointer and decoded alerts land on the slice elements, not on copies + for index, _ := range result { + result[index].api = api + result[index].decodeRaws() + } + return result, nil +} + +// POST /servers +func (api *API) CreateServer(request *ServerRequest) (string, *Server, error) { + result := new(Server) + url := createUrl(api, serverPathSegment) + insert2map := func(hasht map[string]interface{}, key string, value string) { + if key != "" && value != "" { + hasht[key] = value + } + } + req := make(map[string]interface{}) + hw := make(map[string]interface{}) + req["name"] = request.Name + req["description"] = request.Description + req["appliance_id"] = request.ApplianceId + req["power_on"] = request.PowerOn + insert2map(req, "password", request.Password) + insert2map(req, "firewall_policy_id", request.FirewallPolicyId) + insert2map(req, "ip_id", request.IpId) + insert2map(req, "load_balancer_id", request.LoadBalancerId) + insert2map(req, "monitoring_policy_id", request.MonitoringPolicyId) + insert2map(req, "datacenter_id", request.DatacenterId) + insert2map(req, "rsa_key", request.SSHKey) + req["hardware"] = hw + if request.Hardware.FixedInsSizeId != "" { + hw["fixed_instance_size_id"] = request.Hardware.FixedInsSizeId + } else { + hw["vcore"] = request.Hardware.Vcores + hw["cores_per_processor"] = request.Hardware.CoresPerProcessor + hw["ram"] = request.Hardware.Ram + hw["hdds"] = request.Hardware.Hdds + } + err := api.Client.Post(url, &req, &result, http.StatusAccepted) + if err != nil { + return "", nil, err + } + result.api = api + result.decodeRaws() + return result.Id, result, nil +} + +// This is a wrapper function for `CreateServer` 
that returns the server's IP address and first password. +// The function waits at most `timeout` seconds for the server to be created. +// The initial `POST /servers` response does not contain the IP address, so we need to wait +// until the server is created. +func (api *API) CreateServerEx(request *ServerRequest, timeout int) (string, string, error) { + id, server, err := api.CreateServer(request) + if server != nil && err == nil { + count := timeout / 5 + if request.PowerOn { + err = api.WaitForState(server, "POWERED_ON", 5, count) + } else { + err = api.WaitForState(server, "POWERED_OFF", 5, count) + } + if err != nil { + return "", "", err + } + server, err := api.GetServer(id) + if server != nil && err == nil && server.Ips[0].Ip != "" { + if server.FirstPassword != "" { + return server.Ips[0].Ip, server.FirstPassword, nil + } + if request.Password != "" { + return server.Ips[0].Ip, request.Password, nil + } + // should never reach here + return "", "", errors.New("No server's password was found.") + } + } + return "", "", err +} + +// GET /servers/{id} +func (api *API) GetServer(server_id string) (*Server, error) { + result := new(Server) + url := createUrl(api, serverPathSegment, server_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// GET /servers/fixed_instance_sizes +func (api *API) ListFixedInstanceSizes() ([]FixedInstanceInfo, error) { + result := []FixedInstanceInfo{} + url := createUrl(api, serverPathSegment, "fixed_instance_sizes") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// GET /servers/fixed_instance_sizes/{fixed_instance_size_id} +func (api *API) GetFixedInstanceSize(fis_id string) (*FixedInstanceInfo, error) { + result := new(FixedInstanceInfo) + url := createUrl(api, serverPathSegment, 
"fixed_instance_sizes", fis_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /servers/{id} +func (api *API) DeleteServer(server_id string, keep_ips bool) (*Server, error) { + result := new(Server) + url := createUrl(api, serverPathSegment, server_id) + pm := make(map[string]interface{}, 1) + pm["keep_ips"] = keep_ips + url = appendQueryParams(url, pm) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// PUT /servers/{id} +func (api *API) RenameServer(server_id string, new_name string, new_desc string) (*Server, error) { + data := struct { + Name string `json:"name,omitempty"` + Description string `json:"description,omitempty"` + }{Name: new_name, Description: new_desc} + result := new(Server) + url := createUrl(api, serverPathSegment, server_id) + err := api.Client.Put(url, &data, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// GET /servers/{server_id}/hardware +func (api *API) GetServerHardware(server_id string) (*Hardware, error) { + result := new(Hardware) + url := createUrl(api, serverPathSegment, server_id, "hardware") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// PUT /servers/{server_id}/hardware +func (api *API) UpdateServerHardware(server_id string, hardware *Hardware) (*Server, error) { + var vc, cpp *int + var ram *float32 + if hardware.Vcores > 0 { + vc = new(int) + *vc = hardware.Vcores + } + if hardware.CoresPerProcessor > 0 { + cpp = new(int) + *cpp = hardware.CoresPerProcessor + } + if big.NewFloat(float64(hardware.Ram)).Cmp(big.NewFloat(0)) != 0 { + ram = new(float32) + *ram = hardware.Ram + } + req := struct { + VCores *int 
`json:"vcore,omitempty"` + Cpp *int `json:"cores_per_processor,omitempty"` + Ram *float32 `json:"ram,omitempty"` + Flavor string `json:"fixed_instance_size_id,omitempty"` + }{VCores: vc, Cpp: cpp, Ram: ram, Flavor: hardware.FixedInsSizeId} + + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "hardware") + err := api.Client.Put(url, &req, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// GET /servers/{id}/hardware/hdds +func (api *API) ListServerHdds(server_id string) ([]Hdd, error) { + result := []Hdd{} + url := createUrl(api, serverPathSegment, server_id, "hardware/hdds") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// POST /servers/{id}/hardware/hdds +func (api *API) AddServerHdds(server_id string, hdds *ServerHdds) (*Server, error) { + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "hardware/hdds") + err := api.Client.Post(url, &hdds, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// GET /servers/{id}/hardware/hdds/{id} +func (api *API) GetServerHdd(server_id string, hdd_id string) (*Hdd, error) { + result := new(Hdd) + url := createUrl(api, serverPathSegment, server_id, "hardware/hdds", hdd_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /servers/{id}/hardware/hdds/{id} +func (api *API) DeleteServerHdd(server_id string, hdd_id string) (*Server, error) { + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "hardware/hdds", hdd_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + 
result.decodeRaws() + return result, nil +} + +// PUT /servers/{id}/hardware/hdds/{id} +func (api *API) ResizeServerHdd(server_id string, hdd_id string, new_size int) (*Server, error) { + data := Hdd{Size: new_size} + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "hardware/hdds", hdd_id) + err := api.Client.Put(url, &data, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// GET /servers/{id}/image +func (api *API) GetServerImage(server_id string) (*Identity, error) { + result := new(Identity) + url := createUrl(api, serverPathSegment, server_id, "image") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// PUT /servers/{id}/image +func (api *API) ReinstallServerImage(server_id string, image_id string, password string, fp_id string) (*Server, error) { + data := new(serverDeployImage) + data.Id = image_id + data.Password = password + if fp_id != "" { + fp := new(Identity) + fp.Id = fp_id + data.Firewall = fp + } + + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "image") + err := api.Client.Put(url, &data, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// GET /servers/{id}/ips +func (api *API) ListServerIps(server_id string) ([]ServerIp, error) { + result := []ServerIp{} + url := createUrl(api, serverPathSegment, server_id, "ips") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// POST /servers/{id}/ips +func (api *API) AssignServerIp(server_id string, ip_type string) (*Server, error) { + data := typeField{Type: ip_type} + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "ips") + err := 
api.Client.Post(url, &data, &result, http.StatusCreated) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// GET /servers/{id}/ips/{id} +func (api *API) GetServerIp(server_id string, ip_id string) (*ServerIp, error) { + result := new(ServerIp) + url := createUrl(api, serverPathSegment, server_id, "ips", ip_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /servers/{id}/ips/{id} +func (api *API) DeleteServerIp(server_id string, ip_id string, keep_ip bool) (*Server, error) { + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "ips", ip_id) + qm := make(map[string]interface{}, 1) + qm["keep_ip"] = keep_ip + url = appendQueryParams(url, qm) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /servers/{id}/status +func (api *API) GetServerStatus(server_id string) (*Status, error) { + result := new(Status) + url := createUrl(api, serverPathSegment, server_id, "status") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// PUT /servers/{id}/status/action (action = REBOOT) +func (api *API) RebootServer(server_id string, is_hardware bool) (*Server, error) { + result := new(Server) + request := ServerAction{} + request.Action = "REBOOT" + if is_hardware { + request.Method = "HARDWARE" + } else { + request.Method = "SOFTWARE" + } + url := createUrl(api, serverPathSegment, server_id, "status", "action") + err := api.Client.Put(url, &request, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// PUT /servers/{id}/status/action (action = POWER_OFF) +func (api *API) ShutdownServer(server_id string, is_hardware bool) (*Server, error) { + 
result := new(Server) + request := ServerAction{} + request.Action = "POWER_OFF" + if is_hardware { + request.Method = "HARDWARE" + } else { + request.Method = "SOFTWARE" + } + url := createUrl(api, serverPathSegment, server_id, "status", "action") + err := api.Client.Put(url, &request, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// PUT /servers/{id}/status/action (action = POWER_ON) +func (api *API) StartServer(server_id string) (*Server, error) { + result := new(Server) + request := ServerAction{} + request.Action = "POWER_ON" + url := createUrl(api, serverPathSegment, server_id, "status", "action") + err := api.Client.Put(url, &request, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// GET /servers/{id}/dvd +func (api *API) GetServerDvd(server_id string) (*Identity, error) { + result := new(Identity) + url := createUrl(api, serverPathSegment, server_id, "dvd") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// DELETE /servers/{id}/dvd +func (api *API) EjectServerDvd(server_id string) (*Server, error) { + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "dvd") + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// PUT /servers/{id}/dvd +func (api *API) LoadServerDvd(server_id string, dvd_id string) (*Server, error) { + request := Identity{} + request.Id = dvd_id + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "dvd") + err := api.Client.Put(url, &request, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// GET /servers/{id}/private_networks +func 
(api *API) ListServerPrivateNetworks(server_id string) ([]Identity, error) { + result := []Identity{} + url := createUrl(api, serverPathSegment, server_id, "private_networks") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// POST /servers/{id}/private_networks +func (api *API) AssignServerPrivateNetwork(server_id string, pn_id string) (*Server, error) { + req := new(Identity) + req.Id = pn_id + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "private_networks") + err := api.Client.Post(url, &req, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// GET /servers/{id}/private_networks/{id} +func (api *API) GetServerPrivateNetwork(server_id string, pn_id string) (*PrivateNetwork, error) { + result := new(PrivateNetwork) + url := createUrl(api, serverPathSegment, server_id, "private_networks", pn_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /servers/{id}/private_networks/{id} +func (api *API) RemoveServerPrivateNetwork(server_id string, pn_id string) (*Server, error) { + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "private_networks", pn_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// GET /servers/{server_id}/ips/{ip_id}/load_balancers +func (api *API) ListServerIpLoadBalancers(server_id string, ip_id string) ([]Identity, error) { + result := []Identity{} + url := createUrl(api, serverPathSegment, server_id, "ips", ip_id, "load_balancers") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// POST /servers/{server_id}/ips/{ip_id}/load_balancers 
+func (api *API) AssignServerIpLoadBalancer(server_id string, ip_id string, lb_id string) (*Server, error) { + req := struct { + LbId string `json:"load_balancer_id"` + }{lb_id} + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "ips", ip_id, "load_balancers") + err := api.Client.Post(url, &req, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// DELETE /servers/{server_id}/ips/{ip_id}/load_balancers +func (api *API) UnassignServerIpLoadBalancer(server_id string, ip_id string, lb_id string) (*Server, error) { + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "ips", ip_id, "load_balancers", lb_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// GET /servers/{server_id}/ips/{ip_id}/firewall_policy +func (api *API) GetServerIpFirewallPolicy(server_id string, ip_id string) (*Identity, error) { + result := new(Identity) + url := createUrl(api, serverPathSegment, server_id, "ips", ip_id, "firewall_policy") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// PUT /servers/{server_id}/ips/{ip_id}/firewall_policy +func (api *API) AssignServerIpFirewallPolicy(server_id string, ip_id string, fp_id string) (*Server, error) { + req := idField{fp_id} + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "ips", ip_id, "firewall_policy") + err := api.Client.Put(url, &req, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// DELETE /servers/{server_id}/ips/{ip_id}/firewall_policy +func (api *API) UnassignServerIpFirewallPolicy(server_id string, ip_id string) (*Server, error) { + result := new(Server) + url := createUrl(api, 
serverPathSegment, server_id, "ips", ip_id, "firewall_policy") + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// GET /servers/{id}/snapshots +func (api *API) GetServerSnapshot(server_id string) (*ServerSnapshot, error) { + result := new(ServerSnapshot) + url := createUrl(api, serverPathSegment, server_id, "snapshots") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// POST /servers/{id}/snapshots +func (api *API) CreateServerSnapshot(server_id string) (*Server, error) { + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "snapshots") + err := api.Client.Post(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// PUT /servers/{server_id}/snapshots/{snapshot_id} +func (api *API) RestoreServerSnapshot(server_id string, snapshot_id string) (*Server, error) { + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "snapshots", snapshot_id) + err := api.Client.Put(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// DELETE /servers/{server_id}/snapshots/{snapshot_id} +func (api *API) DeleteServerSnapshot(server_id string, snapshot_id string) (*Server, error) { + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "snapshots", snapshot_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +// POST /servers/{server_id}/clone +func (api *API) CloneServer(server_id string, new_name string, datacenter_id string) (*Server, error) { + data := struct { + Name string `json:"name"` + DatacenterId 
string `json:"datacenter_id,omitempty"` + }{Name: new_name, DatacenterId: datacenter_id} + result := new(Server) + url := createUrl(api, serverPathSegment, server_id, "clone") + err := api.Client.Post(url, &data, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + result.decodeRaws() + return result, nil +} + +func (s *Server) GetState() (string, error) { + st, err := s.api.GetServerStatus(s.Id) + if st == nil { + return "", err + } + return st.State, err +} + +func (server *Server) decodeRaws() { + if server.AlertsRaw != nil { + server.Alerts = new(ServerAlerts) + var sad serverAlertDetails + if err := json.Unmarshal(*server.AlertsRaw, &sad); err == nil { + server.Alerts.AlertDetails = &sad + return + } + var sams []serverAlertSummary + if err := json.Unmarshal(*server.AlertsRaw, &sams); err == nil { + server.Alerts.AlertSummary = sams + } + } +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/setup.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/setup.go new file mode 100644 index 000000000..7d910c653 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/setup.go @@ -0,0 +1,19 @@ +package oneandone + +// The base url for 1&1 Cloud Server REST API. +var BaseUrl = "https://cloudpanel-api.1and1.com/v1" + +// Authentication token +var Token string + +// SetBaseUrl is intended to set the REST base url. BaseUrl is declared in setup.go +func SetBaseUrl(newbaseurl string) string { + BaseUrl = newbaseurl + return BaseUrl +} + +// SetToken is used to set authentication Token for the REST service. 
Token is declared in setup.go +func SetToken(newtoken string) string { + Token = newtoken + return Token +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/sharedstorages.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/sharedstorages.go new file mode 100644 index 000000000..fdb2a7bfd --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/sharedstorages.go @@ -0,0 +1,190 @@ +package oneandone + +import ( + "net/http" +) + +type SharedStorage struct { + Identity + descField + Size int `json:"size"` + MinSizeAllowed int `json:"minimum_size_allowed"` + SizeUsed string `json:"size_used,omitempty"` + State string `json:"state,omitempty"` + CloudPanelId string `json:"cloudpanel_id,omitempty"` + SiteId string `json:"site_id,omitempty"` + CifsPath string `json:"cifs_path,omitempty"` + NfsPath string `json:"nfs_path,omitempty"` + CreationDate string `json:"creation_date,omitempty"` + Servers []SharedStorageServer `json:"servers,omitempty"` + Datacenter *Datacenter `json:"datacenter,omitempty"` + ApiPtr +} + +type SharedStorageServer struct { + Id string `json:"id,omitempty"` + Name string `json:"name,omitempty"` + Rights string `json:"rights,omitempty"` +} + +type SharedStorageRequest struct { + DatacenterId string `json:"datacenter_id,omitempty"` + Name string `json:"name,omitempty"` + Description string `json:"description,omitempty"` + Size *int `json:"size"` +} + +type SharedStorageAccess struct { + State string `json:"state,omitempty"` + KerberosContentFile string `json:"kerberos_content_file,omitempty"` + UserDomain string `json:"user_domain,omitempty"` + SiteId string `json:"site_id,omitempty"` + NeedsPasswordReset int `json:"needs_password_reset"` +} + +// GET /shared_storages +func (api *API) ListSharedStorages(args ...interface{}) ([]SharedStorage, error) { + url, err := processQueryParams(createUrl(api, sharedStoragePathSegment), args...) 
+ if err != nil { + return nil, err + } + result := []SharedStorage{} + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// POST /shared_storages +func (api *API) CreateSharedStorage(request *SharedStorageRequest) (string, *SharedStorage, error) { + result := new(SharedStorage) + url := createUrl(api, sharedStoragePathSegment) + err := api.Client.Post(url, request, &result, http.StatusAccepted) + if err != nil { + return "", nil, err + } + result.api = api + return result.Id, result, nil +} + +// GET /shared_storages/{id} +func (api *API) GetSharedStorage(ss_id string) (*SharedStorage, error) { + result := new(SharedStorage) + url := createUrl(api, sharedStoragePathSegment, ss_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /shared_storages/{id} +func (api *API) DeleteSharedStorage(ss_id string) (*SharedStorage, error) { + result := new(SharedStorage) + url := createUrl(api, sharedStoragePathSegment, ss_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// PUT /shared_storages/{id} +func (api *API) UpdateSharedStorage(ss_id string, request *SharedStorageRequest) (*SharedStorage, error) { + result := new(SharedStorage) + url := createUrl(api, sharedStoragePathSegment, ss_id) + err := api.Client.Put(url, &request, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /shared_storages/{id}/servers +func (api *API) ListSharedStorageServers(st_id string) ([]SharedStorageServer, error) { + result := []SharedStorageServer{} + url := createUrl(api, sharedStoragePathSegment, st_id, "servers") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return 
nil, err + } + return result, nil +} + +// POST /shared_storages/{id}/servers +func (api *API) AddSharedStorageServers(st_id string, servers []SharedStorageServer) (*SharedStorage, error) { + result := new(SharedStorage) + req := struct { + Servers []SharedStorageServer `json:"servers"` + }{servers} + url := createUrl(api, sharedStoragePathSegment, st_id, "servers") + err := api.Client.Post(url, &req, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /shared_storages/{id}/servers/{id} +func (api *API) GetSharedStorageServer(st_id string, ser_id string) (*SharedStorageServer, error) { + result := new(SharedStorageServer) + url := createUrl(api, sharedStoragePathSegment, st_id, "servers", ser_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// DELETE /shared_storages/{id}/servers/{id} +func (api *API) DeleteSharedStorageServer(st_id string, ser_id string) (*SharedStorage, error) { + result := new(SharedStorage) + url := createUrl(api, sharedStoragePathSegment, st_id, "servers", ser_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /shared_storages/access +func (api *API) GetSharedStorageCredentials() ([]SharedStorageAccess, error) { + result := []SharedStorageAccess{} + url := createUrl(api, sharedStoragePathSegment, "access") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// PUT /shared_storages/access +func (api *API) UpdateSharedStorageCredentials(new_pass string) ([]SharedStorageAccess, error) { + result := []SharedStorageAccess{} + req := struct { + Password string `json:"password"` + }{new_pass} + url := createUrl(api, sharedStoragePathSegment, "access") + err := api.Client.Put(url, &req, &result, http.StatusAccepted) + if err != 
nil { + return nil, err + } + return result, nil +} + +func (ss *SharedStorage) GetState() (string, error) { + in, err := ss.api.GetSharedStorage(ss.Id) + if in == nil { + return "", err + } + return in.State, err +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/usages.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/usages.go new file mode 100644 index 000000000..e56c9f2ef --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/usages.go @@ -0,0 +1,52 @@ +package oneandone + +import ( + "net/http" + "time" +) + +type Usages struct { + Images []usage `json:"IMAGES,omitempty"` + LoadBalancers []usage `json:"LOAD BALANCERS,omitempty"` + PublicIPs []usage `json:"PUBLIC IP,omitempty"` + Servers []usage `json:"SERVERS,omitempty"` + SharedStorages []usage `json:"SHARED STORAGE,omitempty"` + ApiPtr +} + +type usage struct { + Identity + Site int `json:"site"` + Services []usageService `json:"services,omitempty"` +} + +type usageService struct { + AverageAmmount string `json:"avg_amount,omitempty"` + Unit string `json:"unit,omitempty"` + Usage int `json:"usage"` + Details []usageDetails `json:"detail,omitempty"` + typeField +} + +type usageDetails struct { + AverageAmmount string `json:"avg_amount,omitempty"` + StartDate string `json:"start_date,omitempty"` + EndDate string `json:"end_date,omitempty"` + Unit string `json:"unit,omitempty"` + Usage int `json:"usage,omitempty"` +} + +// GET /usages +func (api *API) ListUsages(period string, sd *time.Time, ed *time.Time, args ...interface{}) (*Usages, error) { + result := new(Usages) + url, err := processQueryParamsExt(createUrl(api, usagePathSegment), period, sd, ed, args...) 
+ if err != nil { + return nil, err + } + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/users.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/users.go new file mode 100644 index 000000000..782d07a50 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/users.go @@ -0,0 +1,205 @@ +package oneandone + +import "net/http" + +type User struct { + Identity + descField + CreationDate string `json:"creation_date,omitempty"` + Email string `json:"email,omitempty"` + State string `json:"state,omitempty"` + Role *Identity `json:"role,omitempty"` + Api *UserApi `json:"api,omitempty"` + ApiPtr +} + +type UserApi struct { + Active bool `json:"active"` + AllowedIps []string `json:"allowed_ips,omitempty"` + UserApiKey + ApiPtr +} + +type UserApiKey struct { + Key string `json:"key,omitempty"` +} + +type UserRequest struct { + Name string `json:"name,omitempty"` + Description string `json:"description,omitempty"` + Password string `json:"password,omitempty"` + Email string `json:"email,omitempty"` + State string `json:"state,omitempty"` +} + +// GET /users +func (api *API) ListUsers(args ...interface{}) ([]User, error) { + url, err := processQueryParams(createUrl(api, userPathSegment), args...) 
+ if err != nil { + return nil, err + } + result := []User{} + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index, _ := range result { + result[index].api = api + } + return result, nil +} + +// POST /users +func (api *API) CreateUser(user *UserRequest) (string, *User, error) { + result := new(User) + url := createUrl(api, userPathSegment) + err := api.Client.Post(url, &user, &result, http.StatusCreated) + if err != nil { + return "", nil, err + } + result.api = api + return result.Id, result, nil +} + +// GET /users/{id} +func (api *API) GetUser(user_id string) (*User, error) { + result := new(User) + url := createUrl(api, userPathSegment, user_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /users/{id} +func (api *API) DeleteUser(user_id string) (*User, error) { + result := new(User) + url := createUrl(api, userPathSegment, user_id) + err := api.Client.Delete(url, nil, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// PUT /users/{id} +func (api *API) ModifyUser(user_id string, user *UserRequest) (*User, error) { + result := new(User) + url := createUrl(api, userPathSegment, user_id) + err := api.Client.Put(url, &user, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /users/{id}/api +func (api *API) GetUserApi(user_id string) (*UserApi, error) { + result := new(UserApi) + url := createUrl(api, userPathSegment, user_id, "api") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// PUT /users/{id}/api +func (api *API) ModifyUserApi(user_id string, active bool) (*User, error) { + result := new(User) + req := struct { + Active bool `json:"active"` + }{active} + url := createUrl(api, userPathSegment, 
user_id, "api") + err := api.Client.Put(url, &req, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /users/{id}/api/key +func (api *API) GetUserApiKey(user_id string) (*UserApiKey, error) { + result := new(UserApiKey) + url := createUrl(api, userPathSegment, user_id, "api/key") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// PUT /users/{id}/api/key +func (api *API) RenewUserApiKey(user_id string) (*User, error) { + result := new(User) + url := createUrl(api, userPathSegment, user_id, "api/key") + err := api.Client.Put(url, nil, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /users/{id}/api/ips +func (api *API) ListUserApiAllowedIps(user_id string) ([]string, error) { + result := []string{} + url := createUrl(api, userPathSegment, user_id, "api/ips") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +// POST /users/{id}/api/ips +func (api *API) AddUserApiAlowedIps(user_id string, ips []string) (*User, error) { + result := new(User) + req := struct { + Ips []string `json:"ips"` + }{ips} + url := createUrl(api, userPathSegment, user_id, "api/ips") + err := api.Client.Post(url, &req, &result, http.StatusCreated) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /users/{id}/api/ips/{ip} +func (api *API) RemoveUserApiAllowedIp(user_id string, ip string) (*User, error) { + result := new(User) + url := createUrl(api, userPathSegment, user_id, "api/ips", ip) + err := api.Client.Delete(url, nil, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /users/current_user_permissions +func (api *API) GetCurrentUserPermissions() (*Permissions, error) { + result := new(Permissions) + url := createUrl(api,
userPathSegment, "current_user_permissions") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + return result, nil +} + +func (u *User) GetState() (string, error) { + in, err := u.api.GetUser(u.Id) + if in == nil { + return "", err + } + return in.State, err +} diff --git a/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/vpns.go b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/vpns.go new file mode 100644 index 000000000..723a85459 --- /dev/null +++ b/vendor/github.com/1and1/oneandone-cloudserver-sdk-go/vpns.go @@ -0,0 +1,114 @@ +package oneandone + +import "net/http" + +type VPN struct { + Identity + descField + typeField + CloudPanelId string `json:"cloudpanel_id,omitempty"` + CreationDate string `json:"creation_date,omitempty"` + State string `json:"state,omitempty"` + IPs []string `json:"ips,omitempty"` + Datacenter *Datacenter `json:"datacenter,omitempty"` + ApiPtr +} + +type configZipFile struct { + Base64String string `json:"config_zip_file"` +} + +// GET /vpns +func (api *API) ListVPNs(args ...interface{}) ([]VPN, error) { + url, err := processQueryParams(createUrl(api, vpnPathSegment), args...) 
+ if err != nil { + return nil, err + } + result := []VPN{} + err = api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + for index := range result { + result[index].api = api + } + return result, nil +} + +// POST /vpns +func (api *API) CreateVPN(name string, description string, datacenter_id string) (string, *VPN, error) { + res := new(VPN) + url := createUrl(api, vpnPathSegment) + req := struct { + Name string `json:"name"` + Description string `json:"description,omitempty"` + DatacenterId string `json:"datacenter_id,omitempty"` + }{Name: name, Description: description, DatacenterId: datacenter_id} + err := api.Client.Post(url, &req, &res, http.StatusAccepted) + if err != nil { + return "", nil, err + } + res.api = api + return res.Id, res, nil +} + +// GET /vpns/{vpn_id} +func (api *API) GetVPN(vpn_id string) (*VPN, error) { + result := new(VPN) + url := createUrl(api, vpnPathSegment, vpn_id) + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// PUT /vpns/{vpn_id} +func (api *API) ModifyVPN(vpn_id string, name string, description string) (*VPN, error) { + result := new(VPN) + url := createUrl(api, vpnPathSegment, vpn_id) + req := struct { + Name string `json:"name,omitempty"` + Description string `json:"description,omitempty"` + }{Name: name, Description: description} + err := api.Client.Put(url, &req, &result, http.StatusOK) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// DELETE /vpns/{vpn_id} +func (api *API) DeleteVPN(vpn_id string) (*VPN, error) { + result := new(VPN) + url := createUrl(api, vpnPathSegment, vpn_id) + err := api.Client.Delete(url, nil, &result, http.StatusAccepted) + if err != nil { + return nil, err + } + result.api = api + return result, nil +} + +// GET /vpns/{vpn_id}/configuration_file +// Returns VPN configuration files (in a zip archive) as a base64 encoded string +func (api *API)
GetVPNConfigFile(vpn_id string) (string, error) { + result := new(configZipFile) + url := createUrl(api, vpnPathSegment, vpn_id, "configuration_file") + err := api.Client.Get(url, &result, http.StatusOK) + if err != nil { + return "", err + } + + return result.Base64String, nil +} + +func (vpn *VPN) GetState() (string, error) { + in, err := vpn.api.GetVPN(vpn.Id) + if in == nil { + return "", err + } + return in.State, err +} diff --git a/vendor/github.com/hashicorp/go-retryablehttp/client.go b/vendor/github.com/hashicorp/go-retryablehttp/client.go index d0ec6b2ab..198779bdf 100644 --- a/vendor/github.com/hashicorp/go-retryablehttp/client.go +++ b/vendor/github.com/hashicorp/go-retryablehttp/client.go @@ -32,8 +32,8 @@ import ( var ( // Default retry configuration defaultRetryWaitMin = 1 * time.Second - defaultRetryWaitMax = 5 * time.Minute - defaultRetryMax = 32 + defaultRetryWaitMax = 30 * time.Second + defaultRetryMax = 4 // defaultClient is used for performing requests without explicitly making // a new client. It is purposely private to avoid modifications. 
diff --git a/vendor/vendor.json b/vendor/vendor.json index f56d8a8bf..e180c00f2 100644 --- a/vendor/vendor.json +++ b/vendor/vendor.json @@ -14,6 +14,12 @@ "revision": "81b7822b1e798e8f17bf64b59512a5be4097e966", "revisionTime": "2017-01-18T16:13:56Z" }, + { + "checksumSHA1": "aABATU51PlDHfGeSe5cc9udwSXg=", + "path": "github.com/1and1/oneandone-cloudserver-sdk-go", + "revision": "5678f03fc801525df794f953aa82f5ad7555a2ef", + "revisionTime": "2016-08-11T22:04:02Z" + }, { "checksumSHA1": "N92Zji40JkCHAnsCNHTP4iKPz88=", "comment": "v2.1.1-beta-8-gca4d906", @@ -2017,10 +2023,10 @@ "revisionTime": "2017-02-17T16:27:05Z" }, { - "checksumSHA1": "GBDE1KDl/7c5hlRPYRZ7+C0WQ0g=", + "checksumSHA1": "ErJHGU6AVPZM9yoY/xV11TwSjQs=", "path": "github.com/hashicorp/go-retryablehttp", - "revision": "f4ed9b0fa01a2ac614afe7c897ed2e3d8208f3e8", - "revisionTime": "2016-08-10T17:22:55Z" + "revision": "6e85be8fee1dcaa02c0eaaac2df5a8fbecf94145", + "revisionTime": "2016-09-30T03:51:02Z" }, { "checksumSHA1": "A1PcINvF3UiwHRKn8UcgARgvGRs=", diff --git a/website/source/docs/commands/init.html.markdown b/website/source/docs/commands/init.html.markdown index 1222b46c7..57aeaed89 100644 --- a/website/source/docs/commands/init.html.markdown +++ b/website/source/docs/commands/init.html.markdown @@ -49,6 +49,9 @@ The command-line flags are all optional. The list of available flags are: for the backend. This can be specified multiple times. Flags specified later in the line override those specified earlier if they conflict. +* `-force-copy` - Suppress prompts about copying state data. This is equivalent + to providing a "yes" to all confirmation prompts. + * `-get=true` - Download any modules for this configuration. * `-input=true` - Ask for input interactively if necessary. If this is false @@ -60,8 +63,7 @@ The command-line flags are all optional. The list of available flags are: * `-no-color` - If specified, output won't contain any color. -* `-force-copy` - Suppress prompts about copying state data. 
This is equivalent - to providing a "yes" to all confirmation prompts. +* `-reconfigure` - Reconfigure the backend, ignoring any saved configuration. ## Backend Config diff --git a/website/source/docs/configuration/interpolation.html.md b/website/source/docs/configuration/interpolation.html.md index 1b4b69c5c..aef902d6e 100644 --- a/website/source/docs/configuration/interpolation.html.md +++ b/website/source/docs/configuration/interpolation.html.md @@ -179,6 +179,9 @@ The supported built-in functions are: * `coalesce(string1, string2, ...)` - Returns the first non-empty value from the given arguments. At least two arguments must be provided. + * `coalescelist(list1, list2, ...)` - Returns the first non-empty list from + the given arguments. At least two arguments must be provided. + * `compact(list)` - Removes empty string elements from a list. This can be useful in some cases, for example when passing joined lists as module variables or when parsing module outputs. @@ -208,6 +211,15 @@ The supported built-in functions are: module, you generally want to make the path relative to the module base, like this: `file("${path.module}/file")`. + * `matchkeys(values, keys, searchset)` - For two lists `values` and `keys` of + equal length, returns all elements from `values` where the corresponding + element from `keys` exists in the `searchset` list. E.g. + `matchkeys(aws_instance.example.*.id, + aws_instance.example.*.availability_zone, list("us-west-2a"))` will return a + list of the instance IDs of the `aws_instance.example` instances in + `"us-west-2a"`. No match will result in an empty list. Items of `keys` are + processed sequentially, so the order of returned `values` is preserved. + * `floor(float)` - Returns the greatest integer value less than or equal to the argument.
diff --git a/website/source/docs/providers/aws/d/ami.html.markdown b/website/source/docs/providers/aws/d/ami.html.markdown index a3dae6508..a91c5dbc2 100644 --- a/website/source/docs/providers/aws/d/ami.html.markdown +++ b/website/source/docs/providers/aws/d/ami.html.markdown @@ -59,7 +59,8 @@ options to narrow down the list AWS returns. ~> **NOTE:** If more or less than a single match is returned by the search, Terraform will fail. Ensure that your search is specific enough to return -a single AMI ID only, or use `most_recent` to choose the most recent one. +a single AMI ID only, or use `most_recent` to choose the most recent one. If +you want to match multiple AMIs, use the `aws_ami_ids` data source instead. ## Attributes Reference diff --git a/website/source/docs/providers/aws/d/ami_ids.html.markdown b/website/source/docs/providers/aws/d/ami_ids.html.markdown new file mode 100644 index 000000000..526977bdc --- /dev/null +++ b/website/source/docs/providers/aws/d/ami_ids.html.markdown @@ -0,0 +1,51 @@ +--- +layout: "aws" +page_title: "AWS: aws_ami_ids" +sidebar_current: "docs-aws-datasource-ami-ids" +description: |- + Provides a list of AMI IDs. +--- + +# aws\_ami_ids + +Use this data source to get a list of AMI IDs matching the specified criteria. + +## Example Usage + +```hcl +data "aws_ami_ids" "ubuntu" { + owners = ["099720109477"] + + filter { + name = "name" + values = ["ubuntu/images/ubuntu-*-*-amd64-server-*"] + } +} +``` + +## Argument Reference + +* `executable_users` - (Optional) Limit search to users with *explicit* launch +permission on the image. Valid items are the numeric account ID or `self`. + +* `filter` - (Optional) One or more name/value pairs to filter off of. There +are several valid keys, for a full reference, check out +[describe-images in the AWS CLI reference][1]. + +* `owners` - (Optional) Limit search to specific AMI owners. Valid items are +the numeric account ID, `amazon`, or `self`. 
+ +* `name_regex` - (Optional) A regex string to apply to the AMI list returned +by AWS. This allows more advanced filtering not supported from the AWS API. +This filtering is done locally on what AWS returns, and could have a performance +impact if the result is large. It is recommended to combine this with other +options to narrow down the list AWS returns. + +~> **NOTE:** At least one of `executable_users`, `filter`, `owners` or +`name_regex` must be specified. + +## Attributes Reference + +`ids` is set to the list of AMI IDs. + +[1]: http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-images.html diff --git a/website/source/docs/providers/aws/d/db_instance.html.markdown b/website/source/docs/providers/aws/d/db_instance.html.markdown index 01cfec43f..a7d3c0788 100644 --- a/website/source/docs/providers/aws/d/db_instance.html.markdown +++ b/website/source/docs/providers/aws/d/db_instance.html.markdown @@ -61,3 +61,4 @@ The following attributes are exported: * `storage_type` - Specifies the storage type associated with DB instance. * `timezone` - The time zone of the DB instance. * `vpc_security_groups` - Provides a list of VPC security group elements that the DB instance belongs to. +* `replicate_source_db` - The identifier of the source DB that this is a replica of. diff --git a/website/source/docs/providers/aws/d/ebs_snapshot_ids.html.markdown b/website/source/docs/providers/aws/d/ebs_snapshot_ids.html.markdown new file mode 100644 index 000000000..6d4ef617d --- /dev/null +++ b/website/source/docs/providers/aws/d/ebs_snapshot_ids.html.markdown @@ -0,0 +1,48 @@ +--- +layout: "aws" +page_title: "AWS: aws_ebs_snapshot_ids" +sidebar_current: "docs-aws-datasource-ebs-snapshot-ids" +description: |- + Provides a list of EBS snapshot IDs. +--- + +# aws\_ebs\_snapshot\_ids + +Use this data source to get a list of EBS Snapshot IDs matching the specified +criteria. 
+ +## Example Usage + +```hcl +data "aws_ebs_snapshot_ids" "ebs_volumes" { + owners = ["self"] + + filter { + name = "volume-size" + values = ["40"] + } + + filter { + name = "tag:Name" + values = ["Example"] + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `owners` - (Optional) Returns the snapshots owned by the specified owner ID. Multiple owners can be specified. + +* `restorable_by_user_ids` - (Optional) One or more AWS account IDs that can create volumes from the snapshot. + +* `filter` - (Optional) One or more name/value pairs to filter off of. There are +several valid keys, for a full reference, check out +[describe-snapshots in the AWS CLI reference][1]. + +## Attributes Reference + +`ids` is set to the list of EBS snapshot IDs. + +[1]: http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-snapshots.html diff --git a/website/source/docs/providers/aws/d/subnet.html.markdown b/website/source/docs/providers/aws/d/subnet.html.markdown index 9e9b22aa7..ea412400c 100644 --- a/website/source/docs/providers/aws/d/subnet.html.markdown +++ b/website/source/docs/providers/aws/d/subnet.html.markdown @@ -50,6 +50,8 @@ subnet whose data will be exported as attributes. * `cidr_block` - (Optional) The cidr block of the desired subnet. +* `ipv6_cidr_block` - (Optional) The IPv6 CIDR block of the desired subnet. + * `default_for_az` - (Optional) Boolean constraint for whether the desired subnet must be the default subnet for its associated availability zone.
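Since the `aws_ebs_snapshot_ids` data source above returns a plain list in `ids`, it pairs well with `count`. A sketch, assuming the snapshots restore into same-region volumes (the availability zone is illustrative):

```hcl
data "aws_ebs_snapshot_ids" "backups" {
  owners = ["self"]
}

# Restore every matching snapshot into its own EBS volume.
resource "aws_ebs_volume" "restored" {
  count             = "${length(data.aws_ebs_snapshot_ids.backups.ids)}"
  availability_zone = "us-west-2a"
  snapshot_id       = "${element(data.aws_ebs_snapshot_ids.backups.ids, count.index)}"
}
```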
diff --git a/website/source/docs/providers/aws/r/api_gateway_deployment.html.markdown b/website/source/docs/providers/aws/r/api_gateway_deployment.html.markdown index b3ce05046..a631f2395 100644 --- a/website/source/docs/providers/aws/r/api_gateway_deployment.html.markdown +++ b/website/source/docs/providers/aws/r/api_gateway_deployment.html.markdown @@ -68,4 +68,9 @@ The following arguments are supported: The following attributes are exported: * `id` - The ID of the deployment +* `invoke_url` - The URL to invoke the API pointing to the stage, + e.g. `https://z4675bid1j.execute-api.eu-west-2.amazonaws.com/prod` +* `execution_arn` - The execution ARN to be used in [`lambda_permission`](/docs/providers/aws/r/lambda_permission.html)'s `source_arn` + when allowing API Gateway to invoke a Lambda function, + e.g. `arn:aws:execute-api:eu-west-2:123456789012:z4675bid1j/prod` * `created_date` - The creation date of the deployment diff --git a/website/source/docs/providers/aws/r/api_gateway_usage_plan.html.markdown b/website/source/docs/providers/aws/r/api_gateway_usage_plan.html.markdown index ee8b70c1f..e55701523 100644 --- a/website/source/docs/providers/aws/r/api_gateway_usage_plan.html.markdown +++ b/website/source/docs/providers/aws/r/api_gateway_usage_plan.html.markdown @@ -72,8 +72,8 @@ The API Gateway Usage Plan argument layout is a structure composed of several su #### Api Stages arguments - * `api_id` (Optional) - API Id of the associated API stage in a usage plan. - * `stage` (Optional) - API stage name of the associated API stage in a usage plan. + * `api_id` (Required) - API Id of the associated API stage in a usage plan. + * `stage` (Required) - API stage name of the associated API stage in a usage plan. 
#### Quota Settings Arguments diff --git a/website/source/docs/providers/aws/r/cloudfront_distribution.html.markdown b/website/source/docs/providers/aws/r/cloudfront_distribution.html.markdown index cfb25c0d0..8f7b7db8e 100644 --- a/website/source/docs/providers/aws/r/cloudfront_distribution.html.markdown +++ b/website/source/docs/providers/aws/r/cloudfront_distribution.html.markdown @@ -368,6 +368,8 @@ The following attributes are exported: * `id` - The identifier for the distribution. For example: `EDFDVBD632BHDS5`. + * `arn` - The ARN (Amazon Resource Name) for the distribution. For example: arn:aws:cloudfront::123456789012:distribution/EDFDVBD632BHDS5, where 123456789012 is your AWS account ID. + * `caller_reference` - Internal value used by CloudFront to allow future updates to the distribution configuration. diff --git a/website/source/docs/providers/aws/r/cognito_identity_pool.markdown b/website/source/docs/providers/aws/r/cognito_identity_pool.markdown new file mode 100644 index 000000000..5dfe696b6 --- /dev/null +++ b/website/source/docs/providers/aws/r/cognito_identity_pool.markdown @@ -0,0 +1,78 @@ +--- +layout: "aws" +page_title: "AWS: aws_cognito_identity_pool" +sidebar_current: "docs-aws-resource-cognito-identity-pool" +description: |- + Provides an AWS Cognito Identity Pool. +--- + +# aws\_cognito\_identity\_pool + +Provides an AWS Cognito Identity Pool. 
+ +## Example Usage + +```hcl +resource "aws_iam_saml_provider" "default" { + name = "my-saml-provider" + saml_metadata_document = "${file("saml-metadata.xml")}" +} + +resource "aws_cognito_identity_pool" "main" { + identity_pool_name = "identity pool" + allow_unauthenticated_identities = false + + cognito_identity_providers { + client_id = "6lhlkkfbfb4q5kpp90urffae" + provider_name = "cognito-idp.us-east-1.amazonaws.com/us-east-1_Tv0493apJ" + server_side_token_check = false + } + + cognito_identity_providers { + client_id = "7kodkvfqfb4qfkp39eurffae" + provider_name = "cognito-idp.us-east-1.amazonaws.com/eu-west-1_Zr231apJu" + server_side_token_check = false + } + + supported_login_providers { + "graph.facebook.com" = "7346241598935552" + "accounts.google.com" = "123456789012.apps.googleusercontent.com" + } + + saml_provider_arns = ["${aws_iam_saml_provider.default.arn}"] + openid_connect_provider_arns = ["arn:aws:iam::123456789012:oidc-provider/foo.example.com"] +} +``` + +## Argument Reference + +The Cognito Identity Pool argument layout is a structure composed of several sub-resources - these resources are laid out below. + +* `identity_pool_name` (Required) - The Cognito Identity Pool name. +* `allow_unauthenticated_identities` (Required) - Whether the identity pool supports unauthenticated logins or not. +* `developer_provider_name` (Optional) - The "domain" by which Cognito will refer to your users. This name acts as a placeholder that allows your +backend and the Cognito service to communicate about the developer provider. +* `cognito_identity_providers` (Optional) - An array of [Amazon Cognito Identity user pools](#cognito-identity-providers) and their client IDs. +* `openid_connect_provider_arns` (Optional) - A list of OpenID Connect provider ARNs. +* `saml_provider_arns` (Optional) - An array of Amazon Resource Names (ARNs) of the SAML provider for your identity.
+* `supported_login_providers` (Optional) - Key-Value pairs mapping provider names to provider app IDs. + +#### Cognito Identity Providers + + * `client_id` (Optional) - The client ID for the Amazon Cognito Identity User Pool. + * `provider_name` (Optional) - The provider name for an Amazon Cognito Identity User Pool. + * `server_side_token_check` (Optional) - Whether server-side token validation is enabled for the identity provider’s token or not. + +## Attributes Reference + +In addition to the arguments, which are exported, the following attributes are exported: + +* `id` - An identity pool ID in the format REGION:GUID. + +## Import + +Cognito Identity Pool can be imported using the name, e.g. + +``` +$ terraform import aws_cognito_identity_pool.mypool +``` diff --git a/website/source/docs/providers/aws/r/iam_role_policy_attachment.markdown b/website/source/docs/providers/aws/r/iam_role_policy_attachment.markdown index f3d826d81..fddc880a1 100644 --- a/website/source/docs/providers/aws/r/iam_role_policy_attachment.markdown +++ b/website/source/docs/providers/aws/r/iam_role_policy_attachment.markdown @@ -13,12 +13,40 @@ Attaches a Managed IAM Policy to an IAM role ```hcl resource "aws_iam_role" "role" { name = "test-role" + assume_role_policy = <s`) +* `response_condition` - (Optional) Name of already defined `condition` to apply. This `condition` must be of type `RESPONSE`. For detailed information about Conditionals, see [Fastly's Documentation on Conditionals][fastly-conditionals]. + The `response_object` block supports: * `name` - (Required) A unique name to identify this Response Object. @@ -369,3 +387,4 @@ Service. 
[fastly-cname]: https://docs.fastly.com/guides/basic-setup/adding-cname-records [fastly-conditionals]: https://docs.fastly.com/guides/conditions/using-conditions [fastly-sumologic]: https://docs.fastly.com/api/logging#logging_sumologic +[fastly-gcs]: https://docs.fastly.com/api/logging#logging_gcs diff --git a/website/source/docs/providers/google/r/compute_network.html.markdown b/website/source/docs/providers/google/r/compute_network.html.markdown index 0146dc8a9..a93136880 100644 --- a/website/source/docs/providers/google/r/compute_network.html.markdown +++ b/website/source/docs/providers/google/r/compute_network.html.markdown @@ -56,3 +56,12 @@ exported: * `name` - The unique name of the network. * `self_link` - The URI of the created resource. + + +## Import + +Networks can be imported using the `name`, e.g. + +``` +$ terraform import google_compute_network.public my_network_name +``` \ No newline at end of file diff --git a/website/source/docs/providers/heroku/r/app.html.markdown b/website/source/docs/providers/heroku/r/app.html.markdown index c2776e531..0cf064e4f 100644 --- a/website/source/docs/providers/heroku/r/app.html.markdown +++ b/website/source/docs/providers/heroku/r/app.html.markdown @@ -22,6 +22,10 @@ resource "heroku_app" "default" { config_vars { FOOBAR = "baz" } + + buildpacks = [ + "heroku/go" + ] } ``` @@ -34,6 +38,8 @@ The following arguments are supported: * `region` - (Required) The region that the app should be deployed in. * `stack` - (Optional) The application stack is what platform to run the application in. +* `buildpacks` - (Optional) Buildpack names or URLs for the application. + Buildpacks configured externally won't be altered if this is not present. * `config_vars` - (Optional) Configuration variables for the application. The config variables in this map are not the final set of configuration variables, but rather variables you want present. 
That is, other diff --git a/website/source/docs/providers/ns1/r/record.html.markdown b/website/source/docs/providers/ns1/r/record.html.markdown index fb03a78f5..9b7148b84 100644 --- a/website/source/docs/providers/ns1/r/record.html.markdown +++ b/website/source/docs/providers/ns1/r/record.html.markdown @@ -51,16 +51,38 @@ The following arguments are supported: * `ttl` - (Optional) The records' time to live. * `link` - (Optional) The target record to link to. This means this record is a 'linked' record, and it inherits all properties from its target. * `use_client_subnet` - (Optional) Whether to use EDNS client subnet data when available(in filter chain). -* `answers` - (Optional) The list of the RDATA fields for the records' specified type. Answers are documented below. -* `filters` - (Optional) The list of NS1 filters for the record(order matters). Filters are documented below. +* `answers` - (Optional) One or more NS1 answers for the records' specified type. Answers are documented below. +* `filters` - (Optional) One or more NS1 filters for the record (order matters). Filters are documented below. Answers (`answers`) support the following: -* `answer` - (Required) List of RDATA fields. -* `region` - (Required) The region this answer belongs to. +* `answer` - (Required) Space-delimited string of RDATA fields dependent on the record type. + + A: + + answer = "1.2.3.4" + + CNAME: + + answer = "www.example.com" + + MX: + + answer = "5 mail.example.com" + + SRV: + + answer = "10 0 2380 node-1.example.com" + + SPF: + + answer = "v=spf1 include:mail.example.com ~all" + + +* `region` - (Optional) The region (or group) name that this answer belongs to. Filters (`filters`) support the following: * `filter` - (Required) The type of filter. -* `disabled` - (Required) Determines whether the filter is applied in the filter chain. -* `config` - (Required) The filters' configuration. Simple key/value pairs determined by the filter type.
+* `disabled` - (Optional) Determines whether the filter is applied in the filter chain. +* `config` - (Optional) The filters' configuration. Simple key/value pairs determined by the filter type. diff --git a/website/source/docs/providers/oneandone/index.html.markdown b/website/source/docs/providers/oneandone/index.html.markdown new file mode 100644 index 000000000..7b4c6e2b9 --- /dev/null +++ b/website/source/docs/providers/oneandone/index.html.markdown @@ -0,0 +1,54 @@ +--- +layout: "oneandone" +page_title: "Provider: 1&1" +sidebar_current: "docs-oneandone-index" +description: |- + A provider for 1&1. +--- + +# 1&1 Provider + +The 1&1 provider gives the ability to deploy and configure resources using the 1&1 Cloud Server API. + +Use the navigation to the left to read about the available resources. + + +## Usage + +The provider needs to be configured with proper credentials before it can be used: + + +```text +$ export ONEANDONE_TOKEN="oneandone_token" +``` + +Alternatively, you can provide your credentials directly in the provider block, as shown in the example below. + +Credentials provided in a `.tf` file override credentials set in the environment variables. + +## Example Usage + + +```hcl +provider "oneandone" { + token = "oneandone_token" + endpoint = "oneandone_endpoint" + retries = 100 +} + +resource "oneandone_server" "server" { + # ... +} +``` + + +## Configuration Reference + +The following arguments are supported: + +* `token` - (Required) If omitted, the `ONEANDONE_TOKEN` environment variable is used. + +* `endpoint` - (Optional) + +* `retries` - (Optional) Number of retries while waiting for a resource to be provisioned. Default value is 50.
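When the token comes from the `ONEANDONE_TOKEN` environment variable, the provider block can be left empty. A minimal sketch (the server name is illustrative):

```hcl
# Assumes ONEANDONE_TOKEN is exported in the environment;
# endpoint and retries fall back to their defaults.
provider "oneandone" {}

resource "oneandone_server" "example" {
  name = "example-server"
  # ...
}
```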
diff --git a/website/source/docs/providers/oneandone/r/firewall_policy.html.markdown b/website/source/docs/providers/oneandone/r/firewall_policy.html.markdown new file mode 100644 index 000000000..c5e2eb32f --- /dev/null +++ b/website/source/docs/providers/oneandone/r/firewall_policy.html.markdown @@ -0,0 +1,58 @@ +--- +layout: "oneandone" +page_title: "1&1: oneandone_firewall_policy" +sidebar_current: "docs-oneandone-resource-firewall-policy" +description: |- + Creates and manages 1&1 Firewall Policy. +--- + +# oneandone\_firewall\_policy + +Manages a Firewall Policy on 1&1 + +## Example Usage + +```hcl +resource "oneandone_firewall_policy" "fw" { + name = "test_fw_011" + rules = [ + { + "protocol" = "TCP" + "port_from" = 80 + "port_to" = 80 + "source_ip" = "0.0.0.0" + }, + { + "protocol" = "ICMP" + "source_ip" = "0.0.0.0" + }, + { + "protocol" = "TCP" + "port_from" = 43 + "port_to" = 43 + "source_ip" = "0.0.0.0" + }, + { + "protocol" = "TCP" + "port_from" = 22 + "port_to" = 22 + "source_ip" = "0.0.0.0" + } + ] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `description` - (Optional) [string] Description for the firewall policy +* `name` - (Required) [string] The name of the firewall policy.
+ +Firewall Policy Rules (`rules`) support the following: + +* `protocol` - (Required) [String] The protocol for the rule ["TCP", "UDP", "TCP/UDP", "ICMP", "IPSEC"] +* `port_from` - (Optional) [Integer] Defines the start range of the allowed port +* `port_to` - (Optional) [Integer] Defines the end range of the allowed port +* `source_ip` - (Optional) [String] Allows traffic only from the specified source IP address + diff --git a/website/source/docs/providers/oneandone/r/loadbalancer.html.markdown b/website/source/docs/providers/oneandone/r/loadbalancer.html.markdown new file mode 100644 index 000000000..35dc4e7e2 --- /dev/null +++ b/website/source/docs/providers/oneandone/r/loadbalancer.html.markdown @@ -0,0 +1,61 @@ +--- +layout: "oneandone" +page_title: "1&1: oneandone_loadbalancer" +sidebar_current: "docs-oneandone-resource-loadbalancer" +description: |- + Creates and manages 1&1 Load Balancer. +--- + +# oneandone\_loadbalancer + +Manages a Load Balancer on 1&1 + +## Example Usage + +```hcl +resource "oneandone_loadbalancer" "lb" { + name = "test_lb" + method = "ROUND_ROBIN" + persistence = true + persistence_time = 60 + health_check_test = "TCP" + health_check_interval = 300 + datacenter = "GB" + rules = [ + { + protocol = "TCP" + port_balancer = 8080 + port_server = 8089 + source_ip = "0.0.0.0" + }, + { + protocol = "TCP" + port_balancer = 9090 + port_server = 9099 + source_ip = "0.0.0.0" + } + ] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) [String] The name of the load balancer.
+* `description` - (Optional) [String] Description for the load balancer +* `method` - (Required) [String] Balancing procedure ["ROUND_ROBIN", "LEAST_CONNECTIONS"] +* `datacenter` - (Optional) [String] Location of desired 1and1 datacenter ["DE", "GB", "US", "ES" ] +* `persistence` - (Optional) [Boolean] Defines whether persistence should be turned on/off +* `persistence_time` - (Optional) [Integer] Persistence duration in seconds +* `health_check_test` - (Optional) [String] ["TCP", "ICMP"] +* `health_check_test_interval` - (Optional) [String] +* `health_check_test_path` - (Optional) [String] +* `health_check_test_parser` - (Optional) [String] + +Load balancer rules (`rules`) support the following: + +* `protocol` - (Required) [String] The protocol for the rule ["TCP", "UDP", "TCP/UDP", "ICMP", "IPSEC"] +* `port_balancer` - (Required) [Integer] +* `port_server` - (Required) [Integer] +* `source_ip` - (Required) [String] diff --git a/website/source/docs/providers/oneandone/r/monitoring_policy.html.markdown b/website/source/docs/providers/oneandone/r/monitoring_policy.html.markdown new file mode 100644 index 000000000..71e0f5120 --- /dev/null +++ b/website/source/docs/providers/oneandone/r/monitoring_policy.html.markdown @@ -0,0 +1,177 @@ +--- +layout: "oneandone" +page_title: "1&1: oneandone_monitoring_policy" +sidebar_current: "docs-oneandone-resource-monitoring-policy" +description: |- + Creates and manages 1&1 Monitoring Policy.
+--- + +# oneandone\_monitoring\_policy + +Manages a Monitoring Policy on 1&1 + +## Example Usage + +```hcl +resource "oneandone_monitoring_policy" "mp" { + name = "test_mp" + agent = true + email = "jasmin@stackpointcloud.com" + + thresholds = { + cpu = { + warning = { + value = 50, + alert = false + } + critical = { + value = 66, + alert = false + } + }, + ram = { + warning = { + value = 70, + alert = true + } + critical = { + value = 80, + alert = true + } + }, + disk = { + warning = { + value = 84, + alert = true + } + critical = { + value = 94, + alert = true + } + }, + transfer = { + warning = { + value = 1000, + alert = true + } + critical = { + value = 2000, + alert = true + } + }, + internal_ping = { + warning = { + value = 3000, + alert = true + } + critical = { + value = 4000, + alert = true + } + } + } + ports = [ + { + email_notification = true + port = 443 + protocol = "TCP" + alert_if = "NOT_RESPONDING" + }, + { + email_notification = false + port = 80 + protocol = "TCP" + alert_if = "NOT_RESPONDING" + }, + { + email_notification = true + port = 21 + protocol = "TCP" + alert_if = "NOT_RESPONDING" + } + ] + + processes = [ + { + email_notification = false + process = "httpdeamon" + alert_if = "RUNNING" + }, + { + process = "iexplorer", + alert_if = "NOT_RUNNING" + email_notification = true + } + ] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) [string] The name of the monitoring policy. +* `description` - (Optional) [string] Description for the monitoring policy +* `email` - (Optional) [String] Email address to which the monitoring system will send notifications +* `agent` - (Required) [Boolean] Indicates which monitoring type will be used. True: To use this monitoring type, you must install an agent on the server. False: Monitor a server without installing an agent.
Note: If you do not install an agent, you cannot retrieve information such as free hard disk space or ongoing processes. + +Monitoring Policy Thresholds (`thresholds`) support the following: + +* `cpu` - (Required) [Type] CPU thresholds + * `warning` - (Required) [Type] Warning alert + * `value` - (Required) [Integer] Warning to be issued when the threshold is reached, from 1 to 100. + * `alert` - (Required) [Boolean] If set to true, a warning will be issued. + * `critical` - (Required) [Type] Critical alert + * `value` - (Required) [Integer] Alert to be issued when the threshold is reached, from 1 to 100. + * `alert` - (Required) [Boolean] If set to true, an alert will be issued. +* `ram` - (Required) [Type] RAM threshold + * `warning` - (Required) [Type] Warning alert + * `value` - (Required) [Integer] Warning to be issued when the threshold is reached, from 1 to 100. + * `alert` - (Required) [Boolean] If set to true, a warning will be issued. + * `critical` - (Required) [Type] Critical alert + * `value` - (Required) [Integer] Alert to be issued when the threshold is reached, from 1 to 100. + * `alert` - (Required) [Boolean] If set to true, an alert will be issued. +* `disk` - (Required) [Type] Hard Disk threshold + * `warning` - (Required) [Type] Warning alert + * `value` - (Required) [Integer] Warning to be issued when the threshold is reached, from 1 to 100. + * `alert` - (Required) [Boolean] If set to true, a warning will be issued. + * `critical` - (Required) [Type] Critical alert + * `value` - (Required) [Integer] Alert to be issued when the threshold is reached, from 1 to 100. + * `alert` - (Required) [Boolean] If set to true, an alert will be issued. +* `transfer` - (Required) [Type] Data transfer threshold + * `warning` - (Required) [Type] Warning alert + * `value` - (Required) [Integer] Warning to be issued when the threshold is reached, from 1 to 100. + * `alert` - (Required) [Boolean] If set to true, a warning will be issued.
+ * `critical` - (Required) [Type] Critical alert + * `value` - (Required) [Integer] Alert to be issued when the threshold is reached, from 1 to 100. + * `alert` - (Required) [Boolean] If set to true, an alert will be issued. +* `internal_ping` - (Required) [Type] Ping threshold + * `warning` - (Required) [Type] Warning alert + * `value` - (Required) [Integer] Warning to be issued when the threshold is reached, from 1 to 100. + * `alert` - (Required) [Boolean] If set to true, a warning will be issued. + * `critical` - (Required) [Type] Critical alert + * `value` - (Required) [Integer] Alert to be issued when the threshold is reached, from 1 to 100. + * `alert` - (Required) [Boolean] If set to true, an alert will be issued. + +Monitoring Policy Ports (`ports`) support the following: + +* `email_notification` - (Required) [Boolean] If set to true, an email will be sent. +* `port` - (Required) [Integer] Port number. +* `protocol` - (Required) [String] The protocol of the port ["TCP", "UDP", "TCP/UDP", "ICMP", "IPSEC"] +* `alert_if` - (Required) [String] Condition for the alert to be issued. + +Monitoring Policy Processes (`processes`) support the following: + +* `email_notification` - (Required) [Boolean] If set to true, an email will be sent. +* `process` - (Required) [String] Process name. +* `alert_if` - (Required) [String] Condition for the alert to be issued. diff --git a/website/source/docs/providers/oneandone/r/private_network.html.markdown b/website/source/docs/providers/oneandone/r/private_network.html.markdown new file mode 100644 index 000000000..1ea270356 --- /dev/null +++ b/website/source/docs/providers/oneandone/r/private_network.html.markdown @@ -0,0 +1,38 @@ +--- +layout: "oneandone" +page_title: "1&1: oneandone_private_network" +sidebar_current: "docs-oneandone-resource-private-network" +description: |- + Creates and manages 1&1 Private Network.
+--- + +# oneandone\_private\_network + +Manages a Private Network on 1&1 + +## Example Usage + +```hcl +resource "oneandone_private_network" "pn" { + name = "pn_test" + description = "new stuff001" + datacenter = "GB" + network_address = "192.168.7.0" + subnet_mask = "255.255.255.0" + server_ids = [ + "${oneandone_server.server.id}", + "${oneandone_server.server02.id}", + ] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `datacenter` - (Optional)[string] Location of desired 1and1 datacenter ["DE", "GB", "US", "ES" ] +* `description` - (Optional)[string] Description for the private network +* `name` - (Required)[string] The name of the private network +* `network_address` - (Optional)[string] Network address for the private network +* `subnet_mask` - (Optional)[string] Subnet mask for the private network +* `server_ids` (Optional)[Collection] List of servers that are to be associated with the private network diff --git a/website/source/docs/providers/oneandone/r/public_ip.html.markdown b/website/source/docs/providers/oneandone/r/public_ip.html.markdown new file mode 100644 index 000000000..7fd01cd74 --- /dev/null +++ b/website/source/docs/providers/oneandone/r/public_ip.html.markdown @@ -0,0 +1,29 @@ +--- +layout: "oneandone" +page_title: "1&1: oneandone_public_ip" +sidebar_current: "docs-oneandone-resource-public-ip" +description: |- + Creates and manages 1&1 Public IP. +--- + +# oneandone\_public\_ip + +Manages a Public IP on 1&1 + +## Example Usage + +```hcl +resource "oneandone_public_ip" "ip" { + "ip_type" = "IPV4" + "reverse_dns" = "test.1and1.com" + "datacenter" = "GB" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `datacenter` - (Optional)[string] Location of desired 1and1 datacenter ["DE", "GB", "US", "ES" ] +* `ip_type` - (Required)[string] IPV4 or IPV6 +* `reverse_dns` - (Optional)[string]
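The 1&1 resources above compose with the server resource via its `firewall_policy_id` argument. A hedged sketch (names, sizes, and the single HTTP rule are illustrative):

```hcl
resource "oneandone_firewall_policy" "web_fw" {
  name = "web-fw"

  rules = [
    {
      "protocol"  = "TCP"
      "port_from" = 80
      "port_to"   = 80
      "source_ip" = "0.0.0.0"
    },
  ]
}

resource "oneandone_server" "web" {
  name                = "web-01"
  image               = "ubuntu"
  datacenter          = "GB"
  vcores              = 1
  cores_per_processor = 1
  ram                 = 2

  # Attach the policy created above to the server.
  firewall_policy_id = "${oneandone_firewall_policy.web_fw.id}"

  hdds = [
    {
      disk_size = 60
      is_main   = true
    },
  ]
}
```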
diff --git a/website/source/docs/providers/oneandone/r/server.html.markdown b/website/source/docs/providers/oneandone/r/server.html.markdown new file mode 100644 index 000000000..5a5f88b73 --- /dev/null +++ b/website/source/docs/providers/oneandone/r/server.html.markdown @@ -0,0 +1,60 @@ +--- +layout: "oneandone" +page_title: "1&1: oneandone_server" +sidebar_current: "docs-oneandone-resource-server" +description: |- + Creates and manages 1&1 Server. +--- + +# oneandone\_server + +Manages a Server on 1&1 + +## Example Usage + +```hcl +resource "oneandone_server" "server" { + name = "Example" + description = "Terraform 1and1 tutorial" + image = "ubuntu" + datacenter = "GB" + vcores = 1 + cores_per_processor = 1 + ram = 2 + ssh_key_path = "/path/to/private/ssh_key" + hdds = [ + { + disk_size = 60 + is_main = true + } + ] + + provisioner "remote-exec" { + inline = [ + "apt-get update", + "apt-get -y install nginx", + ] + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `cores_per_processor` - (Required)[integer] Number of cores per processor +* `datacenter` - (Optional)[string] Location of desired 1and1 datacenter ["DE", "GB", "US", "ES" ] +* `description` - (Optional)[string] Description of the server +* `firewall_policy_id` - (Optional)[string] ID of firewall policy +* `hdds` - (Required)[collection] List of HDDs. One HDD must be main. + * `disk_size` - (Required)[integer] The size of the HDD + * `is_main` - (Optional)[boolean] Indicates if the HDD is to be used as the main hard disk of the server +* `image` - (Required)[string] The name of a desired image to be provisioned with the server +* `ip` - (Optional)[string] IP address for the server +* `loadbalancer_id` - (Optional)[string] ID of the load balancer +* `monitoring_policy_id` - (Optional)[string] ID of monitoring policy +* `name` - (Required)[string] The name of the server. +* `password` - (Optional)[string] Desired password. +* `ram` - (Required)[float] Size of RAM.
+* `ssh_key_path` - (Optional)[string] Path to private SSH key +* `vcores` - (Required)[integer] Number of virtual cores. diff --git a/website/source/docs/providers/oneandone/r/shared_storage.html.markdown b/website/source/docs/providers/oneandone/r/shared_storage.html.markdown new file mode 100644 index 000000000..0c7730b9b --- /dev/null +++ b/website/source/docs/providers/oneandone/r/shared_storage.html.markdown @@ -0,0 +1,43 @@ +--- +layout: "oneandone" +page_title: "1&1: oneandone_shared_storage" +sidebar_current: "docs-oneandone-resource-shared-storage" +description: |- + Creates and manages 1&1 Shared Storage. +--- + +# oneandone\_shared\_storage + +Manages a Shared Storage on 1&1 + +## Example Usage + +```hcl +resource "oneandone_shared_storage" "storage" { + name = "test_storage1" + description = "1234" + size = 50 + + storage_servers = [ + { + id = "${oneandone_server.server.id}" + rights = "RW" + }, + { + id = "${oneandone_server.server02.id}" + rights = "RW" + } + ] +} +``` + +## Argument Reference + +The following arguments are supported: + +* `datacenter` - (Optional)[string] Location of desired 1and1 datacenter ["DE", "GB", "US", "ES" ] +* `description` - (Optional)[string] Description for the shared storage +* `size` - (Required)[integer] Size of the shared storage +* `storage_servers` (Optional)[Collection] List of servers that will have access to the shared storage + * `id` - (Required) [string] ID of the server + * `rights` - (Required)[string] Access rights to be assigned to the server ["RW","R"] diff --git a/website/source/docs/providers/oneandone/r/vpn.html.markdown b/website/source/docs/providers/oneandone/r/vpn.html.markdown new file mode 100644 index 000000000..7f9aecf78 --- /dev/null +++ b/website/source/docs/providers/oneandone/r/vpn.html.markdown @@ -0,0 +1,30 @@ +--- +layout: "oneandone" +page_title: "1&1: oneandone_vpn" +sidebar_current: "docs-oneandone-resource-vpn" +description: |- + Creates and manages 1&1 VPN.
+--- + +# oneandone\_vpn + +Manages a VPN on 1&1 + +## Example Usage + +```hcl +resource "oneandone_vpn" "vpn" { + datacenter = "GB" + name = "test_vpn_01" + description = "test description" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `datacenter` - (Optional)[string] Location of desired 1and1 datacenter ["DE", "GB", "US", "ES" ] +* `description` - (Optional)[string] Description of the VPN +* `name` - (Required)[string] The name of the VPN. + diff --git a/website/source/docs/providers/opc/r/opc_compute_security_list.html.markdown b/website/source/docs/providers/opc/r/opc_compute_security_list.html.markdown index ea92fc8c3..a7b84e692 100644 --- a/website/source/docs/providers/opc/r/opc_compute_security_list.html.markdown +++ b/website/source/docs/providers/opc/r/opc_compute_security_list.html.markdown @@ -6,7 +6,7 @@ description: |- Creates and manages a security list in an OPC identity domain. --- -# opc\_compute\_ip\_reservation +# opc\_compute\_security\_list The ``opc_compute_security_list`` resource creates and manages a security list in an OPC identity domain. diff --git a/website/source/docs/providers/random/r/pet.html.md b/website/source/docs/providers/random/r/pet.html.md index 56068237b..9e464aca5 100644 --- a/website/source/docs/providers/random/r/pet.html.md +++ b/website/source/docs/providers/random/r/pet.html.md @@ -58,3 +58,9 @@ The following arguments are supported: * `prefix` - (Optional) A string to prefix the name with. * `separator` - (Optional) The character to separate words in the pet name. + +## Attribute Reference + +The following attributes are exported: + +* `id` - (string) The random pet name diff --git a/website/source/docs/state/environments.html.md b/website/source/docs/state/environments.html.md index e6fa12a9a..b7f3e2a3c 100644 --- a/website/source/docs/state/environments.html.md +++ b/website/source/docs/state/environments.html.md @@ -119,7 +119,7 @@ aren't any more complex than that.
Terraform wraps this simple notion with a set of protections and support for remote state. For local state, Terraform stores the state environments in a folder -`terraform.state.d`. This folder should be committed to version control +`terraform.tfstate.d`. This folder should be committed to version control (just like local-only `terraform.tfstate`). For [remote state](/docs/state/remote.html), the environments are stored diff --git a/website/source/layouts/aws.erb b/website/source/layouts/aws.erb index 28ff4dfc1..6cbb731b8 100644 --- a/website/source/layouts/aws.erb +++ b/website/source/layouts/aws.erb @@ -203,19 +203,18 @@ - > - App Autoscaling Resources - + > CloudFormation Resources @@ -346,6 +345,15 @@ + > + Cognito Resources + + + > Config Resources