removing a PostgreSQL role.
Add manual overrides if this isn't the desired behavior, but it should
universally be the desired outcome except when a ROLE name is reused
across multiple databases in the same PostgreSQL cluster, in which case
the `skip_drop_role` attribute is necessary for all but the last
PostgreSQL provider.
We just check these for syntax rather than exact value because the
underlying archive library is free to evolve its exact encoding over time
as long as the result is semantically equivalent, and we don't want these
tests to break when we build on different versions of Go.
A new create_timeout attribute was added that had some backwards
incompatibilities, and as per discussion in #10823, it was determined we
could make upgrading to 0.8.x easier by fixing them, without really
losing any functionality.
Because create_timeout is not something stored or transmitted to the
API, it's not something we need a ForceNew on. Also, because an update
wouldn't result in an API call, we can add a state migration to avoid a
false positive diff that requires people to plan and apply but doesn't
actually make an API call.
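A minimal sketch of what such a state migration can look like; the resource name, the 0-to-1 schema version bump, and the "30" default below are hypothetical and only the `create_timeout` attribute name is taken from this change:

```go
package example

import (
	"fmt"

	"github.com/hashicorp/terraform/terraform"
)

// Sketch of a MigrateState function: rewrite the stored create_timeout so
// existing state matches the new schema and no false-positive diff appears.
func resourceExampleMigrateState(v int, is *terraform.InstanceState, meta interface{}) (*terraform.InstanceState, error) {
	if is.Empty() {
		return is, nil
	}

	switch v {
	case 0:
		// Hypothetical normalization: older state stored no value at all,
		// so write the new default instead of forcing a plan/apply cycle.
		if is.Attributes["create_timeout"] == "" {
			is.Attributes["create_timeout"] = "30"
		}
		return is, nil
	default:
		return is, fmt.Errorf("unexpected schema version: %d", v)
	}
}
```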
The subsets for backend address pools and inbound NAT rules weren't being hashed
properly as part of the ip_configuration hash, which caused multiple
ip_configurations to be expanded and sent to the API with conflicting names.
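A minimal sketch of the shape of the fix; the attribute names below are assumptions for illustration rather than the provider's exact schema:

```go
package example

import (
	"bytes"
	"fmt"

	"github.com/hashicorp/terraform/helper/hashcode"
	"github.com/hashicorp/terraform/helper/schema"
)

// Sketch of an ip_configuration set-hash that folds the nested pool/rule ID
// lists into the hash so distinct blocks no longer collapse together.
func ipConfigurationHash(v interface{}) int {
	var buf bytes.Buffer
	m := v.(map[string]interface{})

	buf.WriteString(fmt.Sprintf("%s-", m["name"].(string)))
	buf.WriteString(fmt.Sprintf("%s-", m["subnet_id"].(string)))

	// These nested sets were previously left out of the hash, which is what
	// allowed multiple ip_configurations to expand with conflicting names.
	for _, key := range []string{"load_balancer_backend_address_pools_ids", "load_balancer_inbound_nat_rules_ids"} {
		if s, ok := m[key].(*schema.Set); ok {
			for _, id := range s.List() {
				buf.WriteString(fmt.Sprintf("%s-", id.(string)))
			}
		}
	}

	return hashcode.String(buf.String())
}
```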
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMNetworkInterface -timeout 120m
=== RUN TestAccAzureRMNetworkInterface_basic
--- PASS: TestAccAzureRMNetworkInterface_basic (160.24s)
=== RUN TestAccAzureRMNetworkInterface_disappears
--- PASS: TestAccAzureRMNetworkInterface_disappears (157.00s)
=== RUN TestAccAzureRMNetworkInterface_enableIPForwarding
--- PASS: TestAccAzureRMNetworkInterface_enableIPForwarding (156.86s)
=== RUN TestAccAzureRMNetworkInterface_multipleLoadBalancers
--- PASS: TestAccAzureRMNetworkInterface_multipleLoadBalancers (185.87s)
=== RUN TestAccAzureRMNetworkInterface_withTags
--- PASS: TestAccAzureRMNetworkInterface_withTags (1212.92s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 1872.960s
This commit adds some basic tests for block device functionality. It also
expands the existing block device documentation and references the new
"volume attach" resources for further block storage functionality.
This commit makes the openstack_blockstorage_volume resources better able
to handle volume creation errors upon resource creation. The reason for this
change is that a storage backend error can happen during storage provisioning
that won't manifest as an "err" but will set the volume's status to "error".
We now check for a status of "error" and propagate the error up the stack.
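A minimal sketch of the check, with a hypothetical getVolumeStatus helper standing in for the real Block Storage API call:

```go
package example

import (
	"fmt"
	"time"
)

// waitForVolume polls until the volume reaches "available", surfacing a
// backend failure that only shows up as status "error" rather than as an
// error returned from the create call itself.
func waitForVolume(getVolumeStatus func(id string) (string, error), id string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		status, err := getVolumeStatus(id)
		if err != nil {
			return err
		}
		switch status {
		case "available":
			return nil
		case "error":
			return fmt.Errorf("volume %s entered status 'error' during creation", id)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for volume %s", id)
}
```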
* Implementing Redis Cache
* Properties should never be nil
* Updating the SDK to 7.0.1
* Redis Cache updated for SDK 7.0.1
* Fixing the max memory validation tests
* Cleaning up
* Adding tests for Standard with Tags
* Ints -> Strings for the moment
* Making the RedisConfiguration object mandatory
* Only parse out redis configuration values if they're set
* Updating the RedisConfiguration object to be required in the documentation
* Adding Tags to the Standard tests / importing excluding the redisConfiguration
* Removing support for import for Redis Cache for now
* Removed a scaling test
* Drop alias from state file if missing from lambda.
This commit fixes an issue where, if you remove an AWS Lambda function, the corresponding alias for that Lambda is also deleted.
* Added missing imports.
* Removed non-local reference to constant.
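A minimal sketch of the read path; the function and attribute names are assumptions, though the error-code check mirrors how the Lambda API reports a missing function or alias:

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/lambda"
	"github.com/hashicorp/terraform/helper/schema"
)

// Sketch: when the alias (or its function) is gone, drop it from state
// instead of failing the refresh.
func readLambdaAlias(d *schema.ResourceData, conn *lambda.Lambda) error {
	_, err := conn.GetAlias(&lambda.GetAliasInput{
		FunctionName: aws.String(d.Get("function_name").(string)),
		Name:         aws.String(d.Get("name").(string)),
	})
	if err != nil {
		if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "ResourceNotFoundException" {
			d.SetId("")
			return nil
		}
		return err
	}

	// ...set attributes from the response as before...
	return nil
}
```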
* vendor: update github.com/Ensighten/udnssdk to v1.2.1
* ultradns_tcpool: add
* ultradns.baseurl: set default
* ultradns.record: cleanup test
* ultradns_record: extract common, cleanup
* ultradns: extract common
* ultradns_dirpool: add
* ultradns_dirpool: fix rdata.ip_info.ips to be idempotent
* ultradns_tcpool: add doc
* ultradns_dirpool: fix rdata.geo_codes.codes to be idempotent
* ultradns_dirpool: add doc
* ultradns: cleanup testing
* ultradns_record: rename resource
* ultradns: log username from config, not client
udnssdk.Client is being refactored to use x/oauth2, so don't assume we
can access Username from it
* ultradns_probe_ping: add
* ultradns_probe_http: add
* doc: add ultradns_probe_ping
* doc: add ultradns_probe_http
* ultradns_record: remove duplication from error messages
* doc: cleanup typos in ultradns
* ultradns_probe_ping: add test for pool-level probe
* Clean documentation
* ultradns: pull makeSetFromStrings() up to common.go
* ultradns_dirpool: log hashIPInfoIPs
Log the key and generated hashcode used to index ip_info.ips into a set.
* ultradns: simplify hashLimits()
Limits blocks only have the "name" attribute as their primary key, so
hashLimits() needn't use a buffer to concatenate.
Also changes the log level to a more appropriate DEBUG.
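A minimal sketch of what the simplified hash looks like, following the helper/schema set-hash convention:

```go
package example

import "github.com/hashicorp/terraform/helper/hashcode"

// hashLimits hashes a limits block by its only primary key, "name";
// no buffer concatenation is needed.
func hashLimits(v interface{}) int {
	m := v.(map[string]interface{})
	return hashcode.String(m["name"].(string))
}
```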
* ultradns_tcpool: convert rdata to schema.Set
RData blocks have the "host" attribute as their primary key, so it is
used by hashRdatas() to create the hashcode.
Tests are updated to use the new hashcode indexes instead of natural
numbers.
* ultradns_probe_http: convert agents to schema.Set
Also pull the makeSetFromStrings() helper up to common.go
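A minimal sketch of the shared helper, assuming the standard helper/schema string hash:

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

// makeSetFromStrings builds a schema.Set from a plain string slice so the
// ordering returned by the API no longer matters.
func makeSetFromStrings(ss []string) *schema.Set {
	set := &schema.Set{F: schema.HashString}
	for _, s := range ss {
		set.Add(s)
	}
	return set
}
```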
* ultradns: pull hashRdatas() up to common
* ultradns_dirpool: convert rdata to schema.Set
Fixes TF-66
* ultradns_dirpool.conflict_resolve: fix default from response
UltraDNS REST API User Guide claims that "Directional Pool
Profile Fields" have a "conflictResolve" field which "If not
specified, defaults to GEO."
https://portal.ultradns.com/static/docs/REST-API_User_Guide.pdf
But UltraDNS does not actually return a conflictResolve
attribute when it has been updated to "GEO".
We could fix it in udnssdk, but that would require either:
* hide the response by coercing "" to "GEO" for everyone
* use a pointer to allow checking for nil (requires all
users to change if they fix this)
An ideal solution would be to have the UltraDNS API respond
with this attribute for every dirpool's rdata.
So at the risk of foolish consistency in the sdk, we're
going to solve it where it's visible to the user:
by checking and overriding the parsing. I'm sorry.
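A minimal sketch of the coercion applied where the response is parsed; the helper name is hypothetical:

```go
package example

// effectiveConflictResolve collapses UltraDNS's omitted conflictResolve back
// to the documented default so state matches what the API actually means.
func effectiveConflictResolve(fromAPI string) string {
	if fromAPI == "" {
		return "GEO"
	}
	return fromAPI
}
```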
* ultradns_record: convert rdata to set
UltraDNS does not store the ordering of rdata elements, so we need a way
to identify if changes have been made even if the order changes.
A perfect job for schema.Set.
* ultradns_record: parse double-encoded answers for TXT records
* ultradns: simplify hashLimits()
Limits blocks only have the "name" attribute as their primary key, so
hashLimits() needn't use a buffer to concatenate.
* ultradns_dirpool.description: validate
* ultradns_dirpool.rdata: doc need for set
* ultradns_dirpool.conflict_resolve: validate
tags were documented, just not implemented
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMDnsZone_ -timeout 120m
=== RUN TestAccAzureRMDnsZone_importBasic
--- PASS: TestAccAzureRMDnsZone_importBasic (89.04s)
=== RUN TestAccAzureRMDnsZone_basic
--- PASS: TestAccAzureRMDnsZone_basic (92.91s)
=== RUN TestAccAzureRMDnsZone_withTags
--- PASS: TestAccAzureRMDnsZone_withTags (105.88s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 287.912s
* provider/pagerduty: Allow 'team_responder' role for pagerduty_user resource
* Change unit test to exercise 'team_responder' and reformat
* Update the test fixture to use the 'team_responder' role
* provider/aws: Support eu-west-2
This is the new London region - we don't have access yet but several
enquiries have come from customers who do.
* provider/aws: Support eu-west-2 region
* Update hosted_zones.go
releases.
When postgresql_schema_policy lands, this attribute should be removed in
order to provide a single way of setting permissions on schema objects.
* provider/aws: data source for AWS Hosted Zone
* add caller_reference, resource_record_set_count fields, manage private zone and trailing dot
* fix fmt
* update documentation, use string function in hostedZoneName
* add vpc_id support
* add tags support
* add documentation for hosted zone data source tags support
* provider/aws: Add the aws_eip data source
* Document the aws_eip data source on the website
* provider/aws: support query by public_ip for aws_eip data source
* Initial checkin for PR request
* Added an argument to the provider to allow control over whether TLS certificate verification is skipped. Controllable via the provider configuration or an environment variable.
* Initial check-in to use refactored module
* Check in a very minimal MVP of the host create/delete test, which works and validates basic host creation and deletion
* Check in with support for creating hosts with variables working
* Checking in work to date
* Remove code that causes travis CI to fail while I debug
* Adjust create to accept multiple values
* Back on track. Working basic tests. go-icinga2-api needs more tests too
* Squashing
* Back on track. Working basic tests. go-icinga2-api needs more tests too
* Check in refactored hostgroup support
* Check in refactored check_command, hosts, and hostgroup with a few tests
* Checking in service code
* Add in dependency for icinga2 provider
* Add documentation. Refactor, fix and extend based on feedback from Hashicorp
* Added checking and validation around invalid URL and unavailable server
* Add support to import databases. See docs.
* Add support for renaming databases
* Add support for all known PostgreSQL database attributes, including:
* "allow_connections"
* "lc_ctype"
* "lc_collate"
* "connection_limit"
* "encoding"
* "is_template"
* "owner"
* "tablespace_name"
* "template"
Both libpq(3) and github.com/lib/pq use `sslmode`. Prefer this vs
the non-standard `ssl_mode`. `ssl_mode` is supported for compatibility
but should be removed in the future.
Changelog: yes
Also don't specify the default and rely on github.com/lib/pq (which uses "required"
and is different than what libpq(3) uses, which is "preferred" and unsupported by
github.com/lib/pq).
* Allow import of aws_security_groups with more than one source_security_group_id rule
* Add acceptance test for security group with multiple source rules.
When importing an `aws_vpc_peering_connection`, the code assumes that
the account under Terraform control is the initiator (requester) of the
VPC peering request. This holds true when the peering connection is
between two VPCs in the same account, or when the peering connection has
been initiated from the controlled account to another.
However, when the peering connection has been initiated from a foreign
account towards the account under management, importing the peering
connection into the statefile results in values of `peer_vpc_id` and
`vpc_id` being the opposite way round to what they should be, and in the
`peer_owner_id` being set to the managed account's ID rather than the
foreign account's ID.
This patch checks the Accepter and Requester Owner IDs against the AWS
connection's reported owner ID, and reverses the mapping if it is
determined that the VPC peering connection is owned by the foreign
account.
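A minimal sketch of the check; the accountID argument stands in for the owner ID reported for the AWS connection under management, and the attribute writes mirror the mapping described above:

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/hashicorp/terraform/helper/schema"
)

// setPeeringSides writes vpc_id/peer_vpc_id/peer_owner_id the right way
// round, reversing the mapping when the peering connection was initiated
// from a foreign account towards the account under management.
func setPeeringSides(d *schema.ResourceData, pc *ec2.VpcPeeringConnection, accountID string) {
	requester := pc.RequesterVpcInfo
	accepter := pc.AccepterVpcInfo

	if aws.StringValue(accepter.OwnerId) == accountID && aws.StringValue(requester.OwnerId) != accountID {
		// A foreign account initiated the request; our VPC is the accepter's.
		d.Set("vpc_id", aws.StringValue(accepter.VpcId))
		d.Set("peer_vpc_id", aws.StringValue(requester.VpcId))
		d.Set("peer_owner_id", aws.StringValue(requester.OwnerId))
		return
	}

	// Default case: the account under management is the requester.
	d.Set("vpc_id", aws.StringValue(requester.VpcId))
	d.Set("peer_vpc_id", aws.StringValue(accepter.VpcId))
	d.Set("peer_owner_id", aws.StringValue(accepter.OwnerId))
}
```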
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMVirtualMachine_plan -timeout 120m
=== RUN TestAccAzureRMVirtualMachine_plan
--- PASS: TestAccAzureRMVirtualMachine_plan (798.75s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 798.835s
This adds the new resource aws_snapshot_create_volume_permission which
manages the createVolumePermission attribute of snapshots. This allows
granting an AWS account permissions to create a volume from a particular
snapshot. This is often required to allow another account to copy a
private AMI.
The value is only multiplied by the API for topics in non-premium namespaces
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMServiceBusTopic_enablePartitioning -timeout 120m
=== RUN TestAccAzureRMServiceBusTopic_enablePartitioningStandard
--- PASS: TestAccAzureRMServiceBusTopic_enablePartitioningStandard (378.80s)
=== RUN TestAccAzureRMServiceBusTopic_enablePartitioningPremium
--- PASS: TestAccAzureRMServiceBusTopic_enablePartitioningPremium (655.00s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 1033.874s
AWS allows only the case-sensitive strings `Allow` and `Deny` to appear
in the `Effect` fields of IAM policy documents. Catch deviations from
this, including mis-casing, before hitting the API and generating an
error (the error is a generic 400 and doesn't indicate what part of the
policy doc is invalid).
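A minimal sketch of a ValidateFunc enforcing this client-side, before the policy document is ever sent to AWS:

```go
package example

import "fmt"

// validateIAMPolicyStatementEffect rejects anything other than the exact,
// case-sensitive strings AWS accepts in an Effect field.
func validateIAMPolicyStatementEffect(v interface{}, k string) (ws []string, errors []error) {
	effect := v.(string)
	if effect != "Allow" && effect != "Deny" {
		errors = append(errors, fmt.Errorf(
			"%q must be either \"Allow\" or \"Deny\" (case-sensitive); got %q", k, effect))
	}
	return
}
```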
* provider/datadog 9869: Validate credentials when initialising client.
* provider/datadog Pull in new version of go-datadog-api.
* provider/datadog Update testAccCheckDatadogMonitorConfigNoThresholds test config.
Fixes #8455, #5390
This adds a new `no_device` attribute to the `ephemeral_block_device` block,
which allows users to omit ephemeral devices from an AMI's predefined block
device mappings; this is useful for EBS-only instance types.
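A sketch of the relevant schema entry; the surrounding attribute names are assumptions used for illustration:

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

// Sketch of an ephemeral_block_device block gaining a no_device flag, which
// suppresses the AMI's predefined mapping for the named device.
var ephemeralBlockDeviceSchema = &schema.Schema{
	Type:     schema.TypeSet,
	Optional: true,
	ForceNew: true,
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"device_name": {
				Type:     schema.TypeString,
				Required: true,
			},
			"virtual_name": {
				Type:     schema.TypeString,
				Optional: true,
			},
			"no_device": {
				Type:     schema.TypeBool,
				Optional: true,
			},
		},
	},
}
```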
* provider/datadog #9375: Refactor tags to a list instead of a map.
Tags are allowed to be, but not restricted to, key/value pairs (i.e. foo:bar);
they are essentially strings. This change allows using and mixing tags of the
form "foo" and "foo:bar". It also allows using duplicate keys like "foo:bar" and "foo:baz".
* provider/datadog update import test.
This commit extracts the GPG code used for aws_iam_user_login_profile
into a library that can be reused for other resources, and updates the
call sites appropriately.
* provider/azurerm: Bump sdk version to 7.0.1
* Fixing the build (#10489)
* Fixing the broken tests (#10499)
* Updating the method signatures to match (#10533)
Fixes #10463
I'm really surprised this flew under the radar for years...
By having unique PRNGs, the SSH communicator could and would
generate identical ScriptPaths and two provisioners running in parallel
could overwrite each other and execute the same script. This would
happen because they're both seeded by the current time which could
potentially be identical if done in parallel...
Instead, we share the rand now so that the sequence is guaranteed
unique. As an extra measure of robustness, we also multiply by the PID
so that we're also protected against two processes running at the same time.
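A minimal sketch of the approach: one shared source seeded with both the clock and the PID, guarded by a lock since it is used from parallel provisioners:

```go
package example

import (
	"fmt"
	"math/rand"
	"os"
	"sync"
	"time"
)

// One shared, better-seeded PRNG instead of one per communicator seeded only
// by the wall clock.
var (
	randLock   sync.Mutex
	randShared = rand.New(rand.NewSource(time.Now().UnixNano() * int64(os.Getpid())))
)

// randomScriptPath returns a remote script path that is unique within the
// process even when provisioners run in parallel.
func randomScriptPath() string {
	randLock.Lock()
	defer randLock.Unlock()
	return fmt.Sprintf("/tmp/terraform_%d.sh", randShared.Int31())
}
```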
* "external" provider for gluing in external logic
This provider will become a bit of glue to help people interface external
programs with Terraform without writing a full Terraform provider.
It will be nowhere near as capable as a first-class provider, but is
intended as a light-touch way to integrate some pre-existing or custom
system into Terraform.
* Unit test for the "resourceProvider" utility function
This small function determines the dependable name of a provider for
a given resource name and optional provider alias. It's simple but it's
a key part of how resource nodes get connected to provider nodes so
worth specifying the intended behavior in the form of a test.
* Allow a provider to export a resource with the provider's name
If a provider only implements one resource of each type (managed vs. data)
then it can be reasonable for the resource names to exactly match the
provider name, if the provider name is descriptive enough for the
purpose of the each resource to be obvious.
* provider/external: data source
A data source that executes a child process, expecting it to support a
particular gateway protocol, and exports its result. This can be used as
a straightforward way to retrieve data from sources that Terraform
doesn't natively support.
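A sketch of one plausible shape for such a program, assuming a JSON-object-on-stdin / JSON-object-on-stdout convention for the query and result:

```go
// main.go: hypothetical external program consumed by the data source.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Read the query the data source passes in.
	var query map[string]string
	if err := json.NewDecoder(os.Stdin).Decode(&query); err != nil {
		fmt.Fprintf(os.Stderr, "failed to parse query: %s\n", err)
		os.Exit(1)
	}

	// Do whatever custom lookup is needed; here we just echo the input back.
	result := map[string]string{
		"echo": query["input"],
	}

	if err := json.NewEncoder(os.Stdout).Encode(result); err != nil {
		os.Exit(1)
	}
}
```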
* website: documentation for the "external" provider
* add rds db for opsworks
* switched to stack in vpc
* implement update method
* add docs
* implement and document force new resource behavior
* implement retry for update and delete
* add test that forces new resource
This commit changes allowed_address_pairs from a TypeList to a TypeSet
allowing for arbitrary ordering. This solves the issue where a user
specifies an address pair one way and OpenStack returns a different
order.
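A sketch of the TypeSet shape; the nested attribute names and the choice of hash key are assumptions for illustration:

```go
package example

import (
	"github.com/hashicorp/terraform/helper/hashcode"
	"github.com/hashicorp/terraform/helper/schema"
)

// Hash each pair by its IP address so the set is insensitive to the order
// OpenStack happens to return the pairs in.
func allowedAddressPairsHash(v interface{}) int {
	m := v.(map[string]interface{})
	return hashcode.String(m["ip_address"].(string))
}

var allowedAddressPairsSchema = &schema.Schema{
	Type:     schema.TypeSet,
	Optional: true,
	Set:      allowedAddressPairsHash,
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"ip_address": {
				Type:     schema.TypeString,
				Required: true,
			},
			"mac_address": {
				Type:     schema.TypeString,
				Optional: true,
				Computed: true,
			},
		},
	},
}
```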
* Update to latest version of go-datadog-api
* Updates to latest go-datadog-api version, which adds more complete
timeboard support.
* Add more complete timeboard support
* Adds in support for missing timeboard fields, so now we can have nice
things like conditional formats and more.
* Document new fields in datadog_timeboard resource
* Add acceptance test for datadog timeboard changes
* Add new aws_vpc_endpoint_route_table_association resource.
This commit adds a new resource which allows a list of route tables to be
added to and/or removed from an existing VPC Endpoint. This resource is also
complementary to the existing `aws_vpc_endpoint` resource, where the route
tables might not be specified during creation (they are not a requirement for
a VPC Endpoint to be created successfully), especially where the workflow is
such that the route tables are not immediately known.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
Additions by Kit Ewbank <Kit_Ewbank@hotmail.com>:
* Add functionality
* Add documentation
* Add acceptance tests
* Set VPC endpoint route_table_ids attribute to "Computed"
* Changes after review - Set resource ID in create function.
* Changes after code review by @kwilczynski:
* Removed error types and simplified the error handling in 'resourceAwsVPCEndpointRouteTableAssociationRead'
* Simplified logging in 'resourceAwsVPCEndpointRouteTableAssociationDelete'
Update our instance template to include metadata_startup_script, to
match our instance resource. Also, we've resolved the diff errors around
metadata.startup-script, and people want to use that to create startup
scripts that don't force a restart when they're changed, so let's stop
disallowing it.
Also, we had a bunch of calls to `schema.ResourceData.Set` that ignored
the errors, so I added error handling for those calls. It's mostly
bundled with this code because I couldn't be sure whether it was the
root of bugs or not, so I took care of it while addressing the startup
script issue.
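A minimal sketch of the error-checked pattern used for those calls; the field names and source values are hypothetical:

```go
package example

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// Sketch: check the error from every schema.ResourceData.Set call instead of
// silently discarding it.
func setTemplateFields(d *schema.ResourceData, startupScript string, tags []string) error {
	if err := d.Set("metadata_startup_script", startupScript); err != nil {
		return fmt.Errorf("error setting metadata_startup_script: %s", err)
	}
	if err := d.Set("tags", tags); err != nil {
		return fmt.Errorf("error setting tags: %s", err)
	}
	return nil
}
```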
* provider/openstack: Detect Region for Importing Resources
This commit changes the way the OpenStack region is detected and set.
Any time a region is required, the region attribute will first be
checked. Next, the OS_REGION_NAME environment variable will be checked.
While schema.EnvDefaultFunc handles this same situation, it is not
applicable when importing resources.
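A minimal sketch of the lookup order, which also works during import where schema.EnvDefaultFunc is not consulted:

```go
package example

import (
	"os"

	"github.com/hashicorp/terraform/helper/schema"
)

// getRegion prefers the resource's region attribute and falls back to the
// OS_REGION_NAME environment variable, covering the import case.
func getRegion(d *schema.ResourceData) string {
	if v, ok := d.GetOk("region"); ok {
		return v.(string)
	}
	return os.Getenv("OS_REGION_NAME")
}
```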
* provider/openstack: No longer ignore region in importing tests
* provider/openstack: Network and Subnet Import Fixes
This commit fixes the OpenStack Network and Subnet resources so that
importing of those resources is successful.
This change doesn't make much sense now, as projects are read-only
anyways, so there's not a lot that importing really does for you--you
can already reference pre-existing projects just by defining them in
your config.
But as we discussed in #10425, this change made more and more sense. In a
world where projects can be created, we can no longer reference
pre-existing projects just by defining them in config. We get that
ability back by making projects importable.
* provider/aws: Add DeploymentRollback as a valid TriggerEvent type
* provider/aws: Add auto_rollback_configuration to aws_codedeploy_deployment_group
* provider/aws: Document auto_rollback_configuration
- part of aws_codedeploy_deployment_group
* provider/aws: Support removing and disabling auto_rollback_configuration
- part of aws_codedeploy_deployment_group resource
- when removing configuration, ensure events are removed
- when disabling configuration, preserve events in case configuration is re-enabled
* provider/aws: Add alarm_configuration to aws_codedeploy_deployment_group
* provider/aws: Document alarm_configuration
- part of aws_codedeploy_deployment_group
* provider/aws: Support removing alarm_configuration
- part of aws_codedeploy_deployment_group resource
- disabling configuration doesn't appear to work...
* provider/aws: Refactor auto_rollback_configuration tests
- Add create test
- SKIP failing test for now
- Add tests for build & map functions
* provider/aws: Refactor new aws_code_deploy_deployment_group tests
- alarm_configuration and auto_rollback_configuration only
- add assertions to deployment_group basic test
- rename config funcs to be easier to read
- group public tests together
* provider/aws: A max of 10 alarms can be added to a deployment group.
- aws_code_deploy_deployment_group.alarm_configuration.alarms
- verified this causes test failure with expected exception
* provider/aws: Test disabling alarm_configuration and auto_rollback_configuration
- the tests now pass after rebasing the latest master branch
Google's Backend Services gives users control over the session affinity modes.
Let's allow Terraform users to leverage this option.
We don't change the default value ("NONE", as provided by Google).
* provider/azurerm: support import of route
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMRoute_import -timeout 120m
=== RUN TestAccAzureRMRoute_importBasic
--- PASS: TestAccAzureRMRoute_importBasic (166.99s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 167.066s
* provider/azurerm: fix route_table not setting routes
The resource wasn't actually setting the routes in the create/update method;
this went unnoticed as it also didn't read the routes array back into state.
Fixes #10316
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMRouteTable -timeout 120m
=== RUN TestAccAzureRMRouteTable_basic
--- PASS: TestAccAzureRMRouteTable_basic (122.96s)
=== RUN TestAccAzureRMRouteTable_disappears
--- PASS: TestAccAzureRMRouteTable_disappears (121.12s)
=== RUN TestAccAzureRMRouteTable_withTags
--- PASS: TestAccAzureRMRouteTable_withTags (136.01s)
=== RUN TestAccAzureRMRouteTable_multipleRoutes
--- PASS: TestAccAzureRMRouteTable_multipleRoutes (155.44s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 535.612s
* provider/azurerm: support import of route_table
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMRouteTable_import -timeout 120m
=== RUN TestAccAzureRMRouteTable_importBasic
--- PASS: TestAccAzureRMRouteTable_importBasic (121.90s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 121.978s
* provider/aws: Generate name for TestAccElasticBeanstalkApplicationImport
This allows tests to run concurrently.
* provider/aws: Generate names for TestAWSElasticBeanstalkEnvironment_importBasic
This allows tests to run concurrently.
* provider/azurerm: support import of virtual_machine
TF_ACC=1 go test ./builtin/providers/azurerm -v -run "TestAccAzureRMVirtualMachine_(basic|import)" -timeout 120m
=== RUN TestAccAzureRMVirtualMachine_importBasic
--- PASS: TestAccAzureRMVirtualMachine_importBasic (561.08s)
=== RUN TestAccAzureRMVirtualMachine_basicLinuxMachine
--- PASS: TestAccAzureRMVirtualMachine_basicLinuxMachine (677.49s)
=== RUN TestAccAzureRMVirtualMachine_basicLinuxMachine_disappears
--- PASS: TestAccAzureRMVirtualMachine_basicLinuxMachine_disappears (674.21s)
=== RUN TestAccAzureRMVirtualMachine_basicWindowsMachine
--- PASS: TestAccAzureRMVirtualMachine_basicWindowsMachine (1105.18s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 3017.970s
* provider/azurerm: support import of servicebus_namespace
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMServiceBusNamespace_import -timeout 120m
=== RUN TestAccAzureRMServiceBusNamespace_importBasic
--- PASS: TestAccAzureRMServiceBusNamespace_importBasic (345.80s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 345.879s
* provider/azurerm: document import of servicebus_topic and servicebus_subscription
* provider/azurerm: support import of dns record resources
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMDns[A-z]+Record_importBasic -timeout 120m
=== RUN TestAccAzureRMDnsARecord_importBasic
--- PASS: TestAccAzureRMDnsARecord_importBasic (102.84s)
=== RUN TestAccAzureRMDnsAAAARecord_importBasic
--- PASS: TestAccAzureRMDnsAAAARecord_importBasic (100.59s)
=== RUN TestAccAzureRMDnsCNameRecord_importBasic
--- PASS: TestAccAzureRMDnsCNameRecord_importBasic (98.94s)
=== RUN TestAccAzureRMDnsMxRecord_importBasic
--- PASS: TestAccAzureRMDnsMxRecord_importBasic (107.30s)
=== RUN TestAccAzureRMDnsNsRecord_importBasic
--- PASS: TestAccAzureRMDnsNsRecord_importBasic (98.55s)
=== RUN TestAccAzureRMDnsSrvRecord_importBasic
--- PASS: TestAccAzureRMDnsSrvRecord_importBasic (100.19s)
=== RUN TestAccAzureRMDnsTxtRecord_importBasic
--- PASS: TestAccAzureRMDnsTxtRecord_importBasic (97.49s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 706.000s
* provider/azurerm: support import of cdn_endpoint, document profile import
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMCdnEndpoint_import -timeout 120m
=== RUN TestAccAzureRMCdnEndpoint_importWithTags
--- PASS: TestAccAzureRMCdnEndpoint_importWithTags (207.83s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 207.907s
* provider/azurerm: support import of sql_server, fix sql_firewall import
TF_ACC=1 go test ./builtin/providers/azurerm -v -run TestAccAzureRMSql[A-z]+_importBasic -timeout 120m
=== RUN TestAccAzureRMSqlFirewallRule_importBasic
--- PASS: TestAccAzureRMSqlFirewallRule_importBasic (153.72s)
=== RUN TestAccAzureRMSqlServer_importBasic
--- PASS: TestAccAzureRMSqlServer_importBasic (119.83s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 273.630s
Although the aws_iam_policy resource has a normalization problem (refs #8350),
I think it would be useful simply to add JSON syntax validation.
I wasted a lot of time on JSON syntax errors.
Validate the aws_iam_policy using the validateJsonString helper.
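A minimal sketch of a validateJsonString-style helper and how it would be attached to the policy attribute:

```go
package example

import (
	"encoding/json"
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// validateJsonString rejects values that are not syntactically valid JSON,
// catching mistakes before they become an opaque 400 from the IAM API.
func validateJsonString(v interface{}, k string) (ws []string, errors []error) {
	var j interface{}
	if err := json.Unmarshal([]byte(v.(string)), &j); err != nil {
		errors = append(errors, fmt.Errorf("%q contains invalid JSON: %s", k, err))
	}
	return
}

// Hypothetical attachment point on the policy attribute.
var policySchema = &schema.Schema{
	Type:         schema.TypeString,
	Required:     true,
	ValidateFunc: validateJsonString,
}
```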