According to official Scaleway support, requests within the same session cannot be parallelized.
While I do not know for sure that this limitation applies only to writes, I've implemented it as a write-only limitation for now.
Previously, requests like this would produce a 500 Internal Server Error:
```
resource "scaleway_ip" "test_ip" {
  count = 2
}
```
Now this limitation should be lifted for all Scaleway resources.
When creating IPs concurrently, the Scaleway API starts to return 500 Internal Server Errors.
Since the error goes away when limiting concurrent requests, and since the Go net/http client is safe for concurrent use, I'm assuming this is an API error on Scaleway's side.
This changeset introduces a workaround so Terraform does not crash for now.
The workaround needs to be removed once Scaleway fixes their API.
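Conceptually, the workaround boils down to something like the following sketch (names are illustrative, not the provider's actual code): every mutating API call takes a process-wide lock, while reads stay fully concurrent.
```go
package scaleway

import "sync"

// writeMu serializes all mutating Scaleway API calls; reads are untouched.
var writeMu sync.Mutex

// serializeWrite runs a mutating API call while holding the global write
// lock, so at most one write request is in flight at any time.
func serializeWrite(call func() error) error {
	writeMu.Lock()
	defer writeMu.Unlock()
	return call()
}
```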
* provider/scaleway: fix bootscript tests
The bootscript tests were failing because the referenced bootscript is no
longer available.
For now this just makes the tests pass again; the next step should be to look up a
bootscript so we don't have to update the tests all the time.
* provider/scaleway: fix bootscript data source filter bug
When providing a name, the architecture was ignored, which can lead to
issues since some bootscript names are identical even though the architecture
is different.
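Roughly, the fix makes the filter look like this sketch; the `ScalewayBootscript` field names (`Title`, `Arch`) are assumptions about the scaleway SDK, not verified against it:
```go
// filterBootscripts applies both filters: the architecture check now runs
// even when a name filter is set, which was the bug.
// Assumes: import "strings" and the scaleway-cli api package.
func filterBootscripts(all []api.ScalewayBootscript, name, arch string) []api.ScalewayBootscript {
	var out []api.ScalewayBootscript
	for _, b := range all {
		if name != "" && !strings.Contains(strings.ToLower(b.Title), strings.ToLower(name)) {
			continue
		}
		if arch != "" && b.Arch != arch { // previously skipped when a name was given
			continue
		}
		out = append(out, b)
	}
	return out
}
```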
* provider/scaleway: remove data bootscript exact name test
The test fails after some time because Scaleway removes older bootscripts.
Let's just settle for filtered tests for now, which don't have this problem.
* Remove contradiction with Scaleway documentation
The parameters previously termed by Terraform:
1. Organization
2. Access key
Are referred to, respectively, by Scaleway [0] as:
1. Access key
2. Token
which is a confusing contradiction for a user.
Since Scaleway terms (1) both 'access key' [0] and 'organization ID' [1],
@nicolai86 suggested keeping the latter as already used, but changing (2) to
'token', removing the contradiction.
This commit thus changes the parameters to:
1. Organization
2. Token
Closes #10815.
[0] - https://cloud.scaleway.com/#/credentials
[1] - https://www.scaleway.com/docs/retrieve-my-organization-id-throught-the-api
* Update docs to reflect Scaleway offering x86
Scaleway now provides x86 servers [0] as well as ARM.
This commit removes 'ARM' from various references that suggested it might be the
only option.
[0] - https://blog.online.net/2016/03/08/c2-insanely-affordable-x64-servers/
The CI sees many failing Scaleway tests due to request quotas being exceeded.
This PR aims to address the issue by switching from `resource.Retry`, which
waits 100ms between retries, to `resource.StateChangeConf` with a configured
delay of 5s between retries.
This should help us fix the quota issue.
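For illustration, the polling now looks roughly like this (the states and the server lookup are placeholders; `StateChangeConf` and `WaitForState` are from Terraform's `helper/resource` package):
```go
func waitForServerState(scaleway *api.ScalewayAPI, serverID, target string) error {
	stateConf := &resource.StateChangeConf{
		Pending: []string{"starting", "stopping"}, // placeholder states
		Target:  []string{target},
		Refresh: func() (interface{}, string, error) {
			s, err := scaleway.GetServer(serverID)
			if err != nil {
				return nil, "", err
			}
			return s, s.State, nil
		},
		Timeout:    20 * time.Minute,
		Delay:      5 * time.Second, // wait before the first poll
		MinTimeout: 5 * time.Second, // and at least 5s between polls
	}
	_, err := stateConf.WaitForState()
	return err
}
```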
* provider/scaleway: increase wait for server time
According to the Scaleway community, shutdown/startup might actually take an
hour. Since a regular shutdown transfers data, this is bound by the size of the
actual volumes in use.
https://community.online.net/t/solving-the-long-shutdown-boot-when-only-needed-to-attach-detach-a-volume/326
Anyhow, 20 minutes seems quite optimistic, and we've seen some timeout errors in
the logs, too.
* provider/scaleway: clear cache on volume attachment
The volume attachment errors quite often, and while I have no hard evidence
(yet), I suspect it might be related to the cache that the official Scaleway SDK
includes.
For now this is just a tiny experiment, clearing the cache when creating/
destroying volume attachments. Let's see if this improves anything.
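The experiment is essentially just this; `ClearCache` is my assumption about the SDK's cache-flush surface, and the actual method may be named differently:
```go
// Drop the SDK's cached view of servers/volumes before mutating attachments,
// so the attach/detach logic never acts on stale data.
scaleway.ClearCache() // assumed SDK helper; exact name unverified
// ... proceed with the actual attach/detach call ...
```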
* provider/scaleway: guard against attaching already attached volumes
* provider/scaleway: use cheaper instance types for tests
Scaleway bills by the hour and C2S costs much more than C1; in the tests we
just spin up instances, only to destroy them again later anyway.
This PR fixes a flakiness in the `scaleway_volume_attachment` resource, as
described below:
When attaching/detaching a volume from a `scaleway_server`, the server needs to
be stopped. Even though the code already waits for the server to be stopped, the
`PatchServer` call gets a `400 server is being stopped or rebooted` error
response.
If the API returns the `400`, we bail, leaving Terraform in a broken state.
Assuming this is the only error that the API might return to us, as the payload
itself is correct, this retry behaviour should fix the issue.
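A sketch of what the retry could look like, using `resource.Retry` from Terraform's `helper/resource` package (matching on the error text is a simplification, and the SDK types and `PatchServer` signature are written from memory):
```go
// patchServerWithRetry keeps retrying while the server is still settling
// into its new power state, and fails fast on any other error.
func patchServerWithRetry(scaleway *api.ScalewayAPI, serverID string, definition api.ScalewayServerPatchDefinition) error {
	return resource.Retry(5*time.Minute, func() *resource.RetryError {
		err := scaleway.PatchServer(serverID, definition)
		if err == nil {
			return nil
		}
		if strings.Contains(err.Error(), "server is being stopped or rebooted") {
			return resource.RetryableError(err) // transient; keep waiting
		}
		return resource.NonRetryableError(err) // anything else is a real failure
	})
}
```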
\cc @stack72 PTAL
* provider/scaleway: fix scaleway_volume_attachment with count > 1
Since Scaleway requires servers to be powered off to attach volumes, we need
to make sure that we don't power down a server twice, or power up a server while
it's supposed to be modified.
Sadly, Terraform doesn't seem to sport serialization primitives for use cases like
this, but putting the code in question behind a `sync.Mutex` does the trick, too.
Fixes #9417.
* provider/scaleway: use mutexkv to lock per-resource
Following @dcharbonnier's suggestion. Thanks!
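With `mutexkv` (Terraform's `helper/mutexkv` package), the locking becomes per-server instead of one global lock; `withServerLock` is an illustrative wrapper, not the actual code:
```go
// serverLocks hands out one mutex per key, so operations on the same
// server serialize while different servers proceed in parallel.
var serverLocks = mutexkv.NewMutexKV()

func withServerLock(serverID string, f func() error) error {
	serverLocks.Lock(serverID) // blocks only if this server is already busy
	defer serverLocks.Unlock(serverID)
	return f()
}
```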
* provider/scaleway: cleanup waitForServerState signature
* provider/scaleway: store serverID in var
* provider/scaleway: correct imports
* provider/scaleway: increase timeouts
* provider/scaleway: speed up server deletion
Using `terminate` instead of `poweroff` leads to a faster shutdown.
Fixes #9430.
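Sketched, the change is just a different action on the same endpoint; `PostServerAction` is the scaleway SDK's generic server-action call:
```go
// "terminate" powers the server off and discards it in one step, skipping
// the slower archiving shutdown that "poweroff" performs.
if err := scaleway.PostServerAction(serverID, "terminate"); err != nil {
	return err
}
```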
* provider/scaleway: extract server shutdown code
The tests were referencing an old bootscript - this just bumps the value
to the latest. The list of bootscripts can be found at
http://devhub.scaleway.com/#/bootscripts
Bootscripts allow you to start Scaleway servers with a specific kernel version.
The `scaleway_server` resource has always had a bootscript parameter, and the
`scaleway_bootscript` data source allows you to look up bootscripts to be used in
conjunction with the `scaleway_server` resource.
* provider/scaleway: update api version
* provider/scaleway: expose ipv6 support, rename ip attributes
Since the address can be both IPv4 and IPv6, choose a more generic name.
* provider/scaleway: allow servers in different SGs
* provider/scaleway: update documentation
* provider/scaleway: Update docs with security group
* provider/scaleway: add testcase for server security groups
* provider/scaleway: make deleting of security rules more resilient
* provider/scaleway: make deletion of security group more resilient
* provider/scaleway: guard against missing server
* Add scaleway provider
This PR allows the entire Scaleway stack to be managed with Terraform.
Example usage looks like this:
```
provider "scaleway" {
api_key = "snap"
organization = "snip"
}
resource "scaleway_ip" "base" {
server = "${scaleway_server.base.id}"
}
resource "scaleway_server" "base" {
name = "test"
# ubuntu 14.04
image = "aecaed73-51a5-4439-a127-6d8229847145"
type = "C2S"
}
resource "scaleway_volume" "test" {
name = "test"
size_in_gb = 20
type = "l_ssd"
}
resource "scaleway_volume_attachment" "test" {
server = "${scaleway_server.base.id}"
volume = "${scaleway_volume.test.id}"
}
resource "scaleway_security_group" "base" {
name = "public"
description = "public gateway"
}
resource "scaleway_security_group_rule" "http-ingress" {
security_group = "${scaleway_security_group.base.id}"
action = "accept"
direction = "inbound"
ip_range = "0.0.0.0/0"
protocol = "TCP"
port = 80
}
resource "scaleway_security_group_rule" "http-egress" {
security_group = "${scaleway_security_group.base.id}"
action = "accept"
direction = "outbound"
ip_range = "0.0.0.0/0"
protocol = "TCP"
port = 80
}
```
Note that volume attachments require the server to be stopped, which can lead to
downtime if you attach new volumes to servers that are already in use.
* Update IP read to handle 404 gracefully
* Read back resource on update
* Ensure IP detachment works as expected
Sadly, this is not part of the official Scaleway API just yet.
* Adjust detachIP helper
Based on feedback from @QuentinPerez in
https://github.com/scaleway/scaleway-cli/pull/378
* Cleanup documentation
* Rename api_key to access_key
Following @stack72's suggestion, rename the provider's api_key for more clarity.
* Make tests less chatty by using custom logger