* provider/scaleway: increase wait-for-server time
According to the Scaleway community, shutdown/startup might actually take up to an hour: a regular shutdown transfers data, so the duration is bound by the size of the volumes in use.
https://community.online.net/t/solving-the-long-shutdown-boot-when-only-needed-to-attach-detach-a-volume/326
In that light, 20 minutes seems quite optimistic, and we've seen some timeout errors in the logs, too.
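A minimal sketch of the longer polling loop; the function name, the state callback, and the poll interval are illustrative, not the provider's actual signature:
```
package scaleway

import (
	"fmt"
	"time"
)

// waitForState polls getState until it reports the desired value or the
// deadline passes. The generous timeout reflects the worst-case shutdown
// time reported by the Scaleway community.
func waitForState(getState func() (string, error), desired string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := getState()
		if err != nil {
			return err
		}
		if state == desired {
			return nil
		}
		time.Sleep(10 * time.Second)
	}
	return fmt.Errorf("timed out waiting for state %q", desired)
}
```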
* provider/scaleway: clear cache on volume attachment
The volume attachment fails quite often, and while I have no hard evidence (yet), I suspect it is related to the cache that the official Scaleway SDK includes.
For now this is just a tiny experiment: clear the cache when creating or destroying volume attachments, and see whether that improves anything.
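The shape of the experiment, sketched below; `clearCache` is a stand-in for whatever flush call the SDK's cache actually exposes, not a confirmed API:
```
package scaleway

// withFreshCache clears the SDK cache before and after fn, so the
// attachment logic never acts on stale volume/server records.
// clearCache is a hypothetical hook; the real SDK call may differ.
func withFreshCache(clearCache func() error, fn func() error) error {
	if err := clearCache(); err != nil {
		return err
	}
	if err := fn(); err != nil {
		return err
	}
	return clearCache()
}
```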
* provider/scaleway: guard against attaching already attached volumes
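Roughly, the guard looks like this; the pared-down `volume` type and its `Server` field are illustrative stand-ins for the SDK's actual structs:
```
package scaleway

import "fmt"

// volume is a pared-down stand-in for the SDK's volume type.
type volume struct {
	ID     string
	Server *string // ID of the server the volume is attached to, if any
}

// guardUnattached refuses to attach a volume that already belongs to a
// server, surfacing a clear error instead of an opaque API failure.
func guardUnattached(v volume) error {
	if v.Server != nil {
		return fmt.Errorf("volume %s is already attached to server %s", v.ID, *v.Server)
	}
	return nil
}
```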
* provider/scaleway: use cheaper instance types for tests
Scaleway bills by the hour, and C2S costs much more than C1; since the tests just spin up instances only to destroy them again, the cheaper type is sufficient.
* provider/scaleway: fix scaleway_volume_attachment with count > 1
Since Scaleway requires servers to be powered off before volumes can be attached, we need to make sure that we don't power down a server twice, or power up a server while it's being modified.
Sadly, Terraform doesn't seem to offer serialization primitives for use cases like this, but putting the code in question behind a `sync.Mutex` does the trick, too (see the sketch below).
Fixes #9417
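The shape of the fix, with stand-in callbacks for the provider's shutdown, attachment, and boot steps:
```
package scaleway

import "sync"

// attachMu serializes the poweroff -> attach -> poweron sequence so that
// multiple scaleway_volume_attachment resources (count > 1) can't power
// the same server down, or up, concurrently.
var attachMu sync.Mutex

func withServerStopped(stop, act, start func() error) error {
	attachMu.Lock()
	defer attachMu.Unlock()
	if err := stop(); err != nil {
		return err
	}
	if err := act(); err != nil {
		return err
	}
	return start()
}
```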
* provider/scaleway: use mutexkv to lock per-resource
following @dcharbonnier suggestion. thanks!
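A sketch of the keyed locking, using Terraform's helper/mutexkv package; the wrapper function is illustrative:
```
package scaleway

import "github.com/hashicorp/terraform/helper/mutexkv"

// mu locks per server ID instead of globally, so attachments touching
// different servers can still run in parallel.
var mu = mutexkv.NewMutexKV()

func withServerLock(serverID string, fn func() error) error {
	mu.Lock(serverID)
	defer mu.Unlock(serverID)
	return fn()
}
```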
* provider/scaleway: clean up waitForServerState signature
* provider/scaleway: store serverID in var
* provider/scaleway: correct imports
* provider/scaleway: increase timeouts
* provider/scaleway: speed up server deletion
Using `terminate` instead of `poweroff` leads to a faster shutdown (see the sketch below).
Fixes #9430
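A sketch of the faster deletion path; `postServerAction` is a hypothetical helper wrapping the API's server action endpoint:
```
package scaleway

// deleteServer uses the "terminate" action, which stops and releases the
// server in one step, instead of waiting for a clean "poweroff".
func deleteServer(postServerAction func(serverID, action string) error, serverID string) error {
	return postServerAction(serverID, "terminate")
}
```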
* provider/scaleway: extract server shutdown code
* Add scaleway provider
This PR allows the entire Scaleway stack to be managed with Terraform.
Example usage looks like this:
```
provider "scaleway" {
  api_key      = "snap"
  organization = "snip"
}

resource "scaleway_ip" "base" {
  server = "${scaleway_server.base.id}"
}

resource "scaleway_server" "base" {
  name = "test"

  # ubuntu 14.04
  image = "aecaed73-51a5-4439-a127-6d8229847145"
  type  = "C2S"
}

resource "scaleway_volume" "test" {
  name       = "test"
  size_in_gb = 20
  type       = "l_ssd"
}

resource "scaleway_volume_attachment" "test" {
  server = "${scaleway_server.base.id}"
  volume = "${scaleway_volume.test.id}"
}

resource "scaleway_security_group" "base" {
  name        = "public"
  description = "public gateway"
}

resource "scaleway_security_group_rule" "http-ingress" {
  security_group = "${scaleway_security_group.base.id}"

  action    = "accept"
  direction = "inbound"
  ip_range  = "0.0.0.0/0"
  protocol  = "TCP"
  port      = 80
}

resource "scaleway_security_group_rule" "http-egress" {
  security_group = "${scaleway_security_group.base.id}"

  action    = "accept"
  direction = "outbound"
  ip_range  = "0.0.0.0/0"
  protocol  = "TCP"
  port      = 80
}
```
Note that volume attachments require the server to be stopped, which can lead to downtime if you attach new volumes to servers that are already in use.
* Update IP read to handle 404 gracefully
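The usual Terraform pattern applies here: on a 404 the resource is dropped from state instead of failing the refresh. A sketch, with `fetchIP` as a hypothetical helper and names illustrative:
```
package scaleway

import "github.com/hashicorp/terraform/helper/schema"

func readIP(d *schema.ResourceData, fetchIP func(id string) (found bool, err error)) error {
	found, err := fetchIP(d.Id())
	if err != nil {
		return err
	}
	if !found {
		// the IP vanished out of band; clearing the ID makes
		// Terraform plan a re-create instead of erroring
		d.SetId("")
	}
	return nil
}
```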
* Read back resource on update
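In sketch form (names illustrative): Update finishes by delegating to Read, so state reflects what the API actually applied rather than what we sent:
```
package scaleway

import "github.com/hashicorp/terraform/helper/schema"

func resourceRead(d *schema.ResourceData, meta interface{}) error {
	// ... fetch the resource and write its attributes into state ...
	return nil
}

func resourceUpdate(d *schema.ResourceData, meta interface{}) error {
	// ... apply the changes via the API ...
	return resourceRead(d, meta)
}
```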
* Ensure IP detachment works as expected
Sadly, this is not part of the official Scaleway API just yet.
* Adjust detachIP helper
based on feedback from @QuentinPerez in
https://github.com/scaleway/scaleway-cli/pull/378
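A heavily hedged sketch of the helper, following the approach discussed in that PR: update the IP with its server reference omitted. `putIP` and the payload shape are assumptions, not the confirmed API:
```
package scaleway

// detachIP clears an IP's server association by re-submitting the IP
// without a server field. putIP is a hypothetical wrapper around the
// raw HTTP call; the real endpoint and payload may differ.
func detachIP(putIP func(id string, body map[string]interface{}) error, id, address, organization string) error {
	return putIP(id, map[string]interface{}{
		"id":           id,
		"address":      address,
		"organization": organization,
		// "server" intentionally omitted to detach the IP
	})
}
```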
* Cleanup documentation
* Rename api_key to access_key
Following @stack72's suggestion, rename the provider's api_key argument to access_key for more clarity.
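In the provider schema this amounts to renaming the argument; a sketch using Terraform's schema package, with the description text illustrative:
```
package scaleway

import "github.com/hashicorp/terraform/helper/schema"

func providerSchema() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"access_key": {
			Type:        schema.TypeString,
			Required:    true,
			Description: "The access key for API operations (formerly api_key).",
		},
	}
}
```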
* Make tests less chatty by using a custom logger