* provider/scaleway: increase wait for server time
According to the Scaleway community, a shutdown/startup cycle can actually take up to an hour: a regular shutdown transfers data, so the duration is bound by the size of the volumes actually in use.
https://community.online.net/t/solving-the-long-shutdown-boot-when-only-needed-to-attach-detach-a-volume/326
Either way, 20 minutes seems quite optimistic, and we have seen timeout errors in
the logs as well; a rough sketch of the longer wait follows below.
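A minimal sketch of the longer wait in plain Go, with a hypothetical `fetchServerState` helper standing in for the provider's actual Scaleway API call:

```go
// A minimal sketch of the longer wait, assuming a hypothetical
// fetchServerState helper in place of the real Scaleway API call.
package scaleway

import (
	"fmt"
	"time"
)

// fetchServerState is a stand-in for the provider's API call that returns
// the current server state ("running", "stopped", ...).
func fetchServerState(serverID string) (string, error) {
	return "stopped", nil // stubbed out for this sketch
}

// waitForServerState polls until the server reaches the target state or the
// deadline expires. The deadline is one hour instead of the previous
// 20 minutes, since a shutdown may transfer the full contents of the volumes.
func waitForServerState(serverID, target string) error {
	deadline := time.Now().Add(1 * time.Hour)
	for time.Now().Before(deadline) {
		state, err := fetchServerState(serverID)
		if err != nil {
			return err
		}
		if state == target {
			return nil
		}
		time.Sleep(10 * time.Second)
	}
	return fmt.Errorf("timeout waiting for server %s to reach state %q", serverID, target)
}
```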
* provider/scaleway: clear cache on volume attachment
Volume attachment fails quite often, and while there is no hard evidence (yet),
it might be related to the cache that the official Scaleway SDK includes.
For now this is just a small experiment: clear the cache when creating or
destroying volume attachments and see whether that really improves anything
(a sketch of the idea follows below).
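A sketch of the cache-clearing experiment. The `scalewayClient` type and its `ClearCache`/`AttachVolume` methods are hypothetical stand-ins for the official SDK; the real cache API may look different.

```go
package scaleway

import "fmt"

// scalewayClient is a hypothetical stand-in for the official SDK client.
type scalewayClient struct{}

// ClearCache drops any cached server/volume state (assumed to exist in the SDK).
func (c *scalewayClient) ClearCache() {}

// AttachVolume stands in for the SDK call that attaches a volume to a server.
func (c *scalewayClient) AttachVolume(serverID, volumeID string) error { return nil }

func createVolumeAttachment(c *scalewayClient, serverID, volumeID string) error {
	// Clear the SDK cache first so the attachment operates on fresh state.
	c.ClearCache()
	if err := c.AttachVolume(serverID, volumeID); err != nil {
		return fmt.Errorf("attaching volume %s to server %s: %v", volumeID, serverID, err)
	}
	return nil
}
```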
* provider/scaleway: guard against attaching already attached volumes
* provider/scaleway: use cheaper instance types for tests
Scaleway bills by the hour and C2S costs much more than C1; since the tests just
spin up instances only to destroy them again later, the cheaper type is enough.
It looks like IAM certificates sometimes take a while to propagate, which can
cause errors on ALB listener creation.
Possibly same thing as hashicorp/terraform#5178, but for ALB
now instead of ELB.
This was discovered via acceptance tests, specifically the
TestAccAWSALBListener_https test. The creation process was updated to retry
on CertificateNotFound for a maximum of 5 minutes.
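A hedged sketch of the retry using Terraform's `helper/resource.Retry`; the exact error-code string ("CertificateNotFound") and the shape of the listener call are assumptions rather than a verbatim copy of the provider code.

```go
package alb

import (
	"time"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/elbv2"
	"github.com/hashicorp/terraform/helper/resource"
)

// createListenerWithRetry retries listener creation while the IAM certificate
// propagates. The "CertificateNotFound" error code is an assumption here.
func createListenerWithRetry(conn *elbv2.ELBV2, input *elbv2.CreateListenerInput) (*elbv2.CreateListenerOutput, error) {
	var out *elbv2.CreateListenerOutput
	// Wait up to 5 minutes for the certificate to become visible to ELBv2.
	err := resource.Retry(5*time.Minute, func() *resource.RetryError {
		var err error
		out, err = conn.CreateListener(input)
		if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "CertificateNotFound" {
			return resource.RetryableError(err) // not propagated yet, try again
		}
		if err != nil {
			return resource.NonRetryableError(err)
		}
		return nil
	})
	return out, err
}
```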
UniqueId attempted to provide an ordered unique id by using a nanosecond
timestamp, but did not take into account that time is not monotonically
increasing. This provides an implementation that is always increasing.
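A minimal sketch of an always-increasing id: a mutex-guarded value seeded from the clock, bumped by one whenever the clock has not advanced. The `terraform-` prefix and exact formatting are assumptions of this sketch.

```go
package uniqueid

import (
	"fmt"
	"sync"
	"time"
)

var (
	idMutex sync.Mutex
	lastID  int64
)

// UniqueID returns a strictly increasing identifier even if the wall clock
// stalls or moves backwards between calls.
func UniqueID() string {
	idMutex.Lock()
	defer idMutex.Unlock()

	now := time.Now().UnixNano()
	if now <= lastID {
		// The clock did not move forward; bump the previous value instead.
		now = lastID + 1
	}
	lastID = now
	return fmt.Sprintf("terraform-%d", now)
}
```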
Ensure that each instance of BasicGraphBuilder gets a name corresponding
to the Builder that created it. This allows us to differentiate the
graphs in the logs.
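A small sketch of the idea; the real BasicGraphBuilder has more fields, and the log format shown here is only illustrative.

```go
package terraform

import "log"

// BasicGraphBuilder (heavily simplified) carries a Name so log output can be
// attributed to the Builder that created it.
type BasicGraphBuilder struct {
	Name string
}

func (b *BasicGraphBuilder) logStep(step string) {
	// Prefixing with the name makes it possible to tell the plan, apply,
	// destroy, etc. graphs apart in the trace logs.
	log.Printf("[TRACE] %s: %s", b.Name, step)
}
```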
Fixes #10125
If the elements are computed and the field is ForceNew, then we should
mark the computed count as potentially forcing a new operation.
Example, assuming `groups` forces new...
**Step 1:**
groups = ["1", "2", "3"]
At this point, the resource isn't created yet, so this should result in a
diff like:
CREATE resource:
groups: "" => ["1", "2", "3"]
**Step 2:**
groups = ["${computedvar}"]
The OLD behavior was:
UPDATE resource
groups.#: "3" => "computed"
This would cause a diff mismatch, because if `${computedvar}` turned out to be
different it should force a new resource. The NEW behavior is:
DESTROY/CREATE resource:
groups.#: "3" => "computed" (forces new)
This doesn't cause any practical issues as far as I'm aware (I couldn't
get any test to fail), but it caused shadow errors since it didn't match
the prior behavior.
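A toy sketch of the rule, not the actual helper/schema code: when the new element count is computed and the field is ForceNew, the count diff is marked as forcing a new resource.

```go
package diffsketch

// attrDiff is a simplified stand-in for the real attribute diff structure.
type attrDiff struct {
	Old         string
	NewComputed bool
	RequiresNew bool
}

// countDiff builds the diff for a count attribute such as "groups.#".
func countDiff(oldCount string, newComputed, forceNew bool) attrDiff {
	d := attrDiff{Old: oldCount, NewComputed: newComputed}
	if newComputed && forceNew {
		// The computed count may differ from the old one, so the safe plan
		// is a destroy/create rather than an in-place update.
		d.RequiresNew = true
	}
	return d
}
```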
Fixes #10122
The simple fix was that we forgot to close `ReadDataApply` for the
provider. But I've always felt that this section of the code was brittle
and I wanted to put in a more robust solution. The `shadow.Close` method
uses reflection to automatically close all values.
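A simplified sketch of reflection-based closing, assuming the values of interest implement `io.Closer`; the real `shadow.Close` handles more cases than shown here.

```go
package shadow

import (
	"io"
	"reflect"
)

// Close walks the exported fields of a struct (or a pointer to one) and
// calls Close on every field value that implements io.Closer.
func Close(v interface{}) error {
	rv := reflect.ValueOf(v)
	if rv.Kind() == reflect.Ptr {
		rv = rv.Elem()
	}
	if rv.Kind() != reflect.Struct {
		return nil
	}
	for i := 0; i < rv.NumField(); i++ {
		field := rv.Field(i)
		if !field.CanInterface() {
			continue // unexported fields cannot be accessed reflectively
		}
		if closer, ok := field.Interface().(io.Closer); ok {
			if err := closer.Close(); err != nil {
				return err
			}
		}
	}
	return nil
}
```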
This encodes vertex debug information into the graph log when a vertex
is visited during a walk operation. These entries can be ordered to show how
the graph was walked.
Add a mutex to the encoder so it can be used during a parallel walk.
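A minimal sketch of the locked encoder, assuming a JSON-lines log format; the actual debug encoder may differ.

```go
package dag

import (
	"encoding/json"
	"io"
	"sync"
)

// encoder serializes debug entries to a single writer. The mutex makes it
// safe to call Encode from many goroutines during a parallel walk.
type encoder struct {
	mu  sync.Mutex
	enc *json.Encoder
}

func newEncoder(w io.Writer) *encoder {
	return &encoder{enc: json.NewEncoder(w)}
}

// Encode writes one debug entry (for example, a visited vertex) as a JSON line.
func (e *encoder) Encode(entry interface{}) error {
	e.mu.Lock()
	defer e.mu.Unlock()
	return e.enc.Encode(entry)
}
```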
Moved the string literals used for marshaling into predefined constants and
did some renaming to make the marshal* structures more consistent.