Move the S3 state from a legacy remote state to an official backend.
This increases test coverage, uses a fixed schema for configuration, and
will allow new backend features, e.g. "environments", to be implemented
for the S3 state.
This adds a "lock" config (default true) to allow users to optionally
disable state locking with Consul. This is needed when the given token
doesn't have session permission, and it also preserves backwards
compatibility.
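A minimal sketch of how that flag could be declared, assuming the
helper/schema package used elsewhere in Terraform (the surrounding
backend wiring is omitted):

    package sketch

    import "github.com/hashicorp/terraform/helper/schema"

    // consulLockSchema sketches the new "lock" flag; defaulting to
    // true keeps locking enabled unless a user opts out.
    func consulLockSchema() map[string]*schema.Schema {
        return map[string]*schema.Schema{
            "lock": {
                Type:        schema.TypeBool,
                Optional:    true,
                Default:     true,
                Description: "lock state access with a Consul session",
            },
        }
    }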
Give LockInfo a Marshal method for easy serialization, and a String
method for more readable output.
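A rough sketch of those two methods; the LockInfo fields here are
assumptions for illustration, not the definitive struct:

    package sketch

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // LockInfo describes a held lock; this field set is assumed.
    type LockInfo struct {
        ID      string
        Who     string
        Created time.Time
    }

    // Marshal serializes the lock info for storage with the lock.
    func (l *LockInfo) Marshal() []byte {
        js, err := json.Marshal(l)
        if err != nil {
            panic(err) // marshaling a plain struct cannot fail
        }
        return js
    }

    // String renders the lock info for human-readable errors.
    func (l *LockInfo) String() string {
        return fmt.Sprintf("Lock Info:\n  ID: %s\n  Who: %s\n  Created: %s",
            l.ID, l.Who, l.Created)
    }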
Have the state.Locker implementations use LockError where possible, so
callers get back both the holder's LockInfo and an error.
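Continuing the sketch above, a LockError can pair the two; again an
assumption about the shape, not the exact type in the source:

    // LockError carries the underlying error along with the
    // LockInfo of the current holder, when it is known.
    type LockError struct {
        Info *LockInfo
        Err  error
    }

    func (e *LockError) Error() string {
        if e.Info != nil {
            return fmt.Sprintf("%s\n%s", e.Err, e.Info)
        }
        return e.Err.Error() // callers are expected to set Err
    }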
Have LocalState store and check the lock ID, and strictly enforce
unlocking with the correct ID.
This isn't required for local lock correctness, as we track the file
descriptor to unlock, but it does provide a verification that locking
and unlocking is done correctly throughout Terraform.
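A sketch of that strict check, assuming a LocalState that records the
ID it handed out from Lock:

    type LocalState struct {
        lockID string
        // ...plus the open *os.File used for the OS-level lock
    }

    // Unlock refuses to release the lock unless the caller presents
    // the same ID that Lock returned.
    func (s *LocalState) Unlock(id string) error {
        if s.lockID == "" {
            return fmt.Errorf("LocalState not locked")
        }
        if id != s.lockID {
            return fmt.Errorf("lock id %q does not match existing lock %q",
                id, s.lockID)
        }
        s.lockID = ""
        // ...release the OS-level file lock here...
        return nil
    }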
During backend initialization, especially during a migration, there is a
chance that an existing state could be overwritten.
Attempt to get a lock when writing the new state. It would be nice to
always hold a lock when reading the states as well, but the recursive
structure of the Meta.Backend config functions makes that quite complex.
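Illustratively, the write path can guard itself roughly like this; the
helper name and the Locker signatures are assumptions for the sketch:

    import (
        "fmt"

        "github.com/hashicorp/terraform/state"
        "github.com/hashicorp/terraform/terraform"
    )

    // writeMigratedState locks the destination state, when the
    // implementation supports locking, before overwriting anything.
    func writeMigratedState(dst state.State, tfState *terraform.State) error {
        if locker, ok := dst.(state.Locker); ok {
            id, err := locker.Lock("backend migration")
            if err != nil {
                return fmt.Errorf("error locking state: %s", err)
            }
            defer locker.Unlock(id)
        }
        if err := dst.WriteState(tfState); err != nil {
            return err
        }
        return dst.PersistState()
    }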
* Enable remote S3 state support for assume role
- provide role_arn in backend config to enable assume role
Fixes #8739
* Check for errors after obtaining credentials
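Under the hood the AWS SDK's STS credential provider handles the
assuming; a hedged sketch (the helper name and wiring are illustrative,
not the backend's actual code):

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws/credentials/stscreds"
        "github.com/aws/aws-sdk-go/aws/session"
    )

    // credsForRole builds assume-role credentials and forces a
    // fetch immediately, so a bad role_arn or missing permissions
    // fail fast instead of on the first S3 request.
    func credsForRole(roleARN string) error {
        sess := session.Must(session.NewSession())
        creds := stscreds.NewCredentials(sess, roleARN)
        if _, err := creds.Get(); err != nil {
            return fmt.Errorf("error obtaining credentials: %s", err)
        }
        return nil
    }

    func main() {
        if err := credsForRole("arn:aws:iam::123456789012:role/terraform"); err != nil {
            log.Fatal(err)
        }
    }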
Close and remove the file descriptor from LocalState when we Unlock the
state. Also remove the state file if we created it and it was never
written to. This is mostly to clean up after tests, but it doesn't hurt
to avoid leaving empty files around.
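Extending the LocalState sketch above with two assumed fields
(stateFileOut *os.File and a created flag), the cleanup might look like:

    // unlockCleanup closes the handle and removes the state file
    // when we created it and nothing was ever written (needs "os").
    func (s *LocalState) unlockCleanup() error {
        fi, statErr := s.stateFileOut.Stat()
        name := s.stateFileOut.Name()
        s.stateFileOut.Close()
        s.stateFileOut = nil

        if statErr == nil && fi.Size() == 0 && s.created {
            return os.Remove(name)
        }
        return nil
    }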
This makes it more apparent that the information passed in isn't
required, nor will it conform to any standard. There may be call sites
that can't provide good contextual info, and we don't want to count on
that value.
The old behavior in this situation was to simply delete the file. Since
we now have a lock on this file we don't want to close or delete it, so
instead truncate the file at offset 0.
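In code that's a truncate plus a rewind on the already-locked handle;
a sketch:

    // blankFile empties the state file in place; closing or
    // deleting it would drop the OS lock we're holding.
    func (s *LocalState) blankFile() error {
        if err := s.stateFileOut.Truncate(0); err != nil {
            return err
        }
        // rewind so the next write starts at the beginning
        _, err := s.stateFileOut.Seek(0, 0)
        return err
    }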
Fix a number of related tests.
Use a DynamoDB table to coordinate state locking in S3.
We use a simple strategy here, using the bucket/key path of the state
file as the lock key. If that key already exists, the lock fails.
TODO: decide if locks should automatically be expired, or require manual
intervention.
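The key-as-lock strategy above can be expressed as a single conditional
PutItem; a sketch assuming a table whose hash key is a LockID string
attribute:

    package sketch

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/dynamodb"
    )

    // lockState writes the bucket/key of the state file into the
    // lock table, failing if another process already holds it.
    func lockState(table, bucket, key string) error {
        svc := dynamodb.New(session.Must(session.NewSession()))
        _, err := svc.PutItem(&dynamodb.PutItemInput{
            TableName: aws.String(table),
            Item: map[string]*dynamodb.AttributeValue{
                "LockID": {S: aws.String(fmt.Sprintf("%s/%s", bucket, key))},
            },
            // The put only succeeds if no item with this key exists.
            ConditionExpression: aws.String("attribute_not_exists(LockID)"),
        })
        return err
    }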
In order to provide locking for remote states, the Cache state and
remote.State need to expose Lock and Unlock methods. The actual locking
will be done by the remote.Client, which can implement the same
state.Locker methods.
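A sketch of that delegation in remote.State, assuming its Client field
and the Locker interface sketched at the end of this section; clients
that don't implement it simply get no-op locking:

    // Lock delegates to the remote client when it supports locking.
    func (s *State) Lock(info string) (string, error) {
        if c, ok := s.Client.(Locker); ok {
            return c.Lock(info)
        }
        return "", nil
    }

    // Unlock mirrors Lock, forwarding the ID to the client.
    func (s *State) Unlock(id string) error {
        if c, ok := s.Client.(Locker); ok {
            return c.Unlock(id)
        }
        return nil
    }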
Add the Lock/Unlock methods to LocalState and BackupState.
The implementation for LocalState will be platform specific. We will use
OS-native locking on the state files, specifically locking whichever
state file we intend to write to.
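On Unix-like systems one option is flock(2); Windows needs LockFileEx
instead. A hedged sketch of the Unix side:

    //go:build !windows

    package sketch

    import (
        "os"
        "syscall"
    )

    // lockFile takes an exclusive, non-blocking lock on the file we
    // intend to write; it fails at once if another process holds it.
    func lockFile(f *os.File) error {
        return syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB)
    }

    func unlockFile(f *os.File) error {
        return syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
    }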
Changed from state.StateLocker to remove the stutter.
State implementations can provide Lock/Unlock methods to lock the state
file. Remote clients can also provide these same methods, which will be
called through remote.State.
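The renamed interface might read roughly like this; the exact
signatures are an assumption, chosen to match the lock-ID behavior
described above:

    // Locker is implemented by states (and remote clients) that can
    // hold a lock around reads and writes.
    type Locker interface {
        // Lock takes free-form context for error messages and
        // returns an ID that must be passed back to Unlock.
        Lock(info string) (string, error)
        Unlock(id string) error
    }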