The Consul KV store limits the size of values to 524288 bytes. Once the state
reaches this limit, Consul will refuse to save it. It is currently possible to
try to bypass this limitation by enabling Gzip, but the issue will just
manifest itself later. This is particularly inconvenient as the state can reach
this limit without any change to the Terraform configuration: data sources or
computed attributes can suddenly return more data than they used to. Several
users have already run into this.
To fix the problem once and for all, we now split the payload into chunks of
524288 bytes when it is too large and store them separately in the KV store. A
small JSON payload that references all the chunks is stored in their place, so
we can retrieve them later and concatenate them to reconstruct the payload.
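A minimal sketch of the chunking scheme in Go (helper and field names here are illustrative, not the actual implementation):

```go
import "encoding/json"

// chunkSize is Consul's limit on the size of a KV value, in bytes.
const chunkSize = 524288

// split cuts a serialized state into chunks that each fit under the
// KV size limit.
func split(payload []byte) [][]byte {
	var chunks [][]byte
	for len(payload) > 0 {
		n := len(payload)
		if n > chunkSize {
			n = chunkSize
		}
		chunks = append(chunks, payload[:n])
		payload = payload[n:]
	}
	return chunks
}

// chunkedState is the small JSON document stored at the state path,
// listing the KV keys that hold the chunks (field names illustrative).
type chunkedState struct {
	Chunks []string `json:"chunks"`
}

// indexPayload builds the reference payload from the chunk keys.
func indexPayload(chunkKeys []string) ([]byte, error) {
	return json.Marshal(chunkedState{Chunks: chunkKeys})
}
```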
While this has the caveat of requiring multiple calls to Consul that cannot be
done as a single transaction, as transactions are subject to the same size
limit, we use unique paths for the chunks and a check-and-set (CAS) operation
when setting the last payload, so possible failures during calls to Put()
should not result in unreadable states.
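The final write could look roughly like this, using the check-and-set support in the real github.com/hashicorp/consul/api client (function and key names are illustrative):

```go
import (
	"errors"

	"github.com/hashicorp/consul/api"
)

// putIndex writes the small JSON index with check-and-set, so readers
// either see the previous complete state or the new one, never a
// partial write. lastIndex is the ModifyIndex observed at read time.
func putIndex(client *api.Client, key string, payload []byte, lastIndex uint64) error {
	ok, _, err := client.KV().CAS(&api.KVPair{
		Key:         key,
		Value:       payload,
		ModifyIndex: lastIndex,
	}, nil)
	if err != nil {
		return err
	}
	if !ok {
		return errors.New("state changed during the update, aborting")
	}
	return nil
}
```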
Closes https://github.com/hashicorp/terraform/issues/19182
When the path ends with / (e.g. `path = "tfstate/"`), the lock
path used will contain two consecutive slashes (e.g. `tfstate//.lock`), which
Consul does not accept.
This changes the lock path so it is sanitized to `tfstate/.lock`.
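A minimal sketch of the sanitization (the real code may differ):

```go
import "strings"

// lockPath builds the lock key from the configured state path,
// trimming trailing slashes so "tfstate" and "tfstate/" both
// yield "tfstate/.lock".
func lockPath(statePath string) string {
	return strings.TrimRight(statePath, "/") + "/.lock"
}
```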
If the user has two different Terraform projects, one with `path = "tfstate"`
and the other with `path = "tfstate/"`, the paths for the locks will now be the
same, which will be confusing as locking one project will lock both. I wish it
were possible to forbid trailing slashes altogether, but doing so would require
all users who currently have a trailing slash in their path to manually move
their Terraform state, which would be a poor user experience.
Closes https://github.com/hashicorp/terraform/issues/15747
When locking was enabled with the Consul backend and the lock was not properly
released, the `terraform force-unlock <lock_id>` command would do nothing, as
its implementation exited early in that case.
It now destroys the session that created the lock and cleans up both the lock
and the lock-info keys.
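Roughly, with the real consul api client (a sketch only; the real method also validates the given lock ID against the stored lock info):

```go
import "github.com/hashicorp/consul/api"

// forceUnlock destroys the session holding the lock, then removes the
// lock and lock-info keys.
func forceUnlock(client *api.Client, path string) error {
	pair, _, err := client.KV().Get(path+"/.lock", nil)
	if err != nil {
		return err
	}
	if pair != nil && pair.Session != "" {
		// Destroying the session releases the lock it holds.
		if _, err := client.Session().Destroy(pair.Session, nil); err != nil {
			return err
		}
	}
	if _, err := client.KV().Delete(path+"/.lock", nil); err != nil {
		return err
	}
	_, err = client.KV().Delete(path+"/.lockinfo", nil)
	return err
}
```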
A regression test is added to TestConsul_destroyLock() to catch the issue if it
happens again.
Closes https://github.com/hashicorp/terraform/issues/22174
Most of the state package has been deprecated by the states package.
This PR replaces all the references to the old state package that
can be replaced simply - the low-hanging fruit.
* states: move state.Locker to statemgr
The state.Locker interface was a wrapper around a statemgr.Full, so
moving this was relatively straightforward.
* command: remove unnecessary use of state package for writing local terraform state files
* move state.LocalState into terraform package
state.LocalState is responsible for managing terraform.States, so it
made sense (to me) to move it into the terraform package.
* slight change of heart: move state.LocalState into clistate instead of
terraform
Simplify the use of clistate.Lock by creating a clistate.Locker
instance, which stores the context of locking a state, to allow unlock
to be called without knowledge of how the state was locked.
This allows the backend code to bring the needed UI methods to the point
where the state is locked, and still unlock the state from an outer
scope.
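A rough sketch of the shape of that interface (signatures here are hypothetical, not the exact clistate API):

```go
import "github.com/hashicorp/terraform/states/statemgr"

// Locker remembers how a state was locked, so Unlock needs no
// arguments from the caller.
type Locker interface {
	// Lock acquires the lock on the given state manager and stores
	// the resulting lock ID internally.
	Lock(s statemgr.Full, reason string) error
	// Unlock releases the lock using the stored context.
	Unlock() error
}
```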
Add a way to inject network errors by setting an immediate deadline on
open consul connections. The consul client currently doesn't retry on
some errors, and will force us to lose our lock.
Once the consul api client is fixed, this test will fail.
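The injection can be as simple as this sketch, assuming the test server keeps track of its open net.Conn values (names here are illustrative):

```go
import (
	"net"
	"time"
)

// breakConnections sets an already-expired deadline on every tracked
// connection, so the next read or write fails with a timeout error
// without the connection being closed cleanly.
func breakConnections(conns []net.Conn) {
	for _, conn := range conns {
		_ = conn.SetDeadline(time.Now())
	}
}
```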
Updated the vendored consul, which no longer requires the channel adapter
to convert a `chan struct{}` to a `<-chan struct{}`.
Call testutil.NewTestServerConfigT with the new signature.
When a consul lock is lost, there is a possibility that the associated
session is still active. Most commonly, the long request to watch the
lock key may error out, while the session is continually refreshed at a
rate of TTL/2.
First have the lock monitor retry the lock internally for at least 10
seconds (5 attempts with the default 2 second wait time). In most cases
this will reconnect on the first try, keeping the lock channel open.
If the consul lock can't recover itself, then cancel the session as soon
as possible (terminating the PeriodicRenew will call Session.Destroy),
and start over. In the worst case, the consul agents were split, and the
session still exists on the leader, so we may need to wait for the old
session TTL, plus the LockWait time, to renew the lock.
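The retry loop might look roughly like this (constants and names are illustrative, and the sketch assumes the client-side lock state has been reset so Lock() can be called again):

```go
import (
	"fmt"
	"time"

	"github.com/hashicorp/consul/api"
)

const (
	lockAttempts = 5
	lockWaitTime = 2 * time.Second
)

// reacquire retries the consul lock a few times before the caller
// gives up and destroys the session.
func reacquire(lock *api.Lock) (<-chan struct{}, error) {
	var err error
	for i := 0; i < lockAttempts; i++ {
		var leaderCh <-chan struct{}
		leaderCh, err = lock.Lock(nil)
		if err == nil && leaderCh != nil {
			// Recovered: the caller resumes watching this channel.
			return leaderCh, nil
		}
		time.Sleep(lockWaitTime)
	}
	return nil, fmt.Errorf("unable to reacquire lock: %v", err)
}
```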
We use a Context for the cancellation channels here, because that
removes the need to worry about double-closes and nil channels. It
requires an awkward adapter goroutine for now to convert the Done()
`<-chan` to a `chan` for PeriodicRenew, but makes the rest of the code
safer in the long run.
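The adapter amounts to a few lines, sketched here against the consul api of the time, where RenewPeriodic still took a bidirectional `chan struct{}` (surrounding names are illustrative):

```go
import (
	"context"

	"github.com/hashicorp/consul/api"
)

// renewSession keeps the consul session alive until ctx is cancelled;
// closing doneCh makes RenewPeriodic stop and destroy the session.
func renewSession(ctx context.Context, session *api.Session, id, ttl string) error {
	doneCh := make(chan struct{})
	go func() {
		<-ctx.Done()
		close(doneCh)
	}()
	return session.RenewPeriodic(ttl, id, nil, doneCh)
}
```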
Consul locks are based on liveness, and may be lost due to timeouts,
network issues, etc. If the client determines the lock was lost, attempt
to reacquire it immediately.
The client was also not using the `lock` config option. Disable locks if
that is not set.
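In sketch form, the monitor watching the lock channel can loop back to Lock() as soon as the channel closes (names illustrative; the Unlock call here only resets the client-side held flag after a loss):

```go
import "github.com/hashicorp/consul/api"

// monitorLock reacquires the lock whenever the leader channel closes,
// until stopCh is triggered.
func monitorLock(lock *api.Lock, stopCh <-chan struct{}) {
	for {
		leaderCh, err := lock.Lock(stopCh)
		if err != nil || leaderCh == nil {
			return
		}
		select {
		case <-leaderCh:
			// Lock lost: reset the client-side state, then loop to
			// take the lock again immediately.
			_ = lock.Unlock()
		case <-stopCh:
			return
		}
	}
}
```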
This matches the consul cli behavior, where locks are cleaned up after
use.
Return an error from re-locking the state. This isn't required by the
Locker interface, but it's an added sanity check for state operations.
What was incorrect here was returning an empty ID with no error, which would
indicate that Lock/Unlock isn't supported.
Give LockInfo a Marshal method for easy serialization, and a String
method for more readable output.
Have the state.Locker implementations use LockError when possible to
return LockInfo and an error.
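For example, a hedged sketch of how an implementation might surface the conflict (LockError and LockInfo are the state package's types; the surrounding code is illustrative):

```go
import (
	"fmt"

	"github.com/hashicorp/terraform/state"
)

// lockHeldError wraps the existing lock info in a LockError so callers
// can show who holds the lock.
func lockHeldError(existing *state.LockInfo) error {
	return &state.LockError{
		Info: existing,
		Err:  fmt.Errorf("state already locked: %s", existing.String()),
	}
}
```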
Use consul locks to implement state locking. The lock path is the state path
+ "/.lock", which matches the consul cli default for locks. LockInfo is
stored at path + "/.lockinfo".
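In outline, using the real consul api client (error handling trimmed, surrounding names illustrative):

```go
import (
	"github.com/hashicorp/consul/api"
	"github.com/hashicorp/terraform/state"
)

// acquire takes the state lock and records the lock metadata beside it.
func acquire(client *api.Client, path string, info *state.LockInfo) (<-chan struct{}, error) {
	lock, err := client.LockOpts(&api.LockOptions{
		Key: path + "/.lock", // the same key the `consul lock` CLI would use
	})
	if err != nil {
		return nil, err
	}
	leaderCh, err := lock.Lock(nil)
	if err != nil {
		return nil, err
	}
	// The metadata lives at a sibling key so force-unlock and error
	// messages can read it.
	_, err = client.KV().Put(&api.KVPair{
		Key:   path + "/.lockinfo",
		Value: info.Marshal(),
	}, nil)
	return leaderCh, err
}
```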