Looks like while we were matching errors correctly when ExpectError was
set, we weren't checking for the *absence* of an error, which should be
checked as well (no error is still not the error we are looking for).
Added a few more tests for ExpectError as well.
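A minimal sketch of the corrected check (the helper name here is invented;
the framework's real code differs in detail):

```go
import (
	"fmt"
	"regexp"
)

// checkExpectError sketches the fixed logic: when an error is expected,
// a nil error is itself a failure, not just a non-matching one.
func checkExpectError(expect *regexp.Regexp, err error) error {
	if expect == nil {
		return err // no expectation; pass any error through unchanged
	}
	if err == nil {
		// Previously this case slipped through silently.
		return fmt.Errorf("expected an error matching %q, but got none", expect)
	}
	if !expect.MatchString(err.Error()) {
		return fmt.Errorf("expected an error matching %q, got: %s", expect, err)
	}
	return nil
}
```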
Previously, having a config was mutually exclusive with running an import,
but we need to provide a config so that the provider is declared;
otherwise we can't actually complete the import in the future world where
providers are installed dynamically based on their declarations.
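So a TestStep can now set both. A hedged example, with the config,
provider map, and resource names all made up:

```go
func TestAccExampleThing_importWithConfig(t *testing.T) {
	resource.Test(t, resource.TestCase{
		Providers: testAccProviders, // hypothetical provider map from the suite
		Steps: []resource.TestStep{
			{
				// Create the resource with a config that declares its provider.
				Config: testAccExampleConfig,
			},
			{
				// The import step now carries the same config rather than
				// being forced to omit it.
				Config:            testAccExampleConfig,
				ResourceName:      "aws_example_thing.foo",
				ImportState:       true,
				ImportStateVerify: true,
			},
		},
	})
}
```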
* provider/aws: Add Sweeper setup, Sweepers for DB Option Group, Key Pair (see the sketch after this list)
* provider/google: Add sweeper for any leaked databases
* add more recursion and an LC sweeper to exercise the Dependency path
* implement a dependency example
* implement sweep-run flag to filter runs
* stub a test for TestMain
* add a test for passing multiple sweepers to -sweep-run
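Roughly, each sweeper is registered with a name and optional dependencies,
and the suite is driven from TestMain. A sketch, with the sweeper body and
the dependency invented for illustration:

```go
import (
	"testing"

	"github.com/hashicorp/terraform/helper/resource"
)

func TestMain(m *testing.M) {
	resource.TestMain(m) // wires up the -sweep and -sweep-run flags
}

func init() {
	resource.AddTestSweepers("aws_db_option_group", &resource.Sweeper{
		Name:         "aws_db_option_group",
		Dependencies: []string{"aws_db_instance"}, // swept before this one
		F: func(region string) error {
			// list and delete any leaked option groups in this region...
			return nil
		},
	})
}
```

Then something like `go test ./builtin/providers/aws -sweep=us-west-2
-sweep-run=aws_db_option_group,aws_key_pair` runs just the named sweepers.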
The terraform package doesn't know about TestProvider, so don't put the
hooks in terraform.MockResourceProvider. Instead, wrap the mock in the
test where we need to check the TestProvider functionality.
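A sketch of the wrapping approach: embed the mock in a test-local type
that also implements the TestProvider hook (assuming the hook in question
is TestReset; the field name is invented):

```go
// testProviderWrapper keeps the TestProvider hooks out of the shared mock.
type testProviderWrapper struct {
	*terraform.MockResourceProvider
	TestResetCalled bool
}

// TestReset implements the TestProvider interface on the wrapper only,
// so the test can assert it was invoked.
func (w *testProviderWrapper) TestReset() error {
	w.TestResetCalled = true
	return nil
}
```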
This commit adds a function which composes a series of TestCheckFuncs but
runs all checks before returning an error, unlike ComposeTestCheckFunc,
which stops at the first failure. This is useful when verifying the
contents of state in acceptance tests, where it is desirable to see all
the failing cases in one run for slow resources.
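A minimal sketch of such an aggregating composer, written as if inside
helper/resource so TestCheckFunc is in scope; the real implementation may
combine errors differently (e.g. with a multierror):

```go
import (
	"errors"
	"fmt"
	"strings"

	"github.com/hashicorp/terraform/terraform"
)

// ComposeAggregateTestCheckFunc runs every check and reports all
// failures at once instead of stopping at the first one.
func ComposeAggregateTestCheckFunc(fs ...TestCheckFunc) TestCheckFunc {
	return func(s *terraform.State) error {
		var errs []string
		for i, f := range fs {
			if err := f(s); err != nil {
				errs = append(errs, fmt.Sprintf("check %d/%d error: %s", i+1, len(fs), err))
			}
		}
		if len(errs) > 0 {
			return errors.New(strings.Join(errs, "\n"))
		}
		return nil
	}
}
```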
As I've been working through the resources, I'm finding that a lot are
going to need some serious work. Given we have hundreds, I think it
might be prudent to make this opt-in for now and we can revisit
automatic/opt-out at some future point.
It appears importability will likely be opt-in as well, so this will
match up with that.
I forgot to add `Computed: true` when I made the "key_name" field
optional in #1751.
This made the behavior:
* Name generated in Create and set as the ID
* Follow-up plan (without refresh) was nice and empty
* During refresh, the name gets cleared out on Read, causing a bad diff on
subsequent plans
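The fix itself is the one missing flag. A sketch of the corrected schema
entry (the surrounding attributes, including ForceNew, are assumed here):

```go
"key_name": &schema.Schema{
	Type:     schema.TypeString,
	Optional: true,
	Computed: true, // keep the generated name in state so Read doesn't diff it away
	ForceNew: true, // assumed; key names can't change in place
},
```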
We can automatically catch bugs like this if we add yet another
verification step to our resource acceptance tests: a follow-up
Refresh+Plan that we verify comes back empty.
I left the non-refresh Plan verification in, because it's important that
_both_ of these are empty after an Apply.
Each acceptance test step runs a Refresh, Plan, Apply for a given config.
This adds a follow-up Plan after the Apply and fails the test if it does
not come back empty. This will catch issues with perpetual, unresolvable
diffs that crop up here and there.
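In rough terms, the step harness now does something like this after a
successful Apply (sketched against the old terraform.Context API; the
framework's actual code differs):

```go
import (
	"fmt"

	"github.com/hashicorp/terraform/terraform"
)

// verifyEmptyPlan plans again after Apply and requires an empty diff.
func verifyEmptyPlan(ctx *terraform.Context) error {
	plan, err := ctx.Plan()
	if err != nil {
		return fmt.Errorf("error on follow-up plan: %s", err)
	}
	if !plan.Diff.Empty() {
		return fmt.Errorf(
			"after applying this test step, the plan was not empty:\n\n%s", plan.Diff)
	}
	return nil
}
```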
This is going to cause a lot of our existing acceptance tests to fail -
too many to roll into a single PR. I think the best plan is to land this
in master and then fix the failures (each of which should be catching a
legitimate provider bug) one by one until we get the provider suites
back to green.