Previously init would crash if given these options:
-backend=false -get-plugins=true
This is because the state is used as a source of provider dependency
information, and we need to instantiate the backend to get the state.
To avoid the crash, we now use the following adjusted behavior:
- if -backend=true, we behave as before
- if -backend=false, we instead try to instantiate the backend the same
way any other command would, without modifying its configuration
- if we're able to instantiate the backend, we use it to fetch state
for dependency resolution purposes
- if the backend is not instantiable then we assume it's not yet
configured and proceed with a nil state, which may cause us to see an
incomplete picture of the dependencies but still allows the install
to succeed. Subsequently running "terraform plan" will not work until
the backend is (re-)initialized, so the incomplete picture of required
plugins is safe.
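The decision flow above can be sketched roughly like this; the stub types and the `instantiateBackend`/`fetchState` helpers are illustrative stand-ins, not the real command package API:

```go
package initbackend

import "errors"

// Stub types standing in for the real backend and state types; this is a
// sketch of the decision flow described above, not the actual command code.
type Backend struct{}
type State struct{}

func configureBackend() *Backend { return &Backend{} }

func instantiateBackend() (*Backend, error) {
	// Fails when the backend hasn't been initialized/configured yet.
	return nil, errors.New("backend not yet configured")
}

func fetchState(b *Backend) *State { return &State{} }

// stateForPluginInstall returns whatever state we can get for provider
// dependency resolution, without crashing when -backend=false.
func stateForPluginInstall(backendFlag bool) *State {
	if backendFlag {
		// -backend=true: configure the backend as before.
		return fetchState(configureBackend())
	}
	// -backend=false: try to instantiate the backend the same way any other
	// command would, without modifying its configuration.
	b, err := instantiateBackend()
	if err != nil {
		// Not instantiable: assume it isn't configured yet and proceed with
		// a nil state; the picture of required plugins may be incomplete,
		// but the install can still succeed.
		return nil
	}
	return fetchState(b)
}
```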
This takes care of a few dangling cases where we were still stringifying
empty version constraints, which created confusing error messages because
an empty constraint set renders as the empty string.
For the "no suitable versions available" message, we fall back on the
"provider not found" message if no versions were found even though it's
unconstrained. This should only happen in an edge case where the
provider's index page exists on the releases server but no versions are
yet present.
For the message about plugin protocol versions, this again is an edge
case: with no constraints it should happen only if we release an
incompatible Terraform version without also releasing a compatible new
version of the plugin. In this case we just show the constraint as
"(any version)" to make sure we always show _something_.
Since the transformer that changed stateless refresh nodes to
NodePlannableResourceInstance is no longer used, this test needed to be
adjusted so that it expects the correct output.
Changed the language of this field to indicate that this diff is not a
"real" diff, in that it should not be acted on, as opposed to a "quiet"
mode, which would simply mean acting silently.
Previously we only did this when _upgrading_, but that's unnecessarily
specific and confusing since e.g. plugins can get upgraded implicitly by
constraint changes, which would not then trigger the purge process.
Instead, we'll assume that the user can easily re-download any plugins that
were purged here, or, if they need stronger guarantees, that they will
manually manage a plugin directory and disable the auto-install behavior
using `-plugin-dir`.
Now we are able to recognize and handle a few special error situations
from plugin installation with more verbose error messages that give the
user better feedback on how to proceed.
Some errors from Get are essentially user error, so we want to be able to
recognize them and give the user good feedback on how to proceed.
Although sentinel values are not an ideal solution to this, it's something
reasonably simple we can do to get this done without lots of refactoring.
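A sketch of what the sentinel-value approach can look like; the error names and messages here are illustrative rather than a faithful copy of the getter's errors:

```go
package getplugins

import (
	"errors"
	"fmt"
)

// Sentinel errors a Get implementation can return so that the caller can
// distinguish user-correctable situations from unexpected failures.
var (
	errNoSuchProvider      = errors.New("no provider exists with the given name")
	errNoSuitableVersion   = errors.New("no suitable version is available")
	errVersionIncompatible = errors.New("no available version is compatible with this version of Terraform")
)

// explainGetError maps a sentinel to more verbose guidance for the user.
func explainGetError(err error, provider string) string {
	switch err {
	case errNoSuchProvider:
		return fmt.Sprintf("Provider %q not found; check the provider name in the configuration.", provider)
	case errNoSuitableVersion:
		return fmt.Sprintf("No versions of %q match the given constraints; try relaxing the version constraint.", provider)
	case errVersionIncompatible:
		return fmt.Sprintf("No versions of %q work with this release of Terraform; upgrade the provider or Terraform itself.", provider)
	default:
		return err.Error()
	}
}
```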
Fetch the SHA256SUMS file and verify its signature before downloading
any plugins.
This embeds the HashiCorp public key in the binary. If the public key is
replaced, new releases will need to be cut anyway. A
--verify-plugin=false flag will be added to skip signature verification
in these cases.
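A rough sketch of the verification step, assuming a detached (binary) signature and using golang.org/x/crypto/openpgp; the embedded key below is a placeholder, and the exact function names in the installer may differ:

```go
package releaseverify

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"

	"golang.org/x/crypto/openpgp"
)

// hashicorpPublicKey would hold the ASCII-armored public key embedded in
// the binary; the value here is a placeholder.
const hashicorpPublicKey = `-----BEGIN PGP PUBLIC KEY BLOCK-----
...
-----END PGP PUBLIC KEY BLOCK-----`

// verifySums checks the detached signature over the SHA256SUMS document
// before any plugin is downloaded.
func verifySums(sums, signature []byte) error {
	keyring, err := openpgp.ReadArmoredKeyRing(strings.NewReader(hashicorpPublicKey))
	if err != nil {
		return fmt.Errorf("error reading embedded public key: %s", err)
	}
	if _, err := openpgp.CheckDetachedSignature(keyring, bytes.NewReader(sums), bytes.NewReader(signature)); err != nil {
		return fmt.Errorf("signature verification failed: %s", err)
	}
	return nil
}

// checkPlugin compares a downloaded plugin archive against the expected
// SHA256 sum taken from the verified SHA256SUMS document.
func checkPlugin(archive []byte, expectedHex string) error {
	sum := sha256.Sum256(archive)
	if hex.EncodeToString(sum[:]) != expectedHex {
		return fmt.Errorf("checksum mismatch")
	}
	return nil
}
```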
This fixes a bug with the new refresh graph behaviour where a resource
was being counted twice in the UI as part of being scaled out:
* We are no longer transforming refresh nodes without state to
plannable resources (the transformer will be removed shortly)
* A Quiet flag has been added to EvalDiff and InstanceDiff - this
allows a diff to be flagged as one that should not be treated as a real
diff for purposes of planning
* When there is no state for a refresh node now, a new path is taken
that is similar to plan, but flags Quiet, and does nothing with the
diff afterwards.
Tests pending - light testing has confirmed this should fix the
double-count issue, but we should have some tests that actually cover
the bug.
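As a toy illustration of the Quiet idea (the real EvalDiff/InstanceDiff types carry much more than this): a quiet diff exists only so the refresh walk can proceed, and is ignored when counting real changes.

```go
package quietdiff

// instanceDiff is a stand-in for the real InstanceDiff; only the Quiet idea
// is shown here.
type instanceDiff struct {
	Attributes map[string]string

	// Quiet marks a diff that was produced only so the walk could proceed;
	// it must not be treated as a real diff for purposes of planning.
	Quiet bool
}

// countRealChanges skips quiet and empty diffs, so a stateless refresh node
// for a scaled-out resource isn't counted twice in the UI.
func countRealChanges(diffs []*instanceDiff) int {
	n := 0
	for _, d := range diffs {
		if d == nil || d.Quiet || len(d.Attributes) == 0 {
			continue
		}
		n++
	}
	return n
}
```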
This guide covers assorted best practices and caveats for running
Terraform within orchestration tools and other automation. It provides
general examples and guidance, with the intent that this advice can be
adapted by the reader to a concrete implementation within a selected
orchestration tool.
This guide is based both on our experience with Terraform Enterprise and
on in-house solutions we are aware of in certain organizations.
Previously the behavior for -target when given a module address was to
target only resources directly within that module, ignoring any resources
defined in child modules.
This behavior turned out to be counter-intuitive, since users expected
the -target address to be interpreted hierarchically.
We'll now use the new "Contains" function for addresses, which provides
a hierarchical "containment" concept that is more consistent with user
expectations. In particular, it allows module.foo to match
module.foo.module.bar.aws_instance.baz, where before that would not have
been true.
Since Contains isn't commutative (unlike Equals) this requires some
special handling for targeting specific indices. When given an argument
like -target=aws_instance.foo[0], the initial graph construction (for
both plan and refresh) is for the resource nodes from configuration, which
have not yet been expanded into separate indexed instances. Thus we need
to do the first pass of TargetsTransformer in a mode where indices are
ignored, with the work then completed by the DynamicExpand method, which
re-applies the TargetsTransformer in index-sensitive mode.
This is a breaking change for anyone depending on the previous behavior
of -target, since it will now select more resources than before. There is
no way provided to obtain the previous behavior. Eventually we may support
negative targeting, which could then combine with positive targets to
regain the previous behavior as an explicit choice.
This is similar in purpose to Equals but it takes a hierarchical approach
where modules contain their child modules, resources are contained by
their modules, and indexed resource instances are contained by their
resource names.
Unlike "Equals", Contains is intended to be transitive, so if A contains B
and B contains C, then C necessarily contains A. It is also directional:
if A contains B then B does not also contain A unless A and B are
identical. This results in more intuitive behavior for use-cases where
the goal is to select a portion of the address space for an operation.
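A minimal sketch of the containment rule, using a simplified model where an address is just a list of path segments rather than the real structured address type:

```go
package addrscontain

// contains reports whether container contains candidate in the hierarchical
// sense described above: every segment of the container must match the
// corresponding leading segment of the candidate. With this rule,
//
//	contains(module.foo, module.foo.module.bar.aws_instance.baz) == true
//	contains(module.foo.module.bar.aws_instance.baz, module.foo) == false
//	contains(a, a)                                                == true
//
// and, because prefix matching is transitive, if A contains B and B
// contains C then A contains C.
func contains(container, candidate []string) bool {
	if len(container) > len(candidate) {
		return false
	}
	for i, seg := range container {
		if candidate[i] != seg {
			return false
		}
	}
	return true
}
```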
Since the command package also needs to know about the specific OS_ARCH
directories, remove the logic from the discovery package.
This doesn't completely remove the knowledge of the path from discovery,
in order to maintain the current behavior of skipping legacy plugin
names within a new-style path.
* initial commit - 101-vm-from-user-image
* changed branch name
* not deploying - storage problems
* provisions vm but image not properly prepared
* storage not correct
* provisions properly
* changed main.tf to azuredeploy.tf
* added tfvars and info for README
* tfvars ignored and corrected file ext
* added CI config; added sane defaults for variables; updated deployment script, added mac specific deployment for local testing
* deploy.sh to be executable
* executable deploy files
* added CI files; changed vars
* prep for PR
* removal of old folder
* prep for PR
* wrong args for travis
* more PR prep
* updated README
* commented out variables in terraform.tfvars
* Topic 101 vm from user image (#2)
* initial commit - 101-vm-from-user-image
* added tfvars and info for README
* added CI config; added sane defaults for variables; updated deployment script, added mac specific deployment for local testing
* prep for PR
* added new template
* oops, left off master
* prep for PR
* correct repository for destination
* renamed scripts to be more intuitive; added check for docker
* merge vm simple; vm from image
* initial commit
* deploys locally
* updated deploy
* consolidated deploy and after_deploy into a single script; simplified ci process; added os_profile_linux_config
* added terraform show
* changed to allow http & https (like ARM tmplt)
* changed host_name & host_name variable desc
* added az cli check
* on this branch, only build test_dir; master will aggregate all the examples
* merge master
* added new constructs/naming for deploy scripts, etc.
* suppress az login output
* suppress az login output
* forgot about line breaks
* breaking build as an example
* fixing broken build example
* merge of CI config
* fixed grammar in readme
* prep for PR
* took out armviz button and minor README changes
* changed host_name
* fixed merge conflicts
* changed host_name variable
* updating Hashicorp's changes to merged simple linux branch
* updating files to merge w/master and prep for Hashicorp pr
* Revert "updating files to merge w/master and prep for Hashicorp pr"
This reverts commit b850cd5d2a858eff073fc5a1097a6813d0f8b362.
* Revert "updating Hashicorp's changes to merged simple linux branch"
This reverts commit dbaf8d14a9cdfcef0281919671357f6171ebd4e6.
* removing vm from user image example from this branch
* removed old branch
* azure-2-vms-loadbalancer-lbrules (#13)
* initial commit
* need to change lb_rule & nic
* deploys locally
* updated README
* updated travis and deploy scripts for Hari's repo
* renamed deploy script
* clean up
* prep for PR
* updated readme
* fixing conflict in .travis.yml
* initial commit; in progress
* in progress
* in progress; encryption fails
* in progress
* deploys successfully locally
* clean up; deploy typo fixed
* merging hashi master into this branch
* troubleshooting deploy
* added missing vars to deploy script
* updated README, outputs, and added graph
* simplified outputs
* provisions locally
* cleaned up vars
* fixed chart on README
* prepping for pr
* fixed merge conflict
* initial commit
* provisions locally; but azuremysql.sh script fails
* commented out provider
* commenting out provider vars
* tf fmt / uncommented Ext - will fail
* testing other examples
* changed os version for script compatibility; changed command
* removed ssh from output (no nsg)
* changed travis to test only this topic's dir
* added nsg
* testing encrypt-running-linux
* fixed IPs and validation
* cleanup merge conflicts
* updated validation cmd; reverted non-topic ci changes
* in progress; new branch for updating CI's permanent resources
* updated travis.yml branch
* pinned version 0.2.10 azuresdk/azure-cli-python
* testing vm-specialized-vhd
* added subnet var
* testing 2 lb template
* testing encrypt-running-linux
* changed disk size
* testing all examples; new var names
* testing vm-from-user-image
* testing vm-specialized-vhd
* testing vm-custom-image WindowsImage
* test all examples
* changed storage account for vm-custom-image
* changed existing_subnet_id variable
* correcting env var for disk name
* testing all examples
* testing all examples; commenting out last two unmerged examples
* added graph to cdn readme
* merged hashi master into this branch
* testing all examples
* delete os disk
* cleanup fixes for deleting CI resources
* manually deleting resources w/azure cli
* reverted to hashicorp's .travis.yml
The -plugin-dir option lets the user specify custom search paths for
plugins. This overrides all other plugin search paths, and prevents the
auto-installation of plugins.
We also make sure that the availability of plugins is always checked
during init, even if -get-plugins=false or -plugin-dir is set.
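Roughly, the effect on plugin discovery could be expressed like this; the function and parameter names are illustrative, not the actual command package code:

```go
package plugindirs

// pluginDirs decides where plugins are searched for. When the user passes
// one or more -plugin-dir options, only those directories are used and
// auto-installation is disabled; otherwise the normal search paths apply
// and missing plugins may be auto-installed. Either way, init still checks
// afterwards that all required plugins are actually available.
func pluginDirs(pluginDirFlags, defaultDirs []string) (dirs []string, autoInstall bool) {
	if len(pluginDirFlags) > 0 {
		return pluginDirFlags, false
	}
	return defaultDirs, true
}
```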
It turns out that `d.GetOk` also returns `false` when the user _did_ actually supply a value for it in the config, but the value itself needs to be evaluated before it can be used.
So instead of passing a `ResourceData` we now pass a `ResourceConfig`
which makes much more sense for doing the validation anyway.
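To illustrate the distinction (all types here are simplified stand-ins): with the data-style GetOk, a value that is set but still unevaluated looks the same as one that was never set, whereas a config-style object can report that the key is present but computed.

```go
package validatecfg

import "fmt"

// resourceConfig is a simplified stand-in for terraform.ResourceConfig: it
// can tell apart "not set at all" from "set in the config but not yet
// evaluated", which d.GetOk cannot.
type resourceConfig struct {
	raw      map[string]interface{}
	computed map[string]bool
}

func (c *resourceConfig) IsSet(k string) bool { _, ok := c.raw[k]; return ok }

func (c *resourceConfig) IsComputed(k string) bool { return c.computed[k] }

// validateRequired treats a computed value as supplied: its final value is
// not known yet, but the user did provide it, so validation must not fail.
func validateRequired(c *resourceConfig, key string) []error {
	if c.IsSet(key) || c.IsComputed(key) {
		return nil
	}
	return []error{fmt.Errorf("%q must be set", key)}
}
```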
* initial commit - 101-vm-from-user-image
* changed branch name
* not deploying - storage problems
* provisions vm but image not properly prepared
* storage not correct
* provisions properly
* changed main.tf to azuredeploy.tf
* added tfvars and info for README
* tfvars ignored and corrected file ext
* added CI config; added sane defaults for variables; updated deployment script, added mac specific deployment for local testing
* deploy.sh to be executable
* executable deploy files
* added CI files; changed vars
* prep for PR
* removal of old folder
* prep for PR
* wrong args for travis
* more PR prep
* updated README
* commented out variables in terraform.tfvars
* Topic 101 vm from user image (#2)
* initial commit - 101-vm-from-user-image
* added tfvars and info for README
* added CI config; added sane defaults for variables; updated deployment script, added mac specific deployment for local testing
* prep for PR
* added new template
* oops, left off master
* prep for PR
* correct repository for destination
* renamed scripts to be more intuitive; added check for docker
* merge vm simple; vm from image
* initial commit
* deploys locally
* updated deploy
* changed to allow http & https (like ARM tmplt)
* changed host_name & host_name variable desc
* merge master
* added new constructs/naming for deploy scripts, etc.
* suppress az login output
* merge of CI config
* prep for PR
* took out armviz button and minor README changes
* changed host_name
* fixed merge conflicts
* changed host_name variable
* updating Hashicorp's changes to merged simple linux branch
* updating files to merge w/master and prep for Hashicorp pr
* Revert "updating files to merge w/master and prep for Hashicorp pr"
This reverts commit b850cd5d2a858eff073fc5a1097a6813d0f8b362.
* Revert "updating Hashicorp's changes to merged simple linux branch"
This reverts commit dbaf8d14a9cdfcef0281919671357f6171ebd4e6.
* work in progress; waiting on support for lb inbound nat & autoscale settings
* changing .travis.yml for this branch
* updated deploy validation; readme; travis.yml
* in progress; lb inbound nat pool id argument added
* deploys vmss, not autoscale (no resource)
* merging hashicorp master into this branch
* chmod for deploy scripts
* cleaned up main.tf
* ran tf fmt
* fixed typo in travis.yml
* pinning azuresdk/azure-cli-python version
* typo
* adding comments
* provisions without autoscale
* fixed clean up to destroy rg
* renamed example directory
* reverted to Hashicorp's travis.yml
* merge conflict - return line
* merge conflict - white space
* updated README
* initial commit - 101-vm-from-user-image
* changed branch name
* not deploying - storage problems
* provisions vm but image not properly prepared
* storage not correct
* provisions properly
* changed main.tf to azuredeploy.tf
* added tfvars and info for README
* tfvars ignored and corrected file ext
* added CI config; added sane defaults for variables; updated deployment script, added mac specific deployment for local testing
* deploy.sh to be executable
* executable deploy files
* added CI files; changed vars
* prep for PR
* removal of old folder
* prep for PR
* wrong args for travis
* more PR prep
* updated README
* commented out variables in terraform.tfvars
* Topic 101 vm from user image (#2)
* initial commit - 101-vm-from-user-image
* added tfvars and info for README
* added CI config; added sane defaults for variables; updated deployment script, added mac specific deployment for local testing
* prep for PR
* added new template
* oops, left off master
* prep for PR
* correct repository for destination
* renamed scripts to be more intuitive; added check for docker
* merge vm simple; vm from image
* initial commit
* deploys locally
* updated deploy
* consolidated deploy and after_deploy into a single script; simplified ci process; added os_profile_linux_config
* added terraform show
* changed to allow http & https (like ARM tmplt)
* changed host_name & host_name variable desc
* added az cli check
* on this branch, only build test_dir; master will aggregate all the examples
* merge master
* added new constructs/naming for deploy scripts, etc.
* suppress az login output
* suppress az login output
* forgot about line breaks
* breaking build as an example
* fixing broken build example
* merge of CI config
* fixed grammar in readme
* prep for PR
* took out armviz button and minor README changes
* changed host_name
* fixed merge conflicts
* changed host_name variable
* updating Hashicorp's changes to merged simple linux branch
* updating files to merge w/master and prep for Hashicorp pr
* Revert "updating files to merge w/master and prep for Hashicorp pr"
This reverts commit b850cd5d2a858eff073fc5a1097a6813d0f8b362.
* Revert "updating Hashicorp's changes to merged simple linux branch"
This reverts commit dbaf8d14a9cdfcef0281919671357f6171ebd4e6.
* removing vm from user image example from this branch
* removed old branch
* azure-2-vms-loadbalancer-lbrules (#13)
* initial commit
* need to change lb_rule & nic
* deploys locally
* updated README
* updated travis and deploy scripts for Hari's repo
* renamed deploy script
* clean up
* prep for PR
* updated readme
* fixing conflict in .travis.yml
* add CI build tag
* initial commit; in progress
* in progress; merged Hashicorp master into this branch
* in progress
* in progress; created nsg
* added vars to deploy; added vnet
* chmod on deploy
* edited vars
* added var in travis
* added var
* added var to deploy
* added storage accounts
* fixed storage typos
* removed storage tags
* added PIPs
* changed dns name vars
* corrected PIP naming convention
* added availability sets
* added master-lb & rules
* added infra lb & rules
* added nics
* added VMs, ready for VM extensions, can modularize in the future
* added vm exts.; nsg is possibly broken; can't ssh
* in progress
* master ext succeeds
* in progress, infra and nodes exts not succeeding
* infra and node extensions fail
* provisions with extensions
* disabled password auth; ssh config added
* changed ssh key vars
* adding ssh var to deploy
* commenting out validation
* in progress; building openshift ext
* troubleshooting openshift deploy script
* changed vm names; added container
* increased os disk size
* in progress; troubleshooting deploy openshift script
* Updated the readme
* updated deployment scripts; cleaned up variables, use remote-exec
* more variable cleanup
* more cleanup
* simplified password; got rid of a needless comment
* merge conflicts resolved
The timestamp prefix added in #8249 was removed in #10152 to ensure that
returned IDs really are properly ordered. However, this meant that IDs were no
longer ordered over multiple invocations of terraform, which was the main
motivation for adding the timestamp in the first place. This commit does a
hybrid: timestamp-plus-incrementing-counter instead of just incrementing counter
or timestamp-plus-random.
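A sketch of the hybrid scheme (the format string and prefix handling are illustrative; see helper/resource for the real implementation):

```go
package uniqueid

import (
	"fmt"
	"sync"
	"time"
)

var (
	idMutex   sync.Mutex
	idCounter uint32
)

// uniqueID combines a timestamp prefix with an incrementing counter: the
// timestamp keeps IDs ordered across separate runs of terraform, while the
// counter keeps IDs within a single run strictly ordered even when two
// calls land on the same clock reading.
func uniqueID() string {
	idMutex.Lock()
	defer idMutex.Unlock()
	idCounter++
	return fmt.Sprintf("%s-%08x", time.Now().UTC().Format("20060102150405"), idCounter)
}
```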