In order to pass the Terraform Associate exam I followed the official study guide as a baseline for my studies, as I usually do for certification exams, and it has always worked out well for me.
The HashiCorp Terraform Associate is considered a foundational-level certification, so it expects essential knowledge and skills on the concepts rather than necessarily lots of hands-on practice. If you already work with Terraform you will probably be able to go over the topics very quickly, but it is important to know some details and also some commands that might not be very common for you. You must also know the features that exist in the Terraform Enterprise packages and in Terraform Cloud.
Important notes about the test: it is 57 questions long and is a proctored exam. You get 60 minutes to complete it, and you can mark questions for later review. The certification is valid for two years. Since Terraform is quickly and constantly evolving, it makes sense that it has a shorter validity than most certifications, which are usually valid for three years.
My personal tips for this and any other certification exam: do lots of practice tests, go over every subject at least once even if you know it already from previous experience, make notes and don’t just study for the test, apply the knowledge in labs or personal projects. I would say that practicing is at least 2x more efficient in making the knowledge permanent than reading or watching videos.
The official exam information from HashiCorp can be found here: HashiCorp Certified: Terraform Associate (002)
Objective | Description |
---|---|
1 | Understand Infrastructure as Code (IaC) concepts |
2 | Understand Terraform’s purpose (vs other IaC) |
3 | Understand Terraform basics |
4 | Use the Terraform CLI (outside of core workflow) |
5 | Interact with Terraform modules |
6 | Navigate Terraform workflow |
7 | Implement and maintain state |
8 | Read, generate, and modify configuration |
9 | Understand Terraform Cloud and Enterprise capabilities |
What I’m gonna do here is go over each of the objectives and cover (almost) everything you need to know in order to pass the exam. Most of the content is provided in the official study guide and the exam review, so I’ll be using them as a reference. It’s probably going to be a long read, but I’d rather have it all in one place than split the content across several posts.
1 Understand Infrastructure as Code (IaC) concepts
1a Explain what IaC is
Infrastructure as code (IaC) tools allow you to manage infrastructure with configuration files rather than through a graphical user interface. IaC allows you to build, change, and manage your infrastructure in a safe, consistent, and repeatable way by defining resource configurations that you can version, reuse, and share.
1b Describe advantages of IaC patterns
IaC makes it easy to provision and apply infrastructure configurations by standardizing the workflow. This is accomplished by using a common syntax across a number of different infrastructure providers (e.g. AWS, GCP). Some key advantages of using IaC patterns are:
- Reusability: you can write or use modules to reuse code that is common between different projects;
- Consistency: managing infrastructure manually is prone to errors;
- Idempotency: no matter how many times you run your IaC, and no matter what your starting state is, you will end up with the same end state. This simplifies the provisioning of infrastructure and reduces the chances of inconsistent results;
- Versioning and Source Control: by using a VCS like Git you can get visibility and security on your infrastructure.
2 Understand Terraform’s purpose (vs other IaC)
2a Explain multi-cloud and provider-agnostic benefits
Terraform is cloud-agnostic and allows a single configuration to be used to manage multiple providers, and to even handle cross-cloud dependencies. With Terraform, users can manage a heterogeneous environment with the same workflow by creating a configuration file to fit the needs of that platform or project. Terraform plugins called providers let Terraform interact with cloud platforms and other services via their application programming interfaces (APIs). HashiCorp and the Terraform community have written over 1,000 providers to manage resources on Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), Kubernetes, Helm, GitHub, Splunk, and DataDog, just to name a few.
2b Explain the benefits of state
Terraform keeps track of your real infrastructure in a state file, which acts as a source of truth for your environment. Terraform uses the state file to determine the changes to make to your infrastructure so that it will match your configuration.
3 Understand Terraform basics
3a Handle Terraform and provider installation and versioning
Terraform can be installed using the user’s terminal:
https://learn.hashicorp.com/tutorials/terraform/install-cli
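On macOS with Homebrew, for instance (a sketch; the tutorial linked above covers other operating systems and package managers):

```shell
# Add the official HashiCorp tap and install the Terraform CLI
brew tap hashicorp/tap
brew install hashicorp/tap/terraform

# Verify the installation
terraform -version
```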
Alternatively, Terraform can be manually installed by downloading the binary to your computer.
3b Describe plug-in based architecture
Terraform uses a plugin-based architecture to support hundreds of infrastructure and service providers. Initializing a configuration directory downloads and installs providers used in the configuration. Terraform plugins are compiled for a specific operating system and architecture, and any plugins in the root of the user’s plugins directory must be compiled for the current system. A provider is a plugin that Terraform uses to translate the API interactions with that platform or service.
Terraform must initialize a provider before it can be used. The initialization process downloads and installs the provider’s plugin so that it can later be executed. Terraform knows which provider(s) to download based on what is declared in the configuration files. For example:
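(The block below is a sketch; the region is just an illustrative value.)

```hcl
# Declaring this block is enough for `terraform init` to know
# it needs to download and install the AWS provider plugin.
provider "aws" {
  region = "us-east-1"
}
```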
The provider block can contain the following meta-arguments:
- `version` - constrains which provider versions are allowed (note: HashiCorp recommends using provider requirements instead);
- `alias` - enables using the same provider with different configurations (e.g. provisioning resources in multiple AWS regions).
By default, a plugin is downloaded into a subdirectory of the working directory so that each working directory is self-contained. As a consequence, if there are multiple configurations that use the same provider then a separate copy of its plugin will be downloaded for each configuration. To manually install a provider, move it to:
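(The layout below is a sketch; the hostname, version, and platform are placeholders.)

```
# Linux and macOS
~/.terraform.d/plugins/<HOSTNAME>/<NAMESPACE>/<TYPE>/<VERSION>/<OS_ARCH>/
# e.g. ~/.terraform.d/plugins/registry.terraform.io/hashicorp/aws/3.30.0/linux_amd64/

# Windows
%APPDATA%\terraform.d\plugins\
```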
Given that provider plugins can be quite large, users can optionally use a local directory as a shared plugin cache. This is enabled by using the `plugin_cache_dir` setting in the CLI configuration file.
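For example, in the CLI configuration file (a sketch assuming `~/.terraformrc` on Linux/macOS; on Windows it is `terraform.rc`):

```hcl
# ~/.terraformrc
# Note: Terraform does not create this directory; it must already exist.
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
```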
This configuration ensures each plugin binary is downloaded only once.
3c Demonstrate using multiple providers
To instantiate the same provider for multiple configurations, use the `alias` argument. For example, the AWS provider requires specifying the region argument. The following code block demonstrates how `alias` can be used to provision resources across multiple regions using the same configuration files.
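(A sketch; regions and CIDR blocks are illustrative values.)

```hcl
# Default AWS provider configuration
provider "aws" {
  region = "us-east-1"
}

# A second configuration of the same provider, distinguished by its alias
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

# Uses the default (us-east-1) configuration
resource "aws_vpc" "east" {
  cidr_block = "10.0.0.0/16"
}

# Selects the aliased configuration as <PROVIDER>.<ALIAS>
resource "aws_vpc" "west" {
  provider   = aws.west
  cidr_block = "10.1.0.0/16"
}
```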
3d Describe how Terraform finds and fetches providers
Providers are released on a separate rhythm from Terraform itself, and so each provider has its own version number. For production use, consider constraining the acceptable provider versions in the configuration to ensure that new versions with breaking changes will not be automatically installed by `terraform init` in the future.
Any non-certified or third-party providers must be manually installed, since `terraform init` cannot automatically download them.
The `required_version` setting can be used to constrain which versions of Terraform can be used with the configuration.
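For instance (the version numbers are examples only):

```hcl
terraform {
  # Accept any Terraform version from 0.14.3 up to, but not including, 0.15
  required_version = ">= 0.14.3, < 0.15"
}
```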
The value for `required_version` is a string containing a comma-delimited list of constraints. Each constraint is an operator followed by a version number. The following operators are allowed:
Operator | Usage | Example |
---|---|---|
= (or no operator) | Use exact version | "= 0.14.3" Must use v0.14.3 |
!= | Version not equal | "!= 0.14.3" Must not use v0.14.3 |
> or >= or < or <= | Version comparison | ">= 0.14.3" Must use a version greater than or equal to v0.14.3 |
~> | Pessimistic constraint operator that constrains both the oldest and newest versions allowed | "~> 0.14" Must use a version greater than or equal to v0.14 but less than v0.15 (which includes v0.14.3) |
Similarly, a provider version requirement can be specified. The following is an example limiting the version of the AWS provider:
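(A sketch; the version constraint is illustrative.)

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # Allow any 3.x release of the AWS provider, but not 4.0 or later
      version = "~> 3.0"
    }
  }
}
```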
It is recommended to use these operators in production to ensure the correct version is being used and avoid accidental upgrades that might have breaking changes.
3e Explain when to use and not use provisioners and when to use local-exec or remote-exec
Provisioners can be used to model specific actions on the local machine or on a remote machine. For example, a provisioner can enable uploading files, running shell scripts, or installing or triggering other software (e.g. configuration management) to conduct initial setup on an instance. Provisioners are defined within a resource block:
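(A sketch; the AMI ID and instance type are placeholder values.)

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI ID
  instance_type = "t2.micro"

  # The provisioner is nested inside the resource it belongs to
  provisioner "local-exec" {
    command = "echo 'instance created'"
  }
}
```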
Multiple provisioner blocks can be used to define multiple provisioning steps.
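For instance (the commands are illustrative; provisioners run in the order they are defined):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI ID
  instance_type = "t2.micro"

  provisioner "local-exec" {
    command = "echo 'first step'"
  }

  provisioner "local-exec" {
    command = "echo 'second step'"
  }
}
```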
Types of Provisioners
This section will cover the various types of generic provisioners. There are also vendor specific provisioners for configuration management tools (e.g. Salt, Puppet).
1. File
The file provisioner is used to copy files or directories from the machine executing Terraform to the newly created resource.
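A sketch (the paths, user, and key file are placeholder values):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI ID
  instance_type = "t2.micro"

  # Copy a local configuration file to the new instance
  provisioner "file" {
    source      = "conf/myapp.conf"
    destination = "/etc/myapp.conf"
  }

  # Connection details used by the provisioner to reach the instance
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
    host        = self.public_ip
  }
}
```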
The file provisioner supports both ssh and winrm type connections.
2. local-exec
The `local-exec` provisioner runs by invoking a process local to the user’s machine running Terraform. This is used to do something on the machine running Terraform, not on the resource provisioned. For example, a user may want to create an SSH key on the local machine.
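A sketch of that scenario (the key name and AMI are placeholders):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI ID
  instance_type = "t2.micro"

  # Runs on the machine executing Terraform, not on the new instance
  provisioner "local-exec" {
    command = "ssh-keygen -t rsa -b 4096 -N '' -f web_key"
  }
}
```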
3. remote-exec
Comparatively, the `remote-exec` provisioner invokes a script or process on a remote resource after it is created. For example, this may be used to bootstrap a newly provisioned cluster or to run a script.
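A sketch (the commands, user, and key file are placeholder values):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI ID
  instance_type = "t2.micro"

  # Runs the listed commands on the newly created instance
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
    ]
  }

  # remote-exec needs a connection to reach the instance
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
    host        = self.public_ip
  }
}
```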
Both SSH and winrm connections are supported.
By default, provisioners are executed when the defined resource is created, and not during updates or other parts of the lifecycle. They are intended to be used for bootstrapping a system. If a provisioner fails at creation time, the resource is marked as tainted. Terraform will plan to destroy and recreate the tainted resource at the next `terraform apply` command.
By default, when a provisioner fails, it will also cause the `terraform apply` command to fail. The `on_failure` parameter can be used to specify different behavior.
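For example (the script path and connection details are placeholders):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI ID
  instance_type = "t2.micro"

  provisioner "remote-exec" {
    inline = ["/opt/bootstrap.sh"]

    # continue: ignore the error; fail (the default): stop and taint the resource
    on_failure = continue
  }

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
    host        = self.public_ip # `self` is the provisioner's parent resource
  }
}
```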
Note: Expressions in provisioner blocks cannot refer to the parent resource by name. Use the `self` object to represent the provisioner’s parent resource (see previous example).
Additionally, provisioners can also be configured to run when the defined resource is destroyed. This is configured by specifying `when = destroy` within the provisioner block.
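A sketch of a destroy-time provisioner (the command is illustrative):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI ID
  instance_type = "t2.micro"

  # Runs before the resource is destroyed, not at creation
  provisioner "local-exec" {
    when    = destroy
    command = "echo 'Destroying instance ${self.id}'"
  }
}
```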
By default, a provisioner only runs at creation. To run a provisioner at deletion, it must be explicitly defined.