Can Terraform watch a directory for changes?

I want to monitor a directory of files and, if any of them changes, re-upload the directory and run some other tasks. My previous solution involved monitoring the individual files, but this is error-prone because files can be forgotten:

resource "null_resource" "deploy_files" {    
  triggers = {
    file1 = "${sha1(file("my-dir/file1"))}"
    file2 = "${sha1(file("my-dir/file2"))}"
    file3 = "${sha1(file("my-dir/file3"))}"
    # have I forgotten one?
  }

  # Copy files then run a remote script.
  provisioner "file" { ... }
  provisioner "remote-exec" { ... }
}

My next solution is to take a hash of the directory structure in one resource, and use this hash as a trigger in the second:

resource "null_resource" "watch_dir" {
  triggers = {
    always = "${uuid()}"
  }

  provisioner "local-exec" {
    command = "find my-dir -type f -print0 | sort -z | xargs -0 sha1sum | sha1sum > mydir-checksum"
  }
}

resource "null_resource" "deploy_files" {    
  triggers = {
    file1 = "${sha1(file("mydir-checksum"))}"
  }

  # Copy files then run a remote script.
  provisioner "file" { ... }
  provisioner "remote-exec" { ... }
}

This works okay, except that changes to mydir-checksum are only picked up on the next run: the checksum file is regenerated by the local-exec provisioner during apply, after the deploy_files trigger has already been evaluated against the old file. So I need to apply twice, which isn't great. It's a bit of a kludge.

I can't see a more obvious way to monitor an entire directory for changes in content. Is there a standard way to do this?

You can use the "archive_file" data source:

data "archive_file" "init" {
  type        = "zip"
  source_dir  = "data/"
  output_path = "data.zip"
}

resource "null_resource" "provision-builder" {
  triggers = {
    src_hash = "${data.archive_file.init.output_sha}"
  }

  provisioner "local-exec" {
    command = "echo Touché"
  }
}

The null resource will be reprovisioned if and only if the hash of the archive has changed. The archive will be rebuilt during refresh whenever the contents of source_dir (in this example data/) change.
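
On Terraform 0.12 or later you could also hash the directory contents directly with the fileset and filesha1 functions, skipping the intermediate archive. A minimal sketch, assuming the files live under data/ relative to the module:

resource "null_resource" "provision-builder" {
  triggers = {
    # One hash over the contents of every file under data/, computed at plan time.
    src_hash = sha1(join("", [for f in fileset("${path.module}/data", "**") : filesha1("${path.module}/data/${f}")]))
  }

  provisioner "local-exec" {
    command = "echo Touché"
  }
}

This hashes file contents only, so it is insensitive to timestamp changes, which sidesteps the zip-determinism question raised in the comments below.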

It doesn't seem that Terraform provides any directory tree traversal function, so the only solution I can think of is to use some kind of external tooling, like Make:

all: tf.plan

tf.plan: hash *.tf
        terraform plan -out=$@

hash: some/dir
        find $^ -type f -exec sha1sum {} + > $@

.PHONY: all hash

and then in your Terraform file:

resource "null_resource" "deploy_files" {    
  triggers = {
    file1 = "${file("hash")}"
  }

  # Copy files then run a remote script.
  provisioner "file" { ... }
  provisioner "remote-exec" { ... }
}
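
A related option, mentioned in the comments, is the external data source, which runs a program that must print a JSON object and exposes the result to Terraform as a map of strings. A hedged sketch, where checksum.sh is a hypothetical helper that hashes every file under the given directory and prints {"checksum": "<sha1>"} to stdout:

data "external" "dir_checksum" {
  # checksum.sh is assumed, not shown; it should emit deterministic JSON.
  program = ["bash", "${path.module}/checksum.sh", "my-dir"]
}

resource "null_resource" "deploy_files" {
  triggers = {
    checksum = "${data.external.dir_checksum.result["checksum"]}"
  }

  # Copy files then run a remote script.
  provisioner "file" { ... }
  provisioner "remote-exec" { ... }
}

Because data sources are read at plan time, the checksum is fresh on every run, which avoids the double-apply problem described in the question.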

Comments
  • Use make to generate the hash and then use this as a trigger? I am not aware of any other solution. Also, find my-dir -type f -exec sha1sum {} + should be faster.
  • You mean make as an external script that runs before terraform?
  • This is an interesting question, and I’ve never used Terraform in this way. But I wonder if this is just something Terraform wasn’t made to do - it manages infrastructure; file system handling seems like a second-class citizen. I would probably suggest this is where you want a wrapper script around Terraform to shape it to your needs.
  • @Joe yes. You could have a make task that would check and generate a checksum file for your directory, and then use that file's content as a trigger.
  • Thanks both. Agreed, the fact that this isn't directly supported might be a hint. But there are lots of examples that involve watching files, and watching a dir of files isn't so different. The files in question are Docker Stack deployment files plus supporting config, which I think are in Terraform territory.
  • That's really interesting! I think this depends on the determinism / repeatability of the output zipped bytes with regard to the input. Is this something that works by accident, or is the stability of the zip defined?
  • Agreed, I hadn't thought about that. Luckily, my implementation doesn't add timestamps or anything to the zip file itself, but even things like updated timestamps on the zipped files will change the archive bytes, change the hash, and invalidate the resource.
  • Brilliant, thanks very much. I saw the external resource thing, then I saw the JSON requirement and dismissed it. I'll give this approach a go.
  • @Joe, I have updated the checksum logic to neutralise platform-specific differences in find output. Please take a look. Thanks!
  • Thanks for your answer. I'll think about it.