I use Terraform to manage my CloudFlare environment. Currently, all of my config exists in a single main.tf file, plus a few separate .tf files for specific purposes (e.g. DNS zones). At the end of the day, this environment is treated as one and is managed solely by me. The backend config is in provider.tf, so the state file sits in Azure Blob Storage. My folder structure is pretty simple:
.
└── tf/
    ├── main.tf
    ├── provider.tf
    ├── outputs.tf
    └── dns.zones.tf
I have some CloudFlare Teams Lists configured in main.tf that use csvdecode to read a CSV and populate the Teams Lists as needed; however, this CSV is not maintained by me. When that CSV is updated, an Azure Logic App and an Azure DevOps build/release pipeline trigger a workflow that runs init/plan/apply against the config. The problem is that when the CSV is updated, ALL pending changes are applied, not just the CSV changes. The list resources look roughly like this (the resource name, CSV path, and column are placeholders, not my actual config):
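```hcl
# Illustrative sketch only: resource name, CSV path, and column are placeholders.
resource "cloudflare_teams_list" "blocked_domains" {
  account_id = var.cloudflare_account_id
  name       = "Blocked Domains"
  type       = "DOMAIN"

  # csvdecode() returns a list of maps keyed by the CSV header row.
  items = [for row in csvdecode(file("${path.module}/blocked.csv")) : row.domain]
}
```

To avoid the possibility of unwanted changes being applied courtesy of the other party triggering the apply, I thought I'd split the Lists/CSV role into its own environment and have done this: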
.
└── prod/
    ├── main.tf
    ├── provider.tf
    ├── outputs.tf
    ├── dns.zones.tf
    └── lists/
        ├── main.tf
        ├── provider.tf
        └── outputs.tf
lists/provider.tf has been updated with its own state file, and lists/main.tf contains only the required list resources; the lists have been removed from prod/main.tf. When I run terraform init in the lists directory, all looks well and the state file is created as needed. For reference, the backend block in lists/provider.tf looks roughly like this (the storage names are placeholders), with its own key so it gets a separate state file:
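```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"       # placeholder
    storage_account_name = "tfstatestorage"   # placeholder
    container_name       = "tfstate"          # placeholder
    key                  = "lists.terraform.tfstate" # separate key from prod's state
  }
}
```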
However, when I run a terraform plan, the list resources are treated as missing from the new state, because they actually exist in CloudFlare and are still managed by (and recorded in the state of) prod/main.tf.
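I assume I need to move the resources between the two states, something like the following (the resource address is hypothetical, and I'd double-check the provider docs for the exact import ID format, which I believe is `<account_id>/<list_id>`):

```sh
# In prod/: stop tracking the list in prod's state without destroying it.
terraform state rm cloudflare_teams_list.blocked_domains

# In lists/: adopt the existing CloudFlare list into the new state.
terraform import cloudflare_teams_list.blocked_domains <account_id>/<list_id>
```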
To separate the management of the prod and lists environments, am I going about this the right way?
I've also tried configuring modules (roughly as sketched below), but I'm not sure that's the right way to go.
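For reference, the module attempt looked something like this (the source path and variable are illustrative):

```hcl
# In prod/main.tf: calling lists as a child module.
# A child module shares the root module's state file, so this still
# keeps everything in the one prod state.
module "teams_lists" {
  source     = "./lists"
  account_id = var.cloudflare_account_id
}
```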