I will say I've had a lot of success with AI and boilerplate HCL.
I try to avoid modules out of the gate until I know the shape of a system and the lifecycles of things, and I've been pleasantly surprised with how well the AI agents get AWS things correct out of the gate with HCL.
This should supercharge this workflow, since it should be able to pull out the provider docs / code for the specific version in use from the lockfile.
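For reference, the pinned version an agent would need to match lives in `.terraform.lock.hcl`. A trimmed, illustrative excerpt (the version, constraint, and hash are made up):

```hcl
# Illustrative .terraform.lock.hcl fragment -- values are hypothetical
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.31.0"
  constraints = "~> 5.0"
  hashes = [
    "h1:illustrative-hash-value=",
  ]
}
```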
Me too. Having not done it for a couple of years, I got a full private GKE VPC system and the live config etc., incl. ArgoCD, deployed and managed by tf, all set up in like 3 or 4 days. I know it's meant to be hours… but real life.
What I enjoyed about using Cursor was that when shit went wrong it could generate the gcloud CLI commands etc. to interrogate, add the results of that to the agent feed, then continue.
Finding the right command every time is the real time saver.
Ok, it's probably something that a developer should know how to do, but who remembers every single command for every cloud provider's CLI?
Querying the resources' actual state makes these AI infra tools so powerful; I found them useful even when I had to manage Hetzner-based Terraform projects.
100%. The real unlock/augmentation is not having to remember everything to type.
This has to be the most complicated method of reading docs ever created?
(I write as someone who really likes Terraform, fwiw.)
Does anyone know of an MCP server like this that can work with Terragrunt?
I'd think this would work, as the four tools listed are about retrieving information to give agents more context on correct providers and modules. Given that Terragrunt works with Terraform directly, I'd think it would help with it as well; just add rules/prompts that are explicit about the generated code using the Terragrunt file structure / Terragrunt commands.
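As a sketch of the shape such a rule would steer the agent toward, a minimal hypothetical `terragrunt.hcl` (the module path and inputs are made up, not from the thread):

```hcl
# Hypothetical terragrunt.hcl -- source path and inputs are illustrative
include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "../../../modules//vpc"
}

inputs = {
  cidr_block = "10.0.0.0/16"
}
```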
And it’s MPL, so you’re free to use it with OpenTofu as well (even if competing with Hashicorp).
But as mdaniel notes in a sibling thread, this doesn’t seem to do much at this point.
Maybe I don't understand this too well, but isn't this basically a wrapper for github.com/mark3labs/mcp-go/server?
I dunno about this. Infra-as-code has always been a major source of danger. Now we want to put AI on it?
There's zero danger writing Terraform. The danger is running `apply`.
What's the point of writing tf if you never mean to apply it?
You apply with a human in the loop.
We run 100% IaC and are very happy with it.
No clue why you would say it's a major source of danger. We have plenty of mechanisms in place to prevent issues, and due to the nature of IaC and how we handle state, we could literally tear everything down and be back up and running in around 2h, with a complex system of 10 components based on k8s.
Initially thought the MCP acronym would stand for "Master control program". Was disappointed.
Then you may enjoy Introduction to MCP Security - https://news.ycombinator.com/item?id=44015162 - May, 2025 (2 comments)
Funny anecdote, I asked Claude 3.7 to explain MCP to me and it went on blabbering on something about Master Control Programs.
Oh, just what I needed to raise my RUMs and send my Hashicorp bill through the roof!
> Oh, just what I needed to raise my RUMs and send my Hashicorp bill through the roof!
Out of curiosity, what are you paying them for? Most orgs that use tf don't.
I didn't downvote you, but this thing is actually back to their MPL-2 roots (for now!), and I don't see any references to github.com/hashicorp/terraform in https://github.com/hashicorp/terraform-mcp-server/blob/v0.1.... I would guess patching https://github.com/hashicorp/terraform-mcp-server/blob/v0.1.... to accept an env-var or config flag or whatever would decouple it from their centralization.
The flip side of that coin is that it similarly just(?) seems to be a fancy way of feeding the Terraform provider docs to the LLM, which was already available via `tofu providers schema -json` without all this HTTP business. IMHO, the attributes in the provider binary that don't have populated "description" fields are a bug.
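To illustrate how little plumbing that already took (this is not part of the MCP server): a small Python sketch that pulls the `description` fields out of the JSON that `tofu providers schema -json` (or `terraform providers schema -json`) emits. The sample document below is hand-written in the same shape, trimmed to just the fields used.

```python
import json

# Hand-written stand-in for `tofu providers schema -json` output,
# trimmed to the fields this sketch actually reads.
sample = json.loads("""
{
  "format_version": "1.0",
  "provider_schemas": {
    "registry.terraform.io/hashicorp/aws": {
      "resource_schemas": {
        "aws_instance": {
          "block": {
            "attributes": {
              "ami": {"type": "string", "optional": true,
                      "description": "AMI to use for the instance."}
            }
          }
        }
      }
    }
  }
}
""")

def attribute_docs(schema: dict) -> dict:
    """Map 'resource.attribute' -> its description string."""
    docs = {}
    for pschema in schema.get("provider_schemas", {}).values():
        for resource, rschema in pschema.get("resource_schemas", {}).items():
            attrs = rschema.get("block", {}).get("attributes", {})
            for name, meta in attrs.items():
                docs[f"{resource}.{name}"] = meta.get("description", "")
    return docs

print(attribute_docs(sample))
```

In real use you would pipe the CLI output into this instead of the inline sample, e.g. `tofu providers schema -json | python extract_docs.py`.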