Full-Time

Senior Partner Marketing Manager

Posted on 8/14/2025

HashiCorp

1,001-5,000 employees

Cloud infrastructure and security management tools

No salary listed

Remote in UK

Remote

Category
Growth & Marketing
Required Skills
Lead Generation
Salesforce
Requirements
  • 6–8 years of field marketing / demand generation experience, ideally in enterprise software or cloud infrastructure, with at least 3-4 years working directly with partners
  • Proven ability to collaborate with sales teams and translate business needs into actionable marketing programs
  • Experience in marketing to enterprise technology buyers, with an understanding of buyer journeys and regional nuances across EMEA
  • Strong background in planning and executing physical and virtual events and partner campaigns
  • Highly organized, self-starter mindset with excellent project management and communication skills
  • Familiarity with Salesforce, Marketo (or similar marketing automation platforms), and marketing analytics tools
  • Fluent in English
  • Willingness to travel within the region (~10-20%) and periodic travel to the United States (~2-3x per year)
Responsibilities
  • Ensure top IBM partners can play an active role in promoting HashiCorp solutions and that current HashiCorp partners can take full advantage of the IBM partner program
  • Build and execute the HashiCorp partner marketing strategy for EMEA in connection with regional sales leaders and global marketing counterparts
  • Align marketing tactics with pipeline goals and regional sales priorities, helping to create more active sales opportunities, accelerate deal progression and improve conversion rates
  • Collaborate with global marketing counterparts to adapt campaigns for regional and local execution while maintaining consistent branding and messaging
  • Manage regional budgets and vendors to ensure high-quality execution and cost-effective results
  • Track and report on key metrics including campaign performance, lead generation, pipeline contribution, and ROI
  • Collaborate with Product Marketing, Digital Marketing, Global Events and Executive Programs to ensure regional messaging and programs are aligned
Desired Qualifications
  • Experience marketing to DevOps, platform teams, infrastructure, or security personas
  • Familiarity with HashiCorp tools or other open-source developer tools
  • MBA is a plus

HashiCorp provides software tools to automate, secure, and manage infrastructure across multi-cloud and hybrid environments. Its products cover provisioning, security, and governance for resources in public clouds (AWS, Google Cloud, Azure) and on-premises data centers, typically using infrastructure-as-code and policy-as-code workflows. The company offers both open-source editions and paid enterprise versions; open-source products build a broad user base while enterprise editions add additional features, support, and services for larger organizations. HashiCorp differentiates itself by focusing on multi-cloud orchestration and security management across the full infrastructure lifecycle, with a strong emphasis on automation, cost optimization, and governance. Its goal is to help organizations simplify operations, reduce cloud costs, and manage complex environments securely and compliantly.

Company Size

1,001-5,000

Company Stage

IPO

Headquarters

San Francisco, California

Founded

2012

Simplify Jobs

Simplify's Take

What believers are saying

  • Terraform Stacks GA automates multi-environment orchestration at HashiConf 2025.
  • Agent Skills library enables AI to generate compliant Terraform code instantly.
  • Google Cloud Provider 7.0 adds ephemeral resources preventing secrets in state.

What critics are saying

  • CVE-2025-8959 vulnerability erodes trust, accelerating OpenTofu migrations now.
  • IBM acquisition locks customers into watsonx, driving defections in 6 months.
  • OpenTofu incompatibility with Stacks forces enterprises to abandon HCP in 12 months.

What makes HashiCorp unique

  • HashiCorp leads multi-cloud automation with Terraform, Vault, and Packer tools.
  • Open-source foundation converts developers to enterprise subscribers seamlessly.
  • HCP Terraform unifies hybrid infrastructure lifecycle management across clouds.


Benefits

Medical, dental & vision

Life & disability insurance

Flexible spending account (FSA)

Vacation and Other Leaves

401(k)

Family Expansion Benefit

Maternity and Parental Leave

Expanded Mental Health Support

Growth & Insights and Company News

Headcount

6 month growth

0%

1 year growth

2%

2 year growth

3%
The AI Journal Ltd
Apr 6th, 2026
The identity problem agent swarms can't ignore.

The identity problem agent swarms can't ignore. By Jeff Malnick, GM of Developer & AI at 1Password and a former VP at IBM and HashiCorp.

I downloaded OpenClaw when it first came out, mostly out of curiosity. The tool shows you an install prompt and asks if you want to proceed. I said no. A day later, its security team told me I was the subject of a security policy violation: the install script had npm-globally installed the tool on my machine anyway, regardless of what I clicked.

This story matters for security teams and business leaders alike because it illustrates a problem every company is facing right now. Once OpenClaw, or any locally installed agent, is running on your machine, it has access to your filesystem, your SSH directory, your AWS credential directory, and your graphics card. Most people downloading it do not fully understand the breadth of access they are granting and what that access enables. In practice, they've installed a user-authorized backdoor. OpenClaw is not an outlier; it is a visible example of a problem agentic AI systems will have to confront as they move into broader use.

Execution models define the risk. Recent demonstrations of agent swarms have shown very different execution models. In one model, agents operate inside tightly controlled environments, with explicit access boundaries and defined scope. Those demonstrations show what coordinated swarms can accomplish when the runtime is constrained. But the risk profile changes significantly when those controls are removed. OpenClaw represents a different execution model: it is built for broad accessibility and capability, and it relies on ambient access to local machines, networks, and whatever credentials happen to be present. Coordination emerges through shared channels rather than through tightly controlled runtime boundaries. From the outside, the capabilities may look similar.
The execution model is what ultimately changes the risk profile, because in a controlled environment access is explicitly granted and tightly scoped. On a local machine, an agent inherits whatever the machine already has. That becomes hard to reconcile with identity systems built around static, point-in-time delegation.

The delegation problem. Almost every agentic workflow today follows the same basic pattern: a human delegates access to an agent at a specific point in time, based on an assumption that the outcome will be deterministic. The agent will do what you intended when you authorized it, and that authorization will continue to reflect your original intent. In traditional systems, where execution paths are predictable and bounded, that assumption often holds. With LLM-driven agents, that predictability weakens. The behavior isn't fixed but evolves as the system encounters new inputs and context. Context can evolve mid-execution through prompt injection or other inputs, and the effective intent of the system can shift over time while the original authorization remains static. The authorization granted at a single moment can end up governing behavior that no longer reflects the original intent.

The problem gets worse once you factor in speed, because identity and authorization protocols were designed assuming a human would always be in the loop. Someone approves access, an action occurs, and approval is revisited if something changes. Agents don't operate at human speed. They act continuously, at machine speed, often across many tasks simultaneously. If you require re-authorization at every decision point, you introduce friction that quickly erodes the productivity gains that made swarms attractive in the first place. At swarm scale, especially in environments where agents operate across shared infrastructure and production systems, the impact of a single compromised agent does not remain contained.
When authority is ambient rather than explicitly scoped, the failure propagates through the coordination layer itself.

What the production version actually requires. For agent swarms to operate safely in production environments, several conditions need to hold at the same time. The runtime must support isolation, coordination, and state management without relying on implicit machine-level access. Each agent needs an explicit identity from creation through execution, rather than inheriting authority from the environment in which it runs. Every action must be attributable to a specific agent and the authorization that granted it, with the ability to revoke access when circumstances change.

The human oversight problem is harder to resolve. Teams do not want to approve every agent action because that undercuts the productivity benefit. Removing humans from the loop entirely introduces a different risk profile. What's needed is a clear boundary between what the swarm is allowed to do on its own and what requires escalation. That boundary has to be enforced by the system at machine speed, not by someone watching a dashboard.

What it looks like when it works. Consider a reliability incident in a cloud-native environment. A swarm might be tasked with investigating the degradation: one set of agents reviewing logs, another correlating metrics across services, another evaluating possible remediation steps. As severity increases, additional agents are assigned; as conditions stabilize, activity winds down. At each stage, actions remain bounded by explicitly delegated authority and subject to escalation if risk thresholds are crossed. The system must adapt to changing conditions without abandoning those boundaries. Agents may collaborate and divide work, but identity and authority remain explicitly defined throughout execution. Scaling behavior does not alter the underlying identity or access constraints.
When an action falls within delegated authority, agents proceed using credentials that are scoped to the specific task and bounded in time. They don't rely on whatever long-lived access happens to be present on the host, so if risk conditions change or elevated access is required, the system pauses and escalates rather than continuing on inherited privileges. Instead of relying on ambient trust, policy enforcement is continuous, and access remains revocable throughout the swarm's lifecycle.

The difference from uncontrolled swarm models isn't cosmetic. It's structural, because it affects how identity, access, and failure behave as the system scales. In systems built on ambient authority, agents inherit whatever access the host environment provides, which can create compounding risk as scale increases. In systems built on explicit identity and bounded delegation, scaling does not require surrendering control.

The intent problem. Authentication does not address what happens once an agent begins acting under delegated authority. As execution continues, context can shift and new inputs can change the effective intent of the system, while the original authorization remains static. Most identity and access models built around point-in-time grants are not well suited to reassessing that intent at machine speed. This results in a widening gap between what was approved and what the system is actually doing. Enforce human reauthorization strictly and you constrain the productivity gains that made swarms worth building. Relax it and access persists without any ongoing assessment of whether the agent's behavior still reflects the original intent. The system ends up oscillating between friction and blind trust, and neither position is workable at production scale. What is required is a way to translate a high-level access decision (granting a swarm access to defined resources for a specific purpose) into scoped, time-bound authority that remains revocable throughout execution.
Current identity and access models are not designed to express or enforce that kind of delegation.

Mitigating near-term risk. For those experimenting with OpenClaw or similar tools, isolation matters. Use dedicated sandbox hardware rather than a primary work machine, and create fresh accounts that are not connected to corporate systems. Resources should be separated and access tightly scoped. Anything the tool can reach should be treated as potentially exposed, because filesystem-level access provides broad visibility into local credentials and data.

For security teams considering agent governance, the central issue remains ambient authority. Agents should operate with explicit identity rather than inheriting access from the host environment, and credentials should be scoped to defined tasks and bounded in time. Logging must preserve attribution at the agent level so actions can be traced to a specific identity and authorization decision.

Donella Meadows wrote in Thinking in Systems that changing outcomes requires changing the system that produces them. The developers building tools like OpenClaw and the users experimenting with them are responding to incentives that prioritize productivity gains. Security infrastructure has not evolved at the same pace, and the gap is becoming harder to ignore. If agent swarms are to deliver their productivity gains in production environments, authentication and authorization systems will need to evolve alongside them. Agents must be able to authenticate as distinct identities, operate with tightly scoped authority, and remain subject to continuous policy enforcement without constant human intervention. The goal is not to limit automation. It's to make automation governable at scale.

Ameeba
Mar 30th, 2026
CVE-2025-8959: unauthorized read access vulnerability in HashiCorp's go-getter library.

CVE-2025-8959: unauthorized read access vulnerability in HashiCorp's go-getter library. March 30, 2026

HashiCorp's go-getter library, widely used for file downloading, has been found to be vulnerable to symlink attacks, potentially resulting in unauthorized read access beyond the designated directory boundaries. This vulnerability, designated CVE-2025-8959, poses a significant threat to system security and data integrity, as it can lead to system compromise or data leakage.

Vulnerability Summary
  • CVE ID: CVE-2025-8959
  • Severity: High (7.5 CVSS Score)
  • Attack Vector: Symlink Attack
  • Privileges Required: None
  • User Interaction: None
  • Impact: Unauthorized read access beyond the designated boundaries, leading to potential system compromise or data leakage.

Affected Products
  • HashiCorp go-getter, versions < 1.7.9

How the Exploit Works
The vulnerability is exploited through a symlink attack, in which a malicious actor creates a symbolic link to a file outside the designated directory. This allows the attacker to bypass the directory restrictions, gaining read access to files that should be inaccessible. Any product or system using a vulnerable version of the go-getter library could be at risk, potentially exposing sensitive information or system files.
Conceptual Example Code
A conceptual example of the exploit in a shell session could be as follows:

  # Attacker creates a symlink to a file outside the designated directory
  ln -s /etc/passwd ./symlink

  # Attacker uses go-getter to download the symlink, resulting in
  # unauthorized access to /etc/passwd
  go-getter ./symlink /path/to/download

Mitigation
Users are advised to upgrade to go-getter version 1.7.9 or later, which contains a patch for this vulnerability. If an upgrade is not immediately possible, a potential temporary mitigation could involve the use of a Web Application Firewall (WAF) or Intrusion Detection System (IDS) to monitor and block suspicious activity. However, these should not be considered long-term solutions, and an upgrade to a patched version of the software should be undertaken as soon as possible.

Disclaimer: The information and code presented in this article are provided for educational and defensive cybersecurity purposes only. Any conceptual or pseudocode examples are simplified representations intended to raise awareness and promote secure development and system configuration practices. Do not use this information to attempt unauthorized access or exploit vulnerabilities on systems that you do not own or have explicit permission to test. Ameeba and its authors do not endorse or condone malicious behavior and are not responsible for misuse of the content. Always follow ethical hacking guidelines, responsible disclosure practices, and local laws.
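The generic defense against this class of bug is to resolve symlinks before honoring a path and reject anything that escapes the intended directory. Below is a minimal, language-agnostic illustration of that containment check in Python (go-getter itself is a Go library; `within_base` is a hypothetical helper name for this sketch, not go-getter's API):

```python
import os
import tempfile
from pathlib import Path

def within_base(base: str, target: str) -> bool:
    """Reject paths that resolve (through symlinks) outside the base directory."""
    base_real = Path(base).resolve()
    target_real = Path(target).resolve()  # follows symlinks
    return base_real == target_real or base_real in target_real.parents

with tempfile.TemporaryDirectory() as download_dir:
    # Mirror the article's attack: a symlink inside the download
    # directory that points at a file outside it.
    evil = os.path.join(download_dir, "symlink")
    os.symlink("/etc/passwd", evil)

    safe = os.path.join(download_dir, "file.txt")
    Path(safe).write_text("ok")

    print(within_base(download_dir, safe))  # True: stays inside the directory
    print(within_base(download_dir, evil))  # False: resolves to /etc/passwd
```

The key detail is resolving the path first and comparing against the resolved base; a naive string-prefix check on the unresolved path is exactly what a symlink attack bypasses.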

HashiCorp
Feb 2nd, 2026
Introducing HashiCorp Agent Skills

Introducing HashiCorp Agent Skills. HashiCorp has released an open library of Agent Skills to help teams build and manage infrastructure faster with AI.

Today, HashiCorp is announcing HashiCorp Agent Skills, a repository of Agent Skills and Claude Code plugins for HashiCorp products. At launch, this includes Skills for Terraform and Packer. These Skills give AI assistants specialized HashiCorp product knowledge, including plugin framework architectures, schema definitions, and up-to-date best practices. The initial HashiCorp Agent Skills pack includes Skills that can:

* Follow HashiCorp style conventions when generating Terraform code
* Write and run Terraform tests
* Create orchestrations with Terraform Stacks
* Help build Terraform providers according to best practices
* Refactor Terraform modules
* Build AWS, Azure, and Windows images with Packer
* and more

This post covers what Agent Skills are, how these Skills improve your infrastructure workflow, and how to install them in your AI assistant.

What are Agent Skills? Agent Skills are based on an open standard for packaging domain expertise into portable, reusable instructions that AI agents can load on demand. Developed by Anthropic and released as an open format, Skills solve a fundamental problem: AI assistants often lack the specific technical context needed to perform complex tasks reliably.

A note on the Model Context Protocol (MCP). You might be wondering how this differs from MCP. Agent Skills and MCP are complementary technologies. MCP is the "pipe" or server interface that connects data to an AI, while Agent Skills are the "textbooks" of knowledge. You can use them together to create a powerful, context-aware assistant.

Each Skill is a folder containing instructions, reference materials, and resources. When you load a Skill, your AI assistant gains access to curated expertise it can apply to your work, significantly reducing hallucinations and adhering to strict architectural standards.
Skills for every stage of your DevOps journey. The HashiCorp Agent Skills package currently includes Skills that address the most common challenges that Terraform and Packer users face, with more planned for the future:

Building a new provider: Creating a Terraform provider requires understanding of the plugin framework, resource lifecycle methods, schema design, and testing patterns. The provider development Skills give AI assistants the context to guide you through the entire process, from scaffolding a new provider to implementing complex data sources to testing, all without having to point your AI at different documents manually or risk bad practices and nonsensical results creeping in.

Maintaining an existing provider: Provider maintenance involves handling breaking changes, updating to new framework versions, and addressing community issues. The Terraform Skills also help AI assistants understand your existing codebase and suggest changes that follow established patterns.

Generating quality Terraform code: HashiCorp has baked its coding conventions into AI workflows with the Terraform style guide Skill. When you generate Terraform configurations with the style guide Skill, the code will follow HashiCorp's documented style conventions, rather than potentially using conventions from code found in the wild.

Breaking down monoliths: The refactor module Skill helps refactor monolithic Terraform configurations into modules, making your configurations more reusable and manageable.

Using Terraform Stacks: Terraform Stacks are a configuration layer in HCP Terraform and Terraform Enterprise designed to manage complex, multi-environment, multi-region infrastructure as a single, cohesive unit. With the Terraform Stacks Skill, you can simplify the coding process for Stack components without as many general-LLM pitfalls.
Building machine images with Packer: Packer Skills help users build golden images across AWS, Azure, and Windows with proper builder configurations, provisioners, platform-specific patterns, and HCP Packer integration for image lifecycle management.

Evaluating the Agent Skills. A key part of creating these Skills was evaluating their efficacy and iteratively improving them based on evaluation data. Skills can significantly improve how an agent completes a task when written well. Written poorly, they may consume too much context with little gain. They can also lack critical information or be phrased in ways that different models interpret inconsistently. HashiCorp partnered with Tessl to evaluate and improve the Agent Skills, using two evaluation techniques:

* Review evals, which test Skill structure against Anthropic's best practices
* Task evals, which run agents through real tasks with and without the Skill to assess results

You can see the full review eval results on the listing.

Install Skills in seconds. HashiCorp designed the installation process to be as simple as possible. You have a few options:

* Using npx, run: npx skills add hashicorp/agent-skills
* Using Tessl, run: npm i -g @tessl/cli && tessl i github:hashicorp/agent-skills
* For Claude Code specifically, run: /plugin marketplace add hashicorp/agent-skills, then /plugin install terraform-provider-development@hashicorp

Any of these methods will install the Skill files directly into your AI assistant's configuration directory, with no manual copying or configuration editing.

What's next. This initial release covers Terraform and Packer, but HashiCorp plans to expand the library to additional HashiCorp products soon. HashiCorp also welcomes contributions from the community. If you have expertise to share or ideas for new Skills, visit the repository to get involved.
Let HashiCorp know how these Skills help your workflow, and what you'd like to see next, by opening an issue in the repository.

More resources:

* Keep up to date with HashiCorp Agent Skills: follow the GitHub repository. Install via /plugin, Tessl, or npx.
* Learn the standard: read about Agent Skills to understand how the open format works.
* Scale your impact: sign up for HCP Terraform and HCP Packer to manage your new infrastructure configurations at scale.

Arnav Sharma
Nov 20th, 2025
Terraform Stacks: What's for Infra at Scale

Terraform Stacks: what's for infra at scale. Managing Terraform at scale has consistently presented operational challenges. The typical patterns include workspaces that proliferate uncontrollably, duplicated configuration across environments, fragile orchestration scripts, and the persistent risk that a single misapplied change will propagate through production infrastructure. At HashiConf 2025, HashiCorp released Terraform Stacks to General Availability in HCP Terraform across all RUM-based plans. This represents a fundamental shift in configuration architecture, enabling teams to manage collections of modules and deployments as unified, orchestrated units across environments, regions, accounts, and cloud providers.

The infrastructure management problem.

* Monolithic root modules create state files that become unmanageable. Plan operations slow to a crawl, and the blast radius of any change encompasses the entire infrastructure footprint.
* Workspace-based architectures require external orchestration through Terragrunt, Terramate, or custom scripts. These solutions function adequately but introduce maintenance overhead for an entire orchestration layer that exists outside the Terraform ecosystem.
* Data source dependencies between workspaces create brittle coupling. Remote state references require manual ordering, and changes don't propagate automatically. This pattern has caused production incidents when dependent workspaces weren't updated in the correct sequence.

Platform teams spend disproportionate time maintaining orchestration infrastructure rather than building reusable components. Stacks inverts this model: infrastructure is defined once as components (reusable modules), then deployment targets specify where and how many times to instantiate it. Terraform manages orchestration, dependency resolution, and change propagation automatically.

Core architecture concepts.

* Components represent reusable infrastructure modules instantiated within a stack.
These function similarly to child modules in traditional Terraform.
* Deployments are individual instances of the complete stack configuration. A deployment might represent an environment (production, staging, development) or a geographic region (us-east-1, eu-west-1). Each deployment uses identical component definitions with variations driven exclusively by input variables.
* Stacks encompass the complete set of components and deployments, managed as a unified entity within HCP Terraform.
* Linked Stacks enable cross-stack dependencies where one stack consumes outputs from another (networking stack outputs consumed by application stacks).

Critical architectural principle: all deployments within a stack share identical component configurations. Environmental differences derive solely from input variables, enforcing consistency across the infrastructure.

Configuration structure. *.tfcomponent.hcl files define components (infrastructure resources and their relationships). *.tfdeployment.hcl files define deployment targets (where infrastructure gets provisioned). This configuration defines three independent deployments managed as a single stack in HCP Terraform. Module updates automatically trigger plan operations across all affected deployments.

General Availability features.

Automatic dependency resolution. HCP Terraform constructs dependency graphs across components and linked stacks, eliminating custom orchestration scripts.

Deployment groups (premium). Deployments can be logically grouped (canaries, geographic regions, business units) with defined orchestration rules: sequential application, parallel execution, conditional auto-approval based on change scope.

Partial planning and deferred changes. Multi-region deployments frequently encounter scenarios where downstream components require values unavailable until upstream resources complete provisioning. Stacks support partial planning, applying known configurations while deferring dependent resources.
Changes propagate automatically through the dependency chain.

CLI integration. The standalone terraform-stacks-cli has been deprecated; functionality is now integrated into the standard Terraform CLI.

Self-hosted agent support. Stacks can execute on self-hosted infrastructure for air-gapped environments or regulatory compliance requirements.

Billing model. Stack resources count toward standard RUM (Resources Under Management) metrics with no additional licensing components.

Migration strategy. Stacks and traditional workspaces coexist within projects. Migration can proceed incrementally:

* Extract shared modules as stack components
* Migrate applications individually
* Maintain hybrid architectures during transition

Organizations using Terragrunt or Terramate will find most orchestration capabilities available natively through Stacks. Several enterprises have initiated migration roadmaps based on reduced operational overhead from eliminating wrapper tooling. The GA migration documentation provides detailed transition procedures.

OpenTofu compatibility. Terraform Stacks require HCP Terraform (cloud or Enterprise platform). The functionality is not available in the open-source Terraform CLI or OpenTofu. OpenTofu has discussed similar orchestration concepts (GitHub issue #931) but has not released equivalent functionality. Organizations requiring fully open-source toolchains will continue using Terragrunt, Terramate, or similar orchestration solutions. For teams on HCP Terraform, Stacks provide significant productivity improvements over external orchestration tools.

Implementation steps.

* Enable Stacks in organization settings (Settings | General | Stacks)
* Create or select a target project
* Review the official tutorial: https://developer.hashicorp.com/terraform/tutorials/cloud/stacks-deploy
* Reference the language documentation: https://developer.hashicorp.com/terraform/language/stacks

Operational impact.
Terraform Stacks address the orchestration gap that previously required external tooling at enterprise scale. The abstraction layer enables platform teams to provide self-service infrastructure provisioning with enforced consistency across deployments. For teams managing infrastructure across multiple environments, regions, or accounts, Stacks reduce operational complexity while improving reliability through automated dependency management and change propagation. The infrastructure-as-code ecosystem has evolved substantially with this release.
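At its core, the automatic dependency resolution described above (applying a component only after the components whose outputs it consumes) is a topological ordering over the component graph. A minimal conceptual sketch in Python, using a hypothetical four-component stack invented for illustration, not HashiCorp's actual implementation:

```python
from graphlib import TopologicalSorter

# Hypothetical component dependency graph for one stack:
# each key maps to the set of components whose outputs it consumes.
components = {
    "network": set(),
    "cluster": {"network"},
    "database": {"network"},
    "app": {"cluster", "database"},
}

# static_order yields components so that every dependency
# appears before the components that consume it.
order = list(TopologicalSorter(components).static_order())
print(order)  # 'network' first, 'app' last; cluster/database order may vary
```

Tools like Terragrunt reimplement this ordering in an external wrapper layer; the point of Stacks GA is that the platform computes and enforces it natively.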

InfoQ
Oct 18th, 2025
Terraform Google Cloud Provider 7.0 Reaches General Availability

Terraform Google Cloud provider 7.0 reaches general availability. By Mark Silvester, Platform and Architecture Manager.

HashiCorp has announced the general availability of version 7.0 of the Terraform provider for Google Cloud, introducing new features focused on improving security and validation across infrastructure code. In the announcement, the company said the release "continues to expand on these security-first features" and is intended to help teams safely and predictably manage their Google Cloud resources at scale. The release aligns with Google's broader support for Terraform as part of its Infrastructure Manager documentation, which provides official guidance for deploying resources on Google Cloud. The provider has now surpassed 1.4 billion downloads and supports more than 800 resources and 300 data sources.

Version 7.0 builds on capabilities introduced in recent Terraform releases, including ephemeral resources and write-only attributes, both designed to keep sensitive data out of Terraform state files. Ephemeral resources, supported since Terraform 1.10, allow teams to generate short-lived credentials that never touch persistent state. According to the announcement, the update adds support for new ephemeral types, including google_service_account_access_token, enabling temporary credentials to be used securely during plan or apply operations. Write-only attributes, introduced in Terraform 1.11, extend this concept by allowing secrets such as passwords or API keys to be sent to the API without being recorded. The company added that the release expands the use of write-only attributes across additional resources, ensuring that sensitive values remain transient and confidential.

Version 7.0 also enforces stricter schema validation to catch configuration errors earlier. Attributes that the Google Cloud API effectively requires are now treated as mandatory, meaning validation happens during planning rather than at apply time.
Some attributes have been deprecated or renamed to align with current Google Cloud APIs, prompting users to review configurations before upgrading. As a major version, the release introduces breaking changes. The official upgrade guide advises migrating first to the latest 6.x release and testing in non-production environments. The release notes confirm these changes, listing the removal of deprecated resources, such as google_beyondcorp_application, and new additions, like google_network_services_wasm_plugin.

Florin Lungu, a maintainer of the provider, described the release on LinkedIn as one that "introduces ephemeral resources, write-only attributes, and validation logic", reflecting a broader shift toward stronger security and reliability in Terraform's cloud integrations.

For organisations managing infrastructure at scale, version 7.0 delivers meaningful improvements in how secrets and configurations are handled. Secrets are less likely to leak through Terraform state, and validation now catches errors earlier in the lifecycle. While migration may require effort, the enhanced security model is likely to appeal to teams seeking greater assurance over infrastructure automation.

Mark Silvester is a Platform and Architecture Manager at Griffiths Waite, a software consultancy based in Birmingham, UK. He is responsible for platform strategy, with a focus on delivering innovative solutions for enterprise clients. His areas of interest include cloud-native technologies, DevOps practices, and the practical application of AI in engineering and architecture.

INACTIVE