Full-Time

Platform/Infrastructure Engineer

Posted on 8/21/2025

LangChain

201-500 employees

Open-source framework for LLM-powered apps

Compensation Overview

$145k - $195k/yr

+ Equity

Boston, MA, USA + 2 more

More locations: San Francisco, CA, USA | New York, NY, USA

In Person

Multiple in-office locations in the United States: San Francisco (CA), New York (NY), and Boston (MA). No remote option specified; applicants should be based in or willing to relocate to one of these cities. Company offices exist elsewhere, but this role is located in one of the listed cities.

Category
DevOps & Infrastructure
Required Skills
Kubernetes
Microsoft Azure
Docker
AWS
Helm
Google Cloud Platform
Responsibilities
  • Design and Scale Infrastructure: Build and maintain scalable, high-throughput infrastructure solutions using Kubernetes, Helm, Docker, and multi-cloud environments (AWS, Azure, GCP) to support flagship SaaS products like LangSmith and LangGraph Platform.
  • Drive Reliability and Performance: Ensure platform reliability, security, and performance through robust monitoring, alerting, automated recovery systems, and proactive maintenance, including performance tuning and database optimization.
  • Contribute to Platform Strategy: Influence infrastructure strategy, tooling, and operational practices as the organization scales from startup to enterprise.
  • Enable Secure, Efficient Operations: Implement security best practices, compliance requirements, and infrastructure cost optimization strategies while architecting for high availability, disaster recovery, and resource efficiency.
  • Develop Automation and CI/CD Pipelines: Build and optimize CI/CD pipelines, infrastructure as code, and deployment automation strategies to streamline application delivery.
  • Support Customer Deployments: Create and maintain deployment solutions and monitoring tools for customer-hosted environments, and collaborate with engineering teams on application rollout and support.
  • Participate in Incident Response: Take part in the on-call rotation with a focus on learning, automation, and continuous improvement of incident response processes.
  • Document and Evolve Best Practices: Maintain comprehensive infrastructure documentation and stay up to date with emerging technologies and best practices in cloud-native systems.
Desired Qualifications
  • Proficiency with analytical databases (e.g. ClickHouse)
  • Background in high-growth startups
  • Previous experience in AI/ML infrastructure

LangChain provides an open-source framework for building applications powered by large language models (LLMs). It offers a modular toolkit with components like Model I/O, Data Connection, Chains, Agents, Memory, and Callbacks, allowing developers to create apps that can reason about and act on external data sources and APIs. The product works by letting users assemble chains of LLM calls, connect LLMs to data sources, enable agents to make decisions and use tools, persist state across interactions, and monitor activity through callbacks. This modular design differentiates LangChain from competitors by its emphasis on flexibility, extensibility, and open-source collaboration, enabling a wide range of users—from individuals to large enterprises—to tailor LLM-powered applications. The company's goal is to simplify the development and deployment of AI-powered applications, providing an adaptable framework that handles data integration, reasoning, and action for diverse use cases.
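The chain-and-memory design described above can be sketched with toy classes. These are not LangChain's actual API, just a minimal illustration of composing LLM-call steps into a chain and persisting state across interactions:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Memory:
    """Persists results across interactions (stand-in for LangChain Memory)."""
    history: List[str] = field(default_factory=list)

    def save(self, entry: str) -> None:
        self.history.append(entry)

@dataclass
class Chain:
    """Runs a sequence of steps, each standing in for an LLM call or tool."""
    steps: List[Callable[[str], str]]
    memory: Memory = field(default_factory=Memory)

    def run(self, text: str) -> str:
        for step in self.steps:
            text = step(text)      # pass each step's output to the next
        self.memory.save(text)     # persist the final result
        return text

# Stand-in steps: a "retriever" that injects context, and a fake model call.
retrieve = lambda q: f"context: LangChain docs | question: {q}"
fake_llm = lambda prompt: f"answer({prompt})"

chain = Chain(steps=[retrieve, fake_llm])
result = chain.run("what is a chain?")
```

The real framework generalizes this shape: steps become prompts, models, retrievers, and tools, and the memory component carries conversation state between calls.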

Company Size

201-500

Company Stage

Series B

Total Funding

$160M

Headquarters

San Francisco, California

Founded

2023

Simplify Jobs

Simplify's Take

What believers are saying

  • Axtria partnership on April 29, 2026, scales AI agents in pharma with GxP compliance.
  • Crawlbase integration on April 24, 2026, enables real-time web data for grounded agents.
  • DataCamp AI Engineering track boosts developer adoption of LangSmith and LangGraph.

What critics are saying

  • LlamaIndex overtakes LangChain in GitHub stars, eroding its RAG-framework dominance.
  • OpenAI's o3 native agent APIs, expected in Q3 2026, could make LangChain's abstractions obsolete.
  • AWS Bedrock Agents could commoditize orchestration, capturing enterprise customers within 6-12 months.

What makes LangChain unique

  • LangGraph has offered managed infrastructure for long-running, stateful AI agents since May 2025.
  • LangSmith ingests 1 billion events daily, serving 35% of Fortune 500.
  • Over 600 integrations from 3K community members power modular LLM orchestration.


Benefits

Company Equity

Growth & Insights and Company News

Headcount

6 month growth

-2%

1 year growth

-1%

2 year growth

0%
LangChain
Apr 9th, 2026
Previewing Interrupt 2026: agents at enterprise scale.

Last May, 800 of you came to The Midway in San Francisco for the inaugural Interrupt conference. Teams from Cisco, Uber, J.P. Morgan, Replit, LinkedIn, and BlackRock got on stage and told the truth about what it actually takes to put agents in production. LangChain launched LangSmith Deployment, shipped a redesigned LangSmith Studio, and rolled out new observability tools in LangSmith. If you were there, you know the energy. This year, LangChain is doing it again. Interrupt 2026 is May 13-14 at The Midway in San Francisco, and the lineup, the format, and the scale have all leveled up. 2026 is about agents at enterprise scale. Last year's question was "can agents work in production?" The answer, across dozens of talks, was a definitive yes. This year's question is different: how do you make agents work at enterprise scale, and what do the team, the tooling, and the infrastructure look like when agents aren't a proof of concept anymore? Interrupt 2026 is about the how. How are the largest companies in the world building agent platforms? How are they evaluating performance when the stakes are high? How are they structuring teams around agent engineering as a discipline? And how is the ecosystem, from model providers to infrastructure, evolving to support what comes next? Keynotes and fireside chats. Harrison Chase, co-founder and CEO of LangChain, will open each day of Interrupt with a keynote on what LangChain has learned from working with thousands of teams shipping agents over the past year, where its products are headed, and predictions about the industry. 
Andrew Ng, founder of DeepLearning.AI and one of the most influential voices in AI, will share his view on what's coming next for agents, and what it means for the developers and teams building them today. Chirantan "CJ" Desai, CEO of MongoDB, will sit down with Harrison for a fireside chat on how the world's largest enterprises are building with agents, and what the data layer looks like when agents move from experiments to production systems. Aaron Levie, co-founder and CEO of Box, the intelligent content management platform he launched in 2005, is a vocal advocate for AI-driven enterprise software and frequently writes and speaks on how organizations can use AI agents to transform workflows. What you'll hear from the stage. LangChain is bringing teams who are deep in production, running agents at real scale with real consequences. Here's a preview of what's on the agenda: Lyft is talking about how they are building evals around their specific product policies, user flows, and edge cases with LangSmith. Nick Ung from Lyft's Safety and Customer Care team will walk through how they built an evaluation system that actually tells them whether their agents are working, and how they close the feedback loop between failed traces, their ops team, and engineering. Apple is sharing how they built a low-code agent platform serving 15,000+ employees. Their team rethought how LangGraph constructs graphs at runtime to support dynamic, low-code agent building at a scale that required rearchitecting assumptions about graph construction, caching, and context management. LinkedIn is presenting a solution to one of the biggest problems today: recruiting. Recruiting is one of the most time-intensive workflows in any organization, especially for small and mid-size businesses without dedicated hiring teams. LinkedIn's engineering team tackled this head-on by building an AI recruiting agent with LangSmith and LangGraph. 
Now, thanks to their recruiting agent, the team is hiring 10x faster. You'll also hear production stories from the world's largest enterprises, including Toyota, LATAM Airlines, and Honeywell, along with tech-native companies including Coinbase, Chime, Rippling, monday.com, and Clay. Beyond the talks. Interrupt offers two full days designed around learning, building, and connecting. AMAs with product leaders. Sit down with the engineers building LangSmith, Deep Agents, LangGraph, and LangChain. Ask them anything: about the roadmap, about your architecture, about the problem you've been stuck on for weeks. Last year's product announcements came out of conversations exactly like these, and LangChain expects this year to be no different. Demo stations. Get hands-on with the latest across the LangSmith platform. The demo area spans the entire front patio and serves as the central hub of the conference, a place to see what's new, try things out, and talk to the engineers who built them. Workshops. Go deeper with hands-on sessions led by LangChain engineers, covering topics like building Deep Agents and improving agents using LangSmith Align Evals and Insights Agent. These are designed to be practical: bring your laptop and leave with tactics you can actually use. Time with speakers. One of the best parts of last year was the hallway conversations. This year LangChain has built even more space for that: you'll be able to meet with many speakers after their talks at the Ask Me Anything booth. Get your ticket. Interrupt 2026. May 13-14. The Midway, San Francisco. LangChain sold out last year and expects to again.

Product Impact Podcast
Apr 9th, 2026
Anthropic is no longer a model company.

Claude Managed Agents quietly redraws the competitive map for every AI infrastructure vendor.

  • Anthropic launched Claude Managed Agents, shifting from selling model inference to selling a full agent platform.
  • The move puts Anthropic in direct competition with AWS Bedrock, OpenAI Assistants, LangChain, and the agent framework ecosystem.
  • Enterprise switching costs rise dramatically when agent state, tooling, and operations are locked into Anthropic's infrastructure.
  • The unanswered question - who is liable when a managed agent causes real-world harm - will determine enterprise adoption velocity.

Jessica Yan, a product lead at Anthropic, posted on LinkedIn yesterday to announce the public beta of Claude Managed Agents. It is worth reading carefully because it quietly describes the most consequential strategic shift at Anthropic in 2026. "You can now raise the ceiling of agent execution AND launch faster using our stateful APIs, performance-optimized harness, scalable infra, and rich developer tools." - Jessica Yan, Product at Anthropic. Read those four capabilities in order. Stateful APIs. Performance-optimized harness. Scalable infrastructure. Rich developer tools. This is not a model release. This is not a feature expansion. This is Anthropic announcing that it is in the agent platform business - and by extension, in direct competition with AWS Bedrock, Google Vertex AI Agent Builder, OpenAI Assistants API, LangChain, LangGraph, CrewAI, Dust, and every other piece of infrastructure currently hosting agent workloads. Until Monday, building an agent on Claude meant you handled the infrastructure. Starting yesterday, Anthropic handles it for you. That is a business model change, not a feature launch. What actually shifted. Anthropic's business on Monday was selling inference. 
You bought API access to Claude, you handled state management, you wrote the orchestration, you built the monitoring, you scaled the infrastructure, you owned the developer experience. The margin was inference margin. The customer was anyone running a workload. Anthropic's business on Tuesday is selling a platform. You get Claude and the infrastructure to run Claude-powered agents in production. Anthropic captures more of the value chain. The margin is platform margin. The customer is the developer building agent products. Platform margins are higher than inference margins - that's the obvious part. The less-obvious part is stickiness. An enterprise that builds its agent on Claude Managed Agents cannot easily port that agent to a competing model. State, tooling, operational patterns, and incident history all get locked into Anthropic's infrastructure. Switching costs go up dramatically the moment a team's agent is running on Anthropic's harness. This is the move the cloud providers have been waiting to see. It's also the move they've been dreading. Who gets structurally worse this week. The hyperscaler Claude distribution path. AWS Bedrock and Google Vertex host Claude for enterprises that don't want to buy from Anthropic directly. Their value proposition is compliance, existing procurement relationships, and vendor consolidation. All three are real. None beats "the people who built Claude are also running your agent infrastructure for Claude." Every enterprise that was going to run Claude agents through Bedrock or Vertex now has a reason to evaluate going straight to Anthropic instead. The agent framework startups. LangChain, CrewAI, LangGraph, Dust, and a long list of others built their businesses on being the orchestration layer above multiple model providers. Their pitch was: don't lock into one LLM; build with our framework; switch models when you need to. That pitch just got harder. 
Anthropic can now offer deeper integration, better performance tuning, and direct first-party support for Claude-based agents than any third-party framework can match. The frameworks will reposition around multi-model interoperability. That's a harder sell than "we're the best way to build agents." OpenAI's Assistants API. OpenAI built Assistants to keep enterprise developers inside the OpenAI ecosystem. They will now have to respond to every Anthropic Managed Agents capability with an equivalent, while also fighting on ChatGPT Enterprise and the foundation model benchmark treadmill. OpenAI's response will come fast. It will also be reactive, not strategic. Who wins. Anthropic, obviously. They just expanded their addressable market from "developers buying model access" to "developers building agent products." That's a much larger number, at higher margins, with stickier customers. The subtler winner is any enterprise that was paralyzed on build-versus-buy for its agent infrastructure. Managed Agents doesn't eliminate the buy-side risk, but it gives risk-averse buyers a credible vendor-backed option they didn't have on Monday. Expect the number of enterprises that move from "planning an agent platform strategy" to "piloting Anthropic Managed Agents" over the next 90 days to be larger than most analysts expect. The question nobody in the coverage is asking. Here is what's missing from every take this week: when a Claude Managed Agent takes a real-world action that causes a real-world problem, who is responsible? The developer who wrote the agent? The enterprise that deployed it? Anthropic, whose platform is managing the state and executing the action? That question is not answered in Yan's LinkedIn post. It is probably not answered in Anthropic's initial documentation. 
It will be the first thing every enterprise general counsel asks before signing a contract, and it will be the single variable that determines whether Managed Agents gets enterprise traction or stays a developer tool. Anthropic has two ways to handle this. They can write a managed services agreement that places all liability on the customer. That's legally clean and will scare off exactly the enterprises most likely to pay platform prices. Or they can accept operational responsibility for the agents running on their platform. That solves the trust problem and fundamentally changes Anthropic's risk profile as a company. How Anthropic answers this question in their enterprise documentation over the next 30 days will tell you whether they see Managed Agents as a developer acquisition play or as a genuine enterprise platform. Those two paths lead to completely different outcomes in 2027. Three things to watch in the next 30 days. Pricing. Anthropic has not published pricing for Managed Agents yet. Usage-based pricing signals developer targeting. Platform fee plus usage signals enterprise targeting. Whichever they pick will reveal who they're actually selling to. Named reference customers. The first three enterprise reference customers Anthropic cites will tell you whether they have enterprise credibility for this move. Watch the Anthropic blog through mid-May. OpenAI's response. OpenAI will ship something comparable within 60 days. They have to. How fast they respond - and whether it's a feature match or a genuine platform strategy - will tell you how seriously they are taking this. Yesterday Anthropic was a model company with a platform ambition. Today they are a platform company with a model at the center. The difference matters more than most of the coverage this week will capture. About the author: Arpy Dragffy is founder of PH1 Research and co-host of the Product Impact Podcast. 
All claims about competitive positioning in this piece are based on public product documentation from the companies referenced.

B4N1
Apr 3rd, 2026
Create a DSPy pipeline.

Orchestrating AI-Driven DevOps Pipelines with Enhanced Compliance Automation. Introduction. In today's fast-paced digital landscape, organizations are under immense pressure to deliver high-quality software quickly and efficiently while ensuring compliance with regulatory requirements. DevOps pipelines have become a crucial component in achieving this goal, but traditional approaches often fall short of providing the level of automation and compliance required. This is where AI-powered agents come into play, changing the way we approach DevOps pipeline orchestration and compliance automation. The Challenge of Traditional DevOps Pipelines. Traditional DevOps pipelines rely on manual processes and scripting to automate tasks, which can lead to: * Inefficiency: Manual processes are prone to errors and can be time-consuming. * Inconsistency: Scripts can become outdated and may not account for changing infrastructure or compliance requirements. * Lack of Visibility: It can be difficult to track the status of pipeline execution and identify bottlenecks. Introducing Kiro AI-Powered Agents. Kiro AI-powered agents are designed to address these challenges by providing a more efficient, consistent, and transparent approach to DevOps pipeline orchestration. These agents use machine learning to analyze infrastructure and compliance data, enabling them to make informed decisions and automate tasks. Integrating Kiro Agents with DSPy and LangChain. To create an autonomous infrastructure vision and a continuous compliance automation framework, we will integrate Kiro AI-powered agents with DSPy and LangChain. * DSPy: DSPy is a Python framework for building language-model pipelines from declarative modules rather than hand-written prompts, making it a good fit for orchestrating Kiro agents. * LangChain: LangChain is a Python framework for building applications powered by large language models. 
It provides a range of tools and APIs for working with language models, making it an excellent choice for integrating with Kiro agents.

Practical Code Example. Here is a simple code example that demonstrates how the integration might look (note: the kiro package and the Pipeline/Task/create_task constructors shown here are simplified for illustration; the real DSPy and LangChain APIs differ):

  import dspy
  from langchain import LLMChain
  from kiro import KiroAgent

  # Create a DSPy pipeline and a Kiro agent
  pipeline = dspy.Pipeline()
  agent = KiroAgent()

  # Define a LangChain LLMChain
  llm_chain = LLMChain(
      model_name="code-davinci-002",
      max_length=2048,
      temperature=0.7,
  )

  # Define a DSPy task
  task = dspy.Task(
      name="example_task",
      description="Example task",
      pipeline=pipeline,
  )

  # Define a Kiro agent task
  agent_task = agent.create_task(
      name="example_agent_task",
      description="Example agent task",
      pipeline=pipeline,
  )

  # Define a LangChain task
  lang_chain_task = llm_chain.create_task(
      name="example_lang_chain_task",
      description="Example LangChain task",
      pipeline=pipeline,
  )

  # Add all tasks to the pipeline and run it
  pipeline.add_task(task)
  pipeline.add_task(agent_task)
  pipeline.add_task(lang_chain_task)
  pipeline.run()

This example creates a DSPy pipeline, a Kiro agent, and a LangChain LLMChain; defines a task for each; adds the tasks to the pipeline; and runs it.

Benefits of Integration. The integration of Kiro AI-powered agents with DSPy and LangChain provides several benefits, including: * Autonomous Infrastructure Vision: Kiro agents can analyze infrastructure data and make informed decisions about pipeline execution, reducing the need for manual intervention. * Continuous Compliance Automation: Kiro agents can analyze compliance data and automate tasks to ensure continuous compliance with regulatory requirements. * Improved Efficiency: The integration of Kiro agents with DSPy and LangChain enables more efficient pipeline execution and reduces the need for manual scripting. 
* Increased Visibility: The integration provides real-time visibility into pipeline execution and identifies bottlenecks, enabling organizations to make data-driven decisions.

Most Used Commands/Snippets. Here are the most used commands/snippets for this stack:
  * dspy.Pipeline: Creates a new DSPy pipeline.
  * kiro.KiroAgent: Creates a new Kiro agent.
  * langchain.LLMChain: Creates a new LangChain LLMChain.
  * dspy.Task: Creates a new DSPy task.
  * kiro.KiroAgent.create_task: Creates a new Kiro agent task.
  * langchain.LLMChain.create_task: Creates a new LangChain task.
  * pipeline.add_task: Adds a task to the pipeline.
  * pipeline.run: Runs the pipeline.
  * kiro.KiroAgent.analyze: Analyzes infrastructure data.
  * langchain.LLMChain.generate: Generates text based on input.

Visualization. Here is a simple visualization of the pipeline as a Chart.js configuration:

  {
    "type": "bar",
    "data": {
      "labels": ["DSPy", "Kiro Agent", "LangChain"],
      "datasets": [{
        "label": "Pipeline Execution Time",
        "data": [10, 20, 30],
        "backgroundColor": ["#FF6384", "#36A2EB", "#FFCE56"]
      }]
    },
    "options": {
      "title": {"display": true, "text": "Pipeline Execution Time"},
      "scales": {"yAxes": [{"ticks": {"beginAtZero": true}}]}
    }
  }

This visualization shows the execution time of each task in the pipeline. Conclusion. The integration of Kiro AI-powered agents with DSPy and LangChain provides a powerful framework for autonomous infrastructure vision and continuous compliance automation. By leveraging the strengths of each technology, organizations can create more efficient, consistent, and transparent DevOps pipelines that ensure continuous compliance with regulatory requirements. Have questions? Let's talk about how to apply these technical concepts to your workflow.
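Because the article's snippet relies on hypothetical packages, the underlying "define tasks, add them to a pipeline, run the pipeline" pattern can also be shown in dependency-free Python (the Task/Pipeline classes and task names below are illustrative, not a real library):

```python
from typing import Callable, Dict, List

class Task:
    """A named unit of work; the action stands in for an agent or LLM step."""
    def __init__(self, name: str, action: Callable[[], str]):
        self.name = name
        self.action = action

class Pipeline:
    """Runs registered tasks in order and collects results for visibility."""
    def __init__(self) -> None:
        self.tasks: List[Task] = []

    def add_task(self, task: Task) -> None:
        self.tasks.append(task)

    def run(self) -> Dict[str, str]:
        # Execute tasks in insertion order; the result map gives the
        # per-task visibility the article describes.
        return {t.name: t.action() for t in self.tasks}

pipeline = Pipeline()
pipeline.add_task(Task("lint", lambda: "passed"))
pipeline.add_task(Task("compliance_check", lambda: "compliant"))
pipeline.add_task(Task("deploy", lambda: "deployed"))
results = pipeline.run()
```

In a real system each lambda would call out to an agent, a CI step, or a compliance scanner, and the result map would feed monitoring.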

Business Wire
Mar 30th, 2026
DataCamp partners with LangChain to launch AI engineering learning track for developers

DataCamp has partnered with LangChain to launch an AI Engineering with LangChain learning track, targeting software developers building production-ready AI applications. The curriculum covers application development, evaluation, retrieval-augmented generation, tool use and agent-based systems using LangChain's framework for large language models. Designed for developers with Python experience, the track uses DataCamp's AI-native learning platform, featuring an AI Tutor that provides real-time feedback. The course incorporates LangSmith, LangGraph and LangChain tools to create an interactive, "learn by building" approach. DataCamp CEO Jo Cornelissen said AI engineering skills have become an urgent need for building reliable, production-ready systems. The track is available now on datacamp.com. DataCamp supports over 6,000 organisations and 18 million learners globally.

CSO Online
Mar 30th, 2026
LangChain path traversal bug adds to input validation woes in AI pipelines.

The path traversal flaw, allowing access to arbitrary files, adds to a growing set of input validation issues in AI pipelines. Security researchers are warning that applications using AI frameworks without proper safeguards can expose sensitive information in basic, yet critical, non-AI ways. According to a recent Cyera analysis, the widely used AI orchestration tools LangChain and LangGraph are vulnerable to critical input validation flaws that could allow attackers to access sensitive enterprise data. In a recent blog post, the cybersecurity company outlined how a newly discovered flaw in LangChain, along with two similarly themed, previously reported ones, can be exploited to retrieve different categories of data, including local files, API keys, and stored application state. "The biggest threat to your enterprise AI data might not be as complex as you think," Cyera researchers said in the post. The issues often hide in the "invisible, foundational plumbing" that connects AI to business workflows, the researchers argued, adding that all the flaws have now been fixed by the tools' maintainers but that the patches need to be applied immediately across integrations to avoid impact. Path traversal becomes the latest in a series of input validation bugs. Cyera disclosed a new path traversal vulnerability and analyzed it alongside two previously reported flaws, showing how each maps to specific components in LangChain and LangGraph and enables access to a different class of data. The path traversal issue, tracked as CVE-2026-34070, arises from how a LangChain feature resolves file paths when loading prompt templates or external resources. By supplying crafted input, an attacker can traverse directories and read arbitrary files from the host system, potentially exposing configuration files and credentials. The flaw received a severity rating of CVSS 7.5 out of 10. 
One of the older flaws, an unsafe deserialization flaw identified as CVE-2025-68664, stemmed from the handling of serialized objects within the LangChain framework. The issue lets an application process untrusted serialized data, allowing an attacker to inject malicious payloads interpreted as trusted objects, enabling access to sensitive runtime data such as API keys and environment variables. The flaw had received a critical 9.3/10.0 rating when it was disclosed in December 2025. The other older flaw, an SQL injection vulnerability in LangGraph's checkpointing mechanism, was found to allow manipulation of backend queries. Exploiting this flaw could grant access to stored application data, including conversation history and workflow state tied to AI agents. Tracked with the CVE ID CVE-2025-67644, the flaw was assigned a high-severity rating of CVSS 7.3 out of 10. Together, Cyera researchers pointed out, the three flaws (along with others of the kind) highlight how widely used AI frameworks can expose different layers of enterprise data, effectively turning LangChain and LangGraph into a new attack surface. Back to the basics. The exploit technique described in the report relies on insufficient input validation and unsafe handling of data across key integration points in AI pipelines. In each case, attacker-controlled input, whether through prompts, serialized payloads, or query parameters, can influence how the framework interacts with the filesystem or database. For the most recent path traversal bug, the risk is driven by a lack of strict path validation and sandboxing. Mitigations include enforcing allowlists for file access and restricting directory boundaries. In the case of deserialization, the issue lies in treating external data as trusted. Cyera recommends avoiding unsafe deserialization methods and ensuring that only validated, expected data structures are processed. 
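The path-validation guidance above (restricting directory boundaries) can be sketched with Python's pathlib: resolve the requested path and confirm it stays inside an allowed base directory. This is a minimal sketch of the general mitigation, not LangChain's actual fix; the function name is ours:

```python
from pathlib import Path

def safe_resolve(base_dir: str, user_path: str) -> Path:
    """Resolve user_path under base_dir, rejecting directory traversal."""
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    # A traversal attempt (e.g. "../../etc/passwd") resolves to a path
    # outside the base directory, so we refuse it.
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes {base}: {user_path}")
    return candidate
```

A template loader would then call `safe_resolve(templates_dir, requested_name)` before opening any file, so crafted input cannot reach configuration files or credentials elsewhere on the host. (`Path.is_relative_to` requires Python 3.9+.)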
For SQL injection, the company recommended using parameterized queries and strengthening input sanitization. Across all three cases, the guidance aligned with established secure coding practices. Senior Writer Shweta has been writing about enterprise technology since 2017, most recently reporting on cybersecurity for CSO Online.
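The parameterized-query recommendation can be sketched with Python's built-in sqlite3 module. The checkpoint table here is hypothetical, loosely modeled on an agent-checkpoint store, and is not LangGraph's actual schema:

```python
import sqlite3

# In-memory database with a toy checkpoint table (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, state TEXT)")
conn.execute("INSERT INTO checkpoints VALUES (?, ?)", ("t1", "step=3"))

def load_state(thread_id: str) -> list:
    # The "?" placeholder binds thread_id as data, never as SQL text,
    # so input like "t1' OR '1'='1" cannot alter the query structure.
    cur = conn.execute(
        "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
    )
    return cur.fetchall()
```

With string concatenation instead of the placeholder, the injection string would rewrite the WHERE clause and dump every row; with binding, it is just a thread ID that matches nothing.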

INACTIVE