Full-Time

Customer Success Engineer II

Posted on 9/20/2025

Solo.io

201-500 employees

Cloud-native API management and service mesh

No salary listed

Remote in India

Remote role for applicants based in India

Category
Sales & Account Management
Sales & Solution Engineering
Required Skills
Kubernetes
JavaScript
Node.js
Java
Go
Requirements
  • At least two years of hands-on experience with Kubernetes
  • Working knowledge of Go, Java, Node.js, or a related modern programming language
  • Ability to dig into code to troubleshoot
  • Strong written and verbal communication skills
Responsibilities
  • Provide daily expert guidance to existing customers
  • Develop strong relationships with customers, sales, and product teams
  • Proactively guide customers through their architectural and product setup and decisions
  • Troubleshoot and resolve complex issues in collaboration with Developers and Field Engineers
  • Contribute to Solo.io projects by submitting issues, pull requests, and documentation
  • Stay current with fast-moving additions to Solo.io products and related technologies to retain domain expertise and credibility
  • Manage stressful situations with calm professionalism, empathy, and compassion
  • Actively engage with the Solo.io community and related open source projects to foster adoption and grow external contributions

Solo.io builds cloud-native network tooling for API management and service mesh. Its products include Gloo Gateway, a Kubernetes-native gateway that routes API traffic and enforces security; Gloo Mesh, a service mesh that provides lifecycle management and telemetry using Istio and Cilium; and the Spotlight Developer Platform, a secure internal platform with multi-cluster support and plugins. The company monetizes through subscriptions, consulting, and partnerships with technology and cloud providers, including resellers. Its goal is to help organizations accelerate digital transformation by making API management and cloud-native networking easier, safer, and observable across multi-cluster environments.

Company Size

201-500

Company Stage

Series C

Total Funding

$171.5M

Headquarters

Cambridge, Massachusetts

Founded

2017

Simplify Jobs

Simplify's Take

What believers are saying

  • Agentevals fills agentic AI evaluation gap, boosting Gloo Platform adoption.
  • Four-layer AI stack—kagent, agentgateway, agentregistry, agentevals—creates ecosystem moat.
  • $171.5M funding including Series C from Altimeter fuels AI infrastructure expansion.

What critics are saying

  • Istio overtakes Gloo Mesh with Google enhancements in 12-24 months.
  • Kong Kuma captures Fortune 2000 customers via multi-cloud support in 6-12 months.
  • CNCF agentregistry forks integrate Traefik, bypassing Gloo lock-in in 18-24 months.

What makes Solo.io unique

  • Gloo Platform combines API management with service mesh using Istio and Cilium.
  • Agentevals benchmarks agentic AI reliability via OpenTelemetry at KubeCon Europe.
  • Agentregistry donation to CNCF standardizes AI agent governance with Kubernetes.


Benefits

Remote Work Options

Flexible Work Hours

Growth & Insights and Company News

Headcount

6 month growth

-1%

1 year growth

0%

2 year growth

-1%
The New Stack
Mar 28th, 2026
Solo.io launches agentevals to solve agentic AI's "biggest unsolved problem"

Solo.io launches agentevals, an open-source framework for evaluating agentic AI systems, announced at KubeCon Europe alongside a CNCF agent registry donation.

So many agents, so little time to evaluate them. Solo.io's new projects can help. Agentic AI has blown up; these tools have become hotter than hot. But there's one little problem: how do you evaluate them? Solo.io, best known for its cloud-native networking and API gateway platform, Gloo, has launched a new open-source initiative called agentevals, designed to help developers evaluate and benchmark agentic AI systems. Solo.io announced the project at KubeCon Europe in Amsterdam.

According to Solo.io founder and CEO Idit Levine, autonomous AI systems pose new challenges for cloud operations. "Enterprises are experimenting with AI copilots and infrastructure agents, but they lack visibility into how these systems behave when given open-ended goals. AgentBench helps teams understand not only what the models can do, but where their reasoning breaks down," Levine tells The New Stack.

Levine continues, "Evaluation is the biggest unsolved problem in agentic infrastructure today. Organizations have frameworks for building agents, gateways for connecting them, and registries for governing them, but no consistent way to know whether an agent is actually reliable enough to trust in production." Aye, there's the rub.

Agentevals provides a framework for testing the effectiveness of AI agents in real-world workflows, such as infrastructure automation, API orchestration, and service management. The goal is to give enterprise teams a standardized way to measure the reliability, latency, and success rates of autonomous agents before deploying them in production.

The framework integrates with Solo.io's Gloo Platform and Envoy Proxy, which lets you simulate multi-step tasks, such as configuring microservices, updating routing policies, or troubleshooting Kubernetes clusters, under controlled conditions. Each run generates reproducible logs, metrics, and outcome data that can be used to compare different AI backends or agent architectures. The company claims that agentevals is the first benchmark designed to evaluate LLM-as-Agent across a diverse spectrum of environments. To do this, the program relies on OpenTelemetry.

Solo says the open-source project is part of a broader effort to make AI-driven operations auditable and trustworthy. Levine says, "Whether you're using commercial APIs or open LLMs like Llama 3, you need transparent metrics for decision-making. We want agentevals to become a common reference point for the AI operations community."

Agentevals is available on GitHub under the Apache 2.0 license. Solo.io plans to collaborate with other cloud-native vendors and AI research groups to expand the test library and integrate with common ML evaluation tools. In addition, Solo.io donated its agentregistry, an AI-native open source registry for AI agents, MCP tools, and Agent Skills, to the Cloud Native Computing Foundation (CNCF). The registry standardizes how AI capabilities are catalogued, discovered, and governed across the enterprise. As everyone and their uncle swiftly moves to agentic computing, I expect both programs will find many fans.
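The article says each benchmark run produces reproducible metrics and outcome data for comparing AI backends. As an illustration only (this is not agentevals' actual API or record schema; the field names and function are hypothetical), a minimal Python sketch of aggregating per-backend success rate and latency from such run records might look like:

```python
from collections import defaultdict
from statistics import mean

def summarize_runs(runs):
    """Aggregate hypothetical benchmark run records by backend.

    Each run record is assumed to look like:
        {"backend": "llama-3", "success": True, "latency_ms": 840}
    Returns per-backend success rate and mean latency, the kind of
    roll-up one would use to compare AI backends across identical tasks.
    """
    grouped = defaultdict(list)
    for run in runs:
        grouped[run["backend"]].append(run)
    return {
        backend: {
            "success_rate": sum(r["success"] for r in records) / len(records),
            "mean_latency_ms": mean(r["latency_ms"] for r in records),
        }
        for backend, records in grouped.items()
    }

# Example: two backends run the same multi-step task several times.
runs = [
    {"backend": "llama-3", "success": True, "latency_ms": 840},
    {"backend": "llama-3", "success": False, "latency_ms": 1210},
    {"backend": "gpt-4o", "success": True, "latency_ms": 640},
]
print(summarize_runs(runs))
```

The design point, per the article, is reproducibility: because every run emits the same structured outcome data, comparisons across backends or agent architectures reduce to straightforward aggregation like this.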

Techstrong Group
Mar 25th, 2026
Solo.io launches agentevals open source project, contributes agentregistry to CNCF

AMSTERDAM - Solo.io announced the launch of agentevals, a new open source project for evaluating and benchmarking agentic AI behavior, and the contribution of its agentregistry project to the Cloud Native Computing Foundation at KubeCon + CloudNativeCon Europe 2026. The company said the two initiatives address gaps in production reliability and governance for agentic AI workloads.

Solo.io said agentevals uses OpenTelemetry to capture and correlate individual invocations from distributed agentic interactions, then scores them against golden evaluation sets using an extensible evaluation engine. The project supports offline and online evaluation modes, ships with built-in evaluators for trajectory matching and LLM-as-judge scoring, and includes a CLI, web interface, and Model Context Protocol server. The company said the tool works with any model and framework that emits OpenTelemetry spans, with no requirement for agent reruns.

"Evaluation is the biggest unsolved problem in agentic infrastructure today," said Idit Levine, founder and CEO of Solo.io. "Organizations have frameworks for building agents, gateways for connecting them, and registries for governing them, but no consistent way to know whether an agent is actually reliable enough to trust in production."

The agentregistry project, originally introduced by Solo.io in November 2025, provides a centralized registry where AI agents, MCP tools, and agent skills are catalogued, discovered, and governed. Solo.io said the contribution to CNCF governance will enable community growth alongside kagent, a CNCF sandbox project for running AI agents in Kubernetes, and agentgateway, which is housed in the Linux Foundation. The registry integrates with Kubernetes, AWS AgentCore, and Google Vertex AI for deployment, and includes runtime discovery to detect agents deployed outside governed workflows.
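The article describes scoring recorded agent interactions against "golden evaluation sets" with a built-in trajectory-matching evaluator. Purely as a sketch of that idea (the class and scoring function here are hypothetical, not agentevals' actual evaluator), one simple form of trajectory matching compares the ordered step names recovered from a trace against a golden step list:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Span:
    """A stand-in for a recorded trace span naming one agent step/tool call."""
    name: str
    status: str = "OK"

def trajectory_match(recorded: List[Span], golden: List[str]) -> Dict[str, float]:
    """Score a recorded agent trajectory against a golden step list.

    exact:    1.0 only if the recorded step names equal the golden list.
    in_order: fraction of golden steps found as an ordered subsequence
              of the recorded steps (tolerates retries and extra steps).
    """
    names = [span.name for span in recorded]
    exact = 1.0 if names == golden else 0.0
    matched = 0
    for name in names:
        if matched < len(golden) and name == golden[matched]:
            matched += 1
    in_order = matched / len(golden) if golden else 1.0
    return {"exact": exact, "in_order": in_order}

# Example: the agent retried a fetch, so the exact match fails but the
# golden steps still appear in order.
recorded = [Span("fetch_config"), Span("retry_fetch"),
            Span("apply_route"), Span("verify")]
print(trajectory_match(recorded, ["fetch_config", "apply_route", "verify"]))
```

A looser "in order" score alongside the strict one mirrors why offline evaluation over captured spans is attractive: the same recorded trajectory can be re-scored under different matching policies with no agent reruns, as the article notes.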

The Associated Press
Mar 25th, 2026
Solo.io launches agentevals open source project for agentic AI evaluation and reliability

Solo.io has launched agentevals, an open source project for evaluating and benchmarking agentic AI behaviour across any model or framework. The company also contributed its agentregistry project to the Cloud Native Computing Foundation to address governance gaps in agentic infrastructure.

Agentevals leverages OpenTelemetry to capture distributed agentic interactions and scores them against evaluation sets using an extensible engine. It offers offline and online evaluation modes, zero-code integration, built-in evaluators, and a community registry for custom scoring logic.

Agentregistry provides a centralised registry for AI agents, MCP tools, and agent skills, enabling standardised cataloguing and governance. It integrates with platforms including Kubernetes, AWS AgentCore, and Google Vertex AI. Solo.io now offers four open source AI infrastructure layers: kagent framework, agentgateway, agentregistry, and agentevals.

DEVOPSdigest
Apr 24th, 2025
Solo.io Launches Agent Gateway and Introduces Agent Mesh

Solo.io launches Agent Gateway and introduces Agent Mesh.

Dolphin Publications
Apr 3rd, 2025
Solo.io introduces MCP Gateway to smoothen AI agent integration

Solo.io is launching MCP Gateway within kgateway for this purpose.

INACTIVE