Full-Time

Cloud Solutions Architect-OCI

Posted on 12/9/2025

WekaIO

501-1,000 employees

AI-native data platform for high-performance workloads

No salary listed

Remote in USA

Category
Sales & Solution Engineering
Required Skills
Machine Learning
Requirements
  • Deep expertise in OCI services, ecosystem, and operations
  • Hands-on experience is a must
  • Strong general understanding of cloud operating models
  • Experience migrating workflows from on-premises environments to OCI
  • Background in the storage aspect of cloud workflows and the challenges of moving large volumes of data into the public cloud
  • Experience with hybrid operating models that leverage the public cloud for part of an overall workflow
  • Outstanding business acumen with the ability to translate business requirements into technical solutions
  • Proven ability to add value to consultative, multi-skilled teams consisting of customers and partners
  • A dedicated customer-service orientation, a strong sense of initiative, a positive attitude, and a collaborative working style
  • Able to handle a fast-paced environment and continuously re-prioritize while maintaining a constant focus on driving successful outcomes
  • Excellent verbal and written communication skills
  • Experience with HPC storage and/or software-defined enterprise NAS platforms – an advantage
  • Prior experience working at a start-up company – an advantage
Responsibilities
  • Serve as an OCI Subject Matter Expert for WEKA’s Product team
  • Advocate for WEKA customers and help drive the future success of WEKA in OCI by shaping roadmap activities and prioritizing features and deliverables
  • Work closely with the R&D teams to build prototypes and integrations within the OCI ecosystem based on WEKA’s products
  • Evangelize Neural Mesh by WEKA for OCI across the industry, including at conferences, and directly to prospects, customers, and the partner community
  • Promote OCI best practices and successful outcomes throughout the WEKA sales organization
  • Develop relationships with customers and partners to understand customer needs better and expand the reach of the WEKA OCI solutions
  • Collaborate with sales teams on strategies to help ensure the business value of WEKA in OCI is understood throughout the sales campaign and realized upon implementation
  • Participate in sales campaigns in a technical architect role, leading the consultative sales process, analyzing account needs, and helping to develop solutions that exceed their expectations
  • Actively participate and lead in Proof of Value engagements for WEKA in OCI to help ensure successful outcomes
  • Build and deliver technical product and architecture presentations and demos
  • Become an SME on WEKA and its ecosystem
  • Occasionally provide assistance to the WEKA Customer Success organization

WEKA provides an AI-native data platform that is cloud- and hardware-agnostic for performance-intensive AI, ML, and GPU workloads. The WEKA Data Platform brings data from on-premises, cloud, edge, and multicloud into dynamic pipelines to speed up data access and processing. It differentiates by offering a single scalable data platform across infrastructure footprints with emphasis on energy efficiency and high performance at scale. The goal is to replace data silos with unified, fast storage and data management to accelerate discoveries and insights.

Company Size

501-1,000

Company Stage

Series E

Total Funding

$465.1M

Headquarters

Campbell, California

Founded

2013

Simplify Jobs

Simplify's Take

What believers are saying

  • NVIDIA partnerships certify WEKApod Nitro for DGX SuperPOD, accelerating AI factory adoption.
  • 30% Fortune 50 adoption drives $1.6B valuation post-$140M Series E in May 2024.
  • BlueField-4 integration promises 100x tokens/watt efficiency for agentic AI by early 2026.

What critics are saying

  • NVIDIA DGX SuperPOD in-house storage locks customers into ecosystem, eroding WEKA share by 2027.
  • VAST Data FlashBox undercuts WEKApod economics, capturing exabyte inference market in 12 months.
  • Pure Storage Portworx poaches Fortune 50 clients with superior Kubernetes integration immediately.

What makes WekaIO unique

  • NeuralMesh™ storage system accelerates AI with microsecond latency and self-healing at exabyte scale.
  • WEKApod Prime delivers 65% better price-performance using AlloyFlash mixed-flash without throttling.
  • NeuralMesh AIDP deploys NVIDIA AI Data Platform in minutes for 6.5x more tokens per GPU.

Benefits

Health Insurance

Dental Insurance

Vision Insurance

Life Insurance

401(k) Retirement Plan

401(k) Company Match

Unlimited Paid Time Off

Flexible Work Hours

Growth & Insights and Company News

Headcount

6 month growth

0%

1 year growth

0%

2 year growth

3%
StorageNewsletter
Mar 19th, 2026
Nvidia GTC 2026: Quobyte appoints storage industry veteran Andrew Perry as Vice president of Sales to drive global expansion.

Andrew brings 20+ years of enterprise storage expertise and a proven track record of building world-class sales organizations and scaling high-performance teams through major industry shifts. Quobyte, a provider of high-performance storage architected for AI, announced the appointment of Andrew Perry as Vice President of Sales. Perry, a seasoned sales executive with over two decades of experience in the enterprise storage and data management sectors, will lead Quobyte's global sales strategy as the company enters its next phase of hyper-growth.

Perry joins Quobyte from ScienceLogic, where he served as Vice President of Sales for the Americas, consistently driving record-breaking revenue and expanding the company's footprint across the enterprise and public sectors. His career also includes senior leadership roles at Weka, SpringPath, and Violin Memory, where he was instrumental in scaling high-performance sales teams and navigating the transition from legacy hardware approaches to modern, cloud-native software architectures.

"The storage market is at a massive inflection point. Organizations are moving away from the rigid, expensive hardware silos of the past toward the flexibility and performance of software-defined infrastructure architected for AI," said Björn Kolbeck, CEO and co-founder, Quobyte. "Andrew's track record of building world-class sales organizations and his deep understanding of the high-performance storage landscape make him the perfect leader to accelerate our momentum. His arrival marks a significant milestone as we challenge the incumbents and redefine what's possible in the AI era."

Perry's appointment comes at a time of significant disruption in the storage industry.
While competitors like Pure Storage, Weka, and Vast Data have highlighted the demand for flash and scale-out architectures, Quobyte's unique software-only, ultra-resilient scale-out approach provides a level of hardware independence and operational simplicity that legacy-bound vendors cannot match.

"Quobyte is the best-kept secret in the storage industry, and that is about to change," said Andrew Perry, VP, Sales, Quobyte. "Having spent my career at the forefront of innovation, I've seen how difficult it is for enterprises to manage massive growth in unstructured data without sacrificing performance or breaking the budget. Quobyte has solved the 'impossible' trade-off between extreme scale and ease of management. I am thrilled to join this world-class team and help our customers build the data foundations they need for the next generation of AI and high-performance computing."

At Quobyte, Perry will oversee all aspects of the global sales organization, including direct sales, channel partnerships, and strategic alliances. His immediate focus will be on expanding Quobyte's presence in key verticals such as neoclouds, financial services, M&E, life sciences, and research: sectors where Quobyte's ability to deliver linear scalability and 100% uptime is a critical competitive advantage.

The Associated Press
Mar 17th, 2026
Spectro Cloud and WEKA partner to simplify NVIDIA AI Data Platform deployment with one-click automation

Spectro Cloud and WEKA have partnered to simplify deployment of the NVIDIA AI Data Platform, a reference architecture integrating NVIDIA-accelerated computing, networking and AI-ready storage. The collaboration combines Spectro Cloud's PaletteAI platform with WEKA's NeuralMesh storage solution to streamline AI infrastructure deployment across data centres and edge locations. The integration enables one-click deployment of validated AI data platform stacks incorporating NVIDIA BlueField DPUs, NVIDIA Spectrum-X Ethernet networking and NVIDIA AI Enterprise software. WEKA's NeuralMesh delivers ultra-low-latency, high-throughput data access designed to keep GPUs continuously fed for training and inference. The solution, validated with platforms from companies like Supermicro, is available now through both companies' sales teams and authorised partners. WEKA is trusted by 30% of the Fortune 50.

The Straits Times
Mar 16th, 2026
WEKA Accelerates AI Factory Deployment Times From Months to Minutes with Turnkey NVIDIA AI Data Platform Solution

Published Mar 17, 2026, 06:00 AM

New NeuralMesh AI Data Platform Closes the Gap Between AI Proof-of-Concept and Profitable Production, Delivering Scalable Business Intelligence and Faster AI Outcomes with NVIDIA

SAN JOSE, Calif. and CAMPBELL, Calif., March 17, 2026 /PRNewswire/ - From GTC 2026: WEKA, the AI storage and memory systems company, today announced general availability of its enterprise-ready NeuralMesh(TM) AI Data Platform (AIDP), which delivers composable, high-performance infrastructure optimized for AI Factory deployments. Based on the NVIDIA AI Data Platform reference design, the solution is an end-to-end system that accelerates the delivery of AI-ready data to AI factories. The result: AI project timelines speed up from months to minutes, empowering organizations to deliver production-scale agentic AI applications using best-in-class technologies across their ecosystem.

WEKA and NVIDIA accelerate enterprise-ready AI factories

Leveraging NeuralMesh's uniquely adaptive architecture, the solution addresses the most persistent obstacle in enterprise AI: organizations can demonstrate that AI concepts work in proof-of-concept (POC) but consistently struggle to reach production scale. Built on more than 170 patents and over a decade of AI-native storage innovation, a foundation no competing storage platform can replicate, NeuralMesh is the only solution that gets faster and more resilient as AI environments scale to exabytes and beyond. As AI Factory data infrastructure becomes a critical layer in enterprise AI architecture, NeuralMesh is helping customers close the gap between POC and production deployments today. Customers running NeuralMesh with Augmented Memory Grid(TM) can achieve 6.5x more tokens per GPU for inference workloads, reflecting the compounding advantage of a purpose-built architecture over retrofitted infrastructure.
"Enterprises are now deploying AI Factories internally, driving a major shift to inference throughout the ecosystem. These companies require rapid AI outcomes and need turnkey solutions that come with the enterprise table-stakes of reliability, security, and optimal price-performance and cost-effectiveness," said Liran Zvibel, cofounder & CEO at WEKA. "WEKA's NeuralMesh AIDP gives organizations everything they need to run always-on AI factories: extreme storage performance and the flexible architecture required to operationalize AI at production scale. Whether an organization is just beginning its AI journey or running full-stack NVIDIA deployments, NeuralMesh AIDP scales seamlessly as they grow."

"The deployment of agentic AI in production demands a new focus on managing the continuous, coherent flow of data and inference context," said Jason Hardy, vice president, storage technologies at NVIDIA. "By leveraging the NVIDIA AI Data Platform, solutions like WEKA's NeuralMesh AIDP deliver the persistent context tier necessary for stable and high-scale agentic inference."

One System, Every AI Workload: Delivering End-to-End AI Factories

AI factories provide enterprises with purpose-built production systems designed to operate AI at scale, but they demand storage capabilities that extend beyond where data sits to actively support context and continuous data movement. NeuralMesh, WEKA's intelligent, adaptive storage system, delivers the continuous data-loop performance that AI factory workloads demand.

Out-of-the-Box AI Applications Designed to Accelerate Business Outcomes

NeuralMesh AIDP enables enterprises and AI cloud providers to unify AI operations from retrieval to inference on a single, ready-to-deploy platform.
With pre-integrated hardware and software options from NVIDIA (including NVIDIA RTX 6000 PRO Server Edition GPUs and the newly announced NVIDIA RTX 4500 PRO Server Edition GPUs) alongside Red Hat, Spectro Cloud, and Supermicro, organizations can eliminate months of AI integration work. The platform provides a simplified solution that allows teams to focus on intelligence output rather than managing underlying infrastructure. It delivers ready-to-use pipelines for a spectrum of business use cases that work across verticals, including Semantic Search, Video Search & Summarization (VSS), AlphaFold for drug discovery, AIQ/Agentic RAG, and more. These AI applications are already being used by enterprise and research customers to drive outcomes across high-priority sectors:
  • Health & Life Sciences: Identify patient subgroups across multiple studies and accelerate discovery in data-intensive workflows such as cryo-EM.
  • Financial Services: Get early market signal detection as data lands and institutionalize knowledge access into a shared, secure resource.
  • Public Sector: Detect potential threats based on context and meaning, not keywords, and automate evidence synthesis across sources to improve decision-making cycles.
  • Physical AI & Robotics: Shorten the loop from real-world data capture to retrained model deployment, improving fleet performance, reliability, and time to market.

"The missing piece in production AI isn't reasoning models or compute power. It's having an efficient platform that unifies the AI Factory pipeline and makes it truly scalable," said Shimon Ben-David, CTO at WEKA. "The NeuralMesh AIDP was designed to close AI's production and profitability gap, taking enterprise experiments to full-scale operations and making AI economically viable for everything from next-generation agents to healthcare applications."

Supporting Partner & Customer Quotes

"Getting AI to production requires more than technology - it requires consistency and control.
By using the NeuralMesh AI Data Platform with Red Hat AI Enterprise, based on Red Hat OpenShift, organizations can run data-intensive AI pipelines across on-premises and cloud environments at the scale that enterprise production demands, without sacrificing governance or security," said Ryan King, vice president, AI and Infrastructure Partners at Red Hat.

"The real challenge in AI is no longer training models. It is running them reliably in production, at scale, with predictable performance and cost. That's where most AI initiatives stall. The NeuralMesh AI Data Platform integrates with our AI Acceleration Cloud, Neysa Velocis, to solve that problem directly. It gives teams a way to run AI workloads as dependable systems, without carrying the operational burden of stitching together complex infrastructure," said Anindya Das, cofounder and CTO at Neysa.

Availability

The NeuralMesh AI Data Platform solution is available now, delivered as an appliance-style system. Organizations can learn more at weka.io/nvidia or visit WEKA at GTC 2026, booth #1034 for a demo.

About WEKA

WEKA is transforming how organizations build, run, and scale AI workflows with NeuralMesh(TM) by WEKA(R), its intelligent, adaptive mesh storage system. Unlike traditional data infrastructure, which becomes slower and more fragile as workloads expand, NeuralMesh becomes faster, stronger, and more efficient as it scales, dynamically adapting to AI environments to provide a flexible foundation for enterprise AI and agentic AI innovation. Trusted by 30% of the Fortune 50, NeuralMesh helps leading enterprises, AI cloud providers, and AI builders optimize GPUs, scale AI faster, and reduce innovation costs. Learn more at www.weka.io or connect with WEKA on LinkedIn and X. WEKA and the W logo are registered trademarks of WekaIO, Inc. Other trade names herein may be trademarks of their respective owners.
WEKA: The Foundation for Enterprise AI

The issuer is solely responsible for the content of this announcement.

Spectro Cloud
Mar 16th, 2026
Spectro Cloud and WEKA partner to bring data closer to AI workloads, accelerating time to enterprise value.

New integration pairs PaletteAI(TM) one-click deployment and full lifecycle management with NeuralMesh(TM) by WEKA(R) to streamline AI Data Platform deployments across data center and edge, in collaboration with NVIDIA

SAN JOSE and CAMPBELL, Calif. - March 16, 2026 - Ahead of NVIDIA GTC, Spectro Cloud and WEKA announced a partnership to simplify and accelerate the deployment of the NVIDIA AI Data Platform, a next-generation reference architecture that integrates NVIDIA-accelerated computing, networking, and AI-ready storage to deliver high-throughput, low-latency data pipelines for AI workloads. The collaboration combines Spectro Cloud's PaletteAI(TM) platform for automated, secure AI infrastructure with WEKA's NeuralMesh(TM) intelligent, adaptive mesh storage solution to make it dramatically easier for enterprises to deploy AI data platform-aligned environments at scale - turning the NVIDIA reference design for the AI Factory into an operational reality.

"AI should deliver business impact, not infrastructure complexity," said Tenry Fu, CEO and co-founder, Spectro Cloud. "Partnering with WEKA lets us pair PaletteAI's orchestration with an AI-native data platform inside the NVIDIA AI Data Platform reference design, giving enterprises a faster, safer path to production."

AI Data Platform: the blueprint for the AI Factory

The NVIDIA AI Data Platform defines how to tightly integrate compute, networking, and storage so GPUs are never starved of data, unlocking near-real-time insights and improving AI agent accuracy. The reference design leverages NVIDIA BlueField DPUs for accelerated networking, storage, and security offload.
It integrates with NVIDIA Spectrum-X Ethernet networking for predictable, lossless east-west traffic, and NVIDIA AI Enterprise software, including NVIDIA NIM and NVIDIA NeMo microservices, to power inference and model operations at enterprise scale. Today's announcement operationalizes that architecture with turnkey integration, automated deployment, and AI-native data performance from Spectro Cloud and WEKA. Highlights include:
  • One-click AI Data Platform deployment with PaletteAI. PaletteAI uses a declarative, cloud-native approach to provision and configure AI data platform-aligned stacks end to end. Customers can now deploy a validated, end-to-end AI data platform stack that incorporates NVIDIA BlueField DPUs, NVIDIA Spectrum-X Ethernet networking, and NVIDIA AI Enterprise software, with configuration and lifecycle automation handled by PaletteAI.
  • NeuralMesh(TM) performance for AI pipelines. NeuralMesh by WEKA delivers the ultra-low-latency, high-throughput data access required to keep GPUs continuously fed for both training and inference. Unlike traditional storage that slows as workloads grow, WEKA's NeuralMesh architecture becomes faster and more resilient at scale. It powers high-throughput pipelines for RAG, vector search, multimodal ingestion, distributed training, and long-context inference, ensuring consistent GPU utilization and exascale performance.
  • Built on NVIDIA AI Enterprise. PaletteAI and WEKA align with NVIDIA AI Enterprise to ensure validated interoperability with NVIDIA NIM and NeMo microservices, providing a secure, high-performance foundation from pilot to production.
  • Operational efficiency at scale. PaletteAI separates platform guardrails from practitioner agility, enabling governed self-service environments, policy-based networking, and day-2 operations across hybrid, multicloud, and edge locations. Combined with WEKA's intelligent monitoring and self-healing capabilities, organizations can operate AI infrastructure at massive scale without adding operational complexity.

"The NVIDIA AI Data Platform represents the future of enterprise AI infrastructure, and WEKA is proud to be one of its foundational technology partners," said Nilesh Patel, Chief Strategy Officer, WEKA. "Together with Spectro Cloud, we're transforming the AI data platform from a reference design into a living system - one that can be deployed with a click, operated at global scale, and tuned for the microsecond latency and extreme throughput that modern agentic AI and reasoning workloads demand."

Available now

The Spectro Cloud x WEKA AI Data Platform Reference Architecture with NVIDIA - validated with leading OEM platforms such as Supermicro - is available to joint customers today through both companies' sales teams and authorized partners. To learn more about PaletteAI, visit the Spectro Cloud site at spectrocloud.com. For information about NeuralMesh by WEKA, visit weka.io.

About Spectro Cloud

With its Palette and PaletteAI platforms, Spectro Cloud solves how enterprises and public sector organizations manage full-stack application and AI infrastructure in any environment: from edge to cloud, and from metal to model. Using the power of cloud-native technologies like Kubernetes, Spectro Cloud gives platform engineers and operations teams the flexibility to choose their perfect stack while benefiting from complete, repeatable consistency. Spectro Cloud automates the full lifecycle of complex infrastructure at scale, for massive cost savings and better business outcomes. Learn more at spectrocloud.com.

About WEKA

WEKA is transforming how organizations build, run, and scale AI workflows with NeuralMesh(TM) by WEKA(R), its intelligent, adaptive mesh storage system. Unlike traditional data infrastructure, which becomes slower and more fragile as workloads expand, NeuralMesh becomes faster, stronger, and more efficient as it scales, growing dynamically with AI environments to provide a flexible foundation for enterprise AI and agentic AI innovation. Trusted by 30% of the Fortune 50, NeuralMesh helps leading enterprises, AI cloud providers, and AI builders optimize their GPUs, scale AI faster, and lower their innovation costs. Learn more at www.weka.io or connect with WEKA on LinkedIn and X.

Technology AI Insights
Nov 19th, 2025
WEKA Launches Next-Gen WEKApod Appliances to Transform AI Storage Economics

WEKA, a leader in AI storage technology, has unveiled the next generation of its WEKApod appliances, aiming to reshape long-standing performance-versus-cost compromises in modern AI infrastructure. With this launch, the company positions itself to meet the rapidly expanding demands of AI and high-performance computing (HPC), while offering organizations a more efficient path to scale their data operations.

To begin with, WEKA introduced WEKApod Prime, a completely redesigned appliance engineered to achieve 65% better price-performance. It accomplishes this by intelligently distributing data across mixed flash configurations, delivering strong economics without forcing customers to sacrifice performance. In parallel, the company rolled out WEKApod Nitro, which doubles performance density through next-generation hardware. This enhancement enables organizations to accelerate AI and HPC innovation, maximize GPU efficiency, and support larger customer bases. Moreover, its higher-density architecture makes it an excellent fit for large-scale object storage environments and AI data lakes that require uncompromised speed.

Furthermore, WEKApod appliances remain the fastest and simplest way to deploy NeuralMesh by WEKA, the world's only storage system purpose-built for large-scale AI acceleration. These appliances offer pre-validated, ready-to-deploy configurations and feature an improved plug-and-play setup experience. Companies can begin with as few as eight servers and expand to hundreds, avoiding complex integration work while gaining full access to distributed data protection, automated tiering, instant snapshots, encryption, hybrid cloud features, and multi-protocol access.

Addressing the infrastructure efficiency crisis
Enterprises investing in AI infrastructure increasingly struggle to demonstrate ROI due to underutilized GPUs, escalating inference costs, prolonged training cycles, and soaring cloud expenses. Legacy storage systems often force organizations to choose between performance and affordability, an approach that no longer aligns with AI's rapidly evolving requirements. Additionally, power, space, and cooling limitations in datacenters intensify the pressure to squeeze more capability into every rack unit. WEKA's next-generation WEKApod lineup directly confronts these constraints. WEKApod Prime eliminates the common performance-cost trade-off by optimizing data placement based on workload characteristics. This ensures full write performance while achieving breakthrough economic efficiency.

Breaking the performance-cost barrier

The WEKApod Prime leverages a unique mixed-flash design that combines TLC and eTLC flash drives within highly dense 1U or 2U configurations. Unlike traditional tiered storage systems that introduce caching layers and write penalties, WEKA's AlloyFlash technology maintains consistent, throttle-free performance. Notably, early adopters like the Danish Centre for AI Innovation (DCAI) are already benefiting from these advancements. The appliance also delivers substantial infrastructure improvements, including 4.6x better capacity density, 5x higher write IOPS per rack unit, and 68% lower power consumption per terabyte. As a result, AI workloads, particularly write-intensive tasks like checkpointing, run without bottlenecks that would otherwise idle costly GPUs.

Meanwhile, WEKApod Nitro supports AI factories operating at extreme scale. With upgraded hardware such as the NVIDIA ConnectX-8 SuperNIC offering 800 Gb/s throughput, Nitro delivers twice the performance and 60% better price-performance. Its turnkey certification with NVIDIA DGX SuperPOD and NVIDIA Cloud Partner (NCP) programs helps teams deploy solutions in days instead of months.
Industry impact and customer benefits

AI cloud providers, enterprises, and researchers stand to gain significant operational and financial advantages. Providers can improve margins and onboard customers faster, while enterprises can reduce power consumption by up to 68% and avoid major datacenter expansions. Researchers benefit from faster iteration cycles and GPU utilization rates exceeding 90%. Industry leaders are already recognizing these impacts:

"Space and power are the new limits of innovation in data centres. WEKApod's exceptional storage performance density allows us to deliver hyperscaler-level data throughput and efficiency within an optimised footprint, unlocking more AI capability per kilowatt and square metre," said Nadia Carlsten, CEO, Danish Centre for AI Innovation (DCAI). "This efficiency directly improves economics and accelerates how we bring AI innovation to our customers."

"AI investments must demonstrate ROI. WEKApod Prime delivers 65% better price-performance without compromising on speed, while WEKApod Nitro doubles performance to maximize GPU utilization. The result: faster model development, higher inference throughput, and better returns on compute investments that directly impact profitability and time-to-market," said Ajay Singh, Chief Product Officer at WEKA.

"Networking is essential to AI infrastructure, transforming AI compute and storage into a thinking platform that generates and delivers tokens of digital intelligence at scale," said Kevin Deierling, senior vice president of Networking at NVIDIA. "With NVIDIA Spectrum-X Ethernet and NVIDIA ConnectX-8 networking at the foundation of WEKApod, WEKA is helping enterprises eliminate data bottlenecks, which is critical to optimize AI performance."

INACTIVE