Computer Vision Engineer
Kargo

51-200 employees

Automated logistics technology for real-time freight tracking
Company Overview
Kargo is a logistics technology company redefining the industry by integrating freight and data in real time, using a machine vision system to optimize shipping and receiving at every loading dock. The company's automated data capture system eliminates manual effort, providing real-time updates to existing inventory systems and offering unparalleled visibility into the logistics process. With its ability to flag discrepancies, verify shipments, and provide actionable data, Kargo empowers businesses to make informed decisions, streamline operations, and maintain strong customer relationships.
Industrial & Manufacturing
Data & Analytics
Hardware
B2B

Company Stage: Series A

Total Funding: $33M

Founded: 2019

Headquarters: San Francisco, California

Growth & Insights
Headcount: 6 month growth 6%, 1 year growth -4%, 2 year growth 144%
Locations
San Francisco, CA, USA
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
Python
Data Structures & Algorithms
PyTorch
Java
Computer Vision
Categories
AI & Machine Learning
Software Engineering
Requirements
  • 5+ years experience as a Computer Vision Engineer, Machine Learning Engineer, or similar role
  • Experience with Python and PyTorch (Java and C++ are pluses)
  • Experience building and deploying large-scale machine learning models with feedback loops for continuous improvement
  • Experience building performant, distributed training and inference pipelines on very large datasets
  • Comfortable with full-stack / backend development, with a strong understanding of underlying data structures and other dependencies
  • Experience building and optimizing CV models or algorithms on the edge
  • Degree in Computer Science, Math, Statistics, Engineering, or a related quantitative field, or equivalent experience
Responsibilities
  • Build CV/ML algorithms for a variety of vision tasks, including training and optimizing CV/ML models
  • Build the backend or edge infrastructure to scale our training and inference workloads, including training pipelines, evaluation, and model deployment
  • Design and collect datasets, and train models; convert data into models and drive specifications for data