Software Engineer
Perception
Autonomous flight technology for efficient air cargo services
Company Overview
Xwing is a pioneering company in the field of autonomous flight technology, with a focus on transforming air mobility to be safer, more efficient, and sustainable. Their Superpilot technology allows for remotely operated flights, with real-time sensors and data collection enabling continuous improvement and adaptation to real-world scenarios. Recognized by Fast Company and TIME Magazine for their groundbreaking work, Xwing is also committed to accessibility, utilizing regional airports to provide reliable and affordable air delivery services to communities typically overlooked by major cargo routes.
AI & Machine Learning
Robotics & Automation
B2B
Company Stage
N/A
Total Funding
$54M
Founded
2016
Headquarters
San Francisco, California
Growth & Insights
Headcount
6 month growth
↓ -4%
1 year growth
↓ -17%
2 year growth
↑ 73%
Locations
Concord, CA, USA • San Francisco, CA, USA
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
Agile
AWS
Computer Vision
CUDA
Data Structures & Algorithms
Development Operations (DevOps)
Docker
Google Cloud Platform
PyTorch
TensorFlow
Kubernetes
Python
FPGA
Software Testing
Categories
AI & Machine Learning
Software Engineering
Requirements
- 2+ years of experience in autonomous vehicles working on robotics perception, machine learning, computer vision, lidar processing, or a related field
- Master's degree in Computer Science, Machine Learning, Robotics, or a related field
- Proven experience in software development in the autonomous vehicle industry, with a strong focus on robotics perception algorithms and machine learning
- Strong problem-solving skills and the ability to optimize perception algorithm performance
- Proficiency in a prototyping programming language such as Python
- Proficiency in a compiled programming language such as C/C++
- Familiarity with common robotics perception libraries and tools (see the brief sketch after this list), such as:
- Deep learning frameworks (e.g., TensorFlow, PyTorch)
- Computer vision libraries (e.g., OpenCV)
- Lidar processing (e.g., PCL, open3d)
- Middleware (e.g., ROS, protobuf)
- Knowledge of algorithm performance profiling and optimization techniques
- Excellent communication and collaboration skills
- Demonstrated ability to contribute timely deliverables in a fast-paced, agile development environment
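To give a rough sense of the library stack named above, here is a minimal, illustrative sketch combining OpenCV, Open3D, and PyTorch. The file names and the choice of a stock torchvision detector are placeholder assumptions, not part of the role description.

```python
# Minimal sketch of the perception stack named above (file paths are hypothetical).
import cv2                      # computer vision (OpenCV)
import open3d as o3d            # lidar / point cloud processing
import torch
import torchvision

# Load a camera frame and a lidar sweep from hypothetical sample files.
image = cv2.imread("sample_frame.png")
cloud = o3d.io.read_point_cloud("sample_sweep.pcd")

# Basic lidar preprocessing: voxel downsampling and statistical outlier removal.
cloud = cloud.voxel_down_sample(voxel_size=0.2)
cloud, _ = cloud.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Run a pretrained detector on the camera frame with PyTorch / torchvision.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
tensor = torch.from_numpy(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)).permute(2, 0, 1).float() / 255.0
with torch.no_grad():
    detections = model([tensor])[0]

print(len(cloud.points), "lidar points after filtering")
print(detections["boxes"].shape[0], "detections in the camera frame")
```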
Responsibilities
- Develop and evaluate robotics perception algorithms (deep learning, machine learning, computer vision, and related techniques) on our unique autonomous flight datasets (camera, lidar)
- Set up and maintain scalable dataset, metrics evaluation, and visualization pipelines for your perception algorithms
- Investigate and implement algorithm improvements to exceed safety metrics requirements
- Rigorously optimize, deploy, and benchmark the algorithms on a variety of compute hardware such as CPU, GPU, and possibly FPGA. This may include:
- Optimize machine learning models for parallel processing (GPUs, hardware accelerators)
- Integrate machine learning models with inference engines and runtime libraries that are optimized for specific hardware platforms (e.g. Apache TVM, ONNX Runtime, TensorRT, TensorFlow Serving)
- Write and optimize GPU-specific code, often using libraries such as OpenCL, CUDA (for NVIDIA GPUs), or ROCm (for AMD GPUs) to accelerate model inference
- Investigate and apply techniques for model compression and quantization to reduce memory and compute requirements while maintaining model performance (see the quantization sketch after this list)
- Implement unit tests, integration tests, and end-to-end tests for perception components
- Contribute intuitive, readable, scalable, and modular code to the team repositories
- Create and maintain documentation for the key perception components you own
- Work closely with other teams (systems, software, navigation, flight test, etc.) to ensure seamless integration of perception algorithms into testing and production environments
- Contribute to critical team tooling, processes and best practices
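As a hedged illustration of the optimization and deployment work described above, the sketch below exports a PyTorch model to ONNX and applies post-training dynamic quantization with ONNX Runtime. The ResNet-18 model and file names are placeholders, not the team's actual pipeline.

```python
import torch
import torchvision
import onnxruntime as ort
from onnxruntime.quantization import quantize_dynamic, QuantType

# Placeholder model; a real perception model would come from the team's repositories.
model = torchvision.models.resnet18(weights="DEFAULT").eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX so the model can run under hardware-specific inference runtimes.
torch.onnx.export(model, dummy_input, "model_fp32.onnx", opset_version=17)

# Post-training dynamic quantization: weights stored as int8,
# activations quantized on the fly at inference time.
quantize_dynamic("model_fp32.onnx", "model_int8.onnx", weight_type=QuantType.QInt8)

# Run the quantized model with ONNX Runtime on CPU as a quick sanity check.
session = ort.InferenceSession("model_int8.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {session.get_inputs()[0].name: dummy_input.numpy()})
print("output shape:", outputs[0].shape)
```

In practice, the quantized model would then be benchmarked against the full-precision baseline on the target hardware to confirm that latency improves without unacceptable accuracy loss.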
Desired Qualifications
- 5+ years of experience in autonomous vehicles working on robotics perception, machine learning, computer vision, lidar processing, or a related field
- PhD in Computer Science, Machine Learning, Robotics, or a related field
- Familiarity with GPU or FPGA acceleration for machine learning
- Familiarity with model compilation techniques such as model quantization, pruning, and kernel optimization (a brief pruning sketch follows this list)
- Experience with containerization and deployment technologies (e.g., Docker, Kubernetes)
- Experience with edge computing and deploying models on resource-constrained devices
- Experience developing software running under an RTOS or hypervisor on resource-constrained hardware architectures
- Experience with cloud-based ML services (e.g. AWS, GCP)
- Experience setting up DevOps/MLOps pipelines
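For the model compilation techniques mentioned above, a minimal pruning sketch using PyTorch's built-in utilities might look like the following; the ResNet-18 model and the 30% sparsity level are illustrative assumptions only.

```python
import torch
import torch.nn.utils.prune as prune
import torchvision

# Placeholder model standing in for a real perception network.
model = torchvision.models.resnet18(weights="DEFAULT")

# Apply 30% unstructured L1 pruning to every convolutional layer's weights.
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the pruning permanent

# Report the resulting global sparsity.
zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"global sparsity: {zeros / total:.1%}")
```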