Data Engineer
Posted on 11/30/2023
INACTIVE
CoreWeave

201-500 employees

Specialized cloud provider offering high-performance GPU compute resources
Company Overview
CoreWeave is a specialized cloud provider that offers a broad range of high-performance GPU compute resources, making it an industry leader for compute-intensive tasks such as VFX and rendering, machine learning, and AI. As an NVIDIA Elite Cloud Solutions Provider, the company provides reliable, on-demand access to GPU resources, which has resulted in significant cost savings and performance improvements for its clients. CoreWeave's commitment to delivering world-class results and its ability to scale resources quickly and easily make it an ideal workplace for those seeking to work at the forefront of cloud computing technology.
AI & Machine Learning
Data & Analytics
Hardware
B2B

Company Stage: N/A
Total Funding: $2.8B
Founded: 2017
Headquarters: New York, New York

Growth & Insights
Headcount
6 month growth: 80%
1 year growth: 296%
2 year growth: 737%
Locations
Livingston, NJ, USA • Philadelphia, PA, USA • Brooklyn, NY, USA
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
Apache Spark
AWS
Data Analysis
Google Cloud Platform
Hadoop
Java
Airflow
Microsoft Azure
SQL
Python
NoSQL
Categories
Data & Analytics
Requirements
  • Bachelor's degree in Computer Science, Engineering, or a related field (Master's preferred)
  • Proven experience as a Data Engineer or similar role in a fast-paced environment
  • Proficiency in programming languages such as Python, Java, or Scala
  • Strong SQL skills for data manipulation and querying
  • Experience with data pipeline orchestration tools (e.g., Apache Airflow) and big data technologies (e.g., Hadoop, Spark); see the orchestration sketch after this list
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and related data services
  • Familiarity with database systems (e.g., SQL, NoSQL) and data warehousing concepts
  • Excellent problem-solving and communication skills
  • Strong attention to detail and a commitment to data quality
  • Ability to work independently and collaborate effectively in a team
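To make the orchestration requirement above concrete, here is a minimal sketch of an Apache Airflow DAG wiring three Python tasks into a daily extract-transform-load run. It assumes Airflow 2.4+; the DAG name, schedule, and the extract/transform/load placeholders are illustrative assumptions, not details from this posting.

  # Minimal Airflow DAG sketch (assumed Airflow 2.4+); names and schedule are illustrative.
  from datetime import datetime

  from airflow import DAG
  from airflow.operators.python import PythonOperator

  def extract():
      # Placeholder: pull records from an API or source database into staging.
      ...

  def transform():
      # Placeholder: clean and model the staged records.
      ...

  def load():
      # Placeholder: write the modeled records to the warehouse.
      ...

  with DAG(
      dag_id="example_daily_pipeline",   # hypothetical DAG name
      start_date=datetime(2024, 1, 1),
      schedule="@daily",                 # Airflow 2.4+; older releases use schedule_interval
      catchup=False,
  ) as dag:
      extract_task = PythonOperator(task_id="extract", python_callable=extract)
      transform_task = PythonOperator(task_id="transform", python_callable=transform)
      load_task = PythonOperator(task_id="load", python_callable=load)

      # Run the three steps in order: extract, then transform, then load.
      extract_task >> transform_task >> load_task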
Responsibilities
  • Design, develop, and maintain robust and scalable data pipelines to collect, process, and store data from various sources, including APIs, databases, and third-party services; see the pipeline sketch after this list
  • Create and optimize data models to support analytics and reporting, ensuring data accuracy, consistency, and performance
  • Collaborate with cross-functional teams to integrate data into applications and analytics platforms, making it possible to visualize performance against metrics and identify opportunities for improvement
  • Implement data security best practices to protect sensitive information and comply with data privacy regulations
  • Maintain comprehensive documentation of data pipelines, data models, and configurations for knowledge sharing and troubleshooting
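As a rough illustration of the first responsibility (collect, process, and store data), here is a minimal PySpark sketch that reads raw events, applies a simple cleaning and modeling step, and writes partitioned output for analytics. It assumes Spark 3.x; the bucket paths, column names, and file formats are illustrative assumptions, not details from this posting.

  # Minimal PySpark collect/process/store sketch (assumed Spark 3.x); paths and columns are hypothetical.
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("example_events_pipeline").getOrCreate()

  # Read raw events landed from an upstream API or database export (hypothetical path).
  raw = spark.read.json("s3a://example-bucket/raw/events/")

  # Basic cleaning and modeling: drop rows missing key fields, derive a partition column.
  cleaned = (
      raw.dropna(subset=["event_id", "event_ts"])
         .withColumn("event_date", F.to_date(F.col("event_ts")))
  )

  # Store as partitioned Parquet for downstream analytics and reporting (hypothetical path).
  (
      cleaned.write.mode("overwrite")
             .partitionBy("event_date")
             .parquet("s3a://example-bucket/curated/events/")
  )

  spark.stop()

Partitioning by date keeps downstream reporting queries cheap, which is one common way to address the accuracy, consistency, and performance goals listed above.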