Senior Software Engineer
Updated on 11/9/2023
Platform providing real-time business data and insights
Company Overview
ZoomInfo, a leading go-to-market platform, offers businesses a competitive edge by providing accurate, real-time data and insights to over 35,000 companies globally, thereby enhancing efficiency and aligning sales and marketing teams. The company stands out for its strict adherence to data privacy, boasting industry-leading GDPR and CCPA compliance and multiple data security and privacy certifications. Furthermore, ZoomInfo's all-in-one platform enables businesses to identify and engage potential customers before competitors, uniting sales and marketing teams around a single source of truth and facilitating rapid scaling through task automation across all outreach channels.
Data & Analytics
Company Stage
Series A
Total Funding
$7M
Founded
2007
Headquarters
Vancouver, Washington
Growth & Insights
Headcount
6 month growth
↑ 0%
1 year growth
↑ 6%
2 year growth
↑ 48%
Locations
Vancouver, WA, USA
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
Apache Spark
AWS
Data Science
Hadoop
Git
Airflow
Scala
Snowflake
SQL
Categories
Software Engineering
Requirements
- Degree (Masters preferred) in Computer Science, Information Systems, Data Science, or related field and 7+ years of experience in data engineering, or an equivalent combination of education and experience.
- Expert, hands-on development experience in distributed computing environments using Hadoop, Scala, Spark, and Apache Airflow
- Advanced SQL knowledge
- Experience architecting solutions in collaboration with development and data science teams.
- Experience working with third party APIs for data collection
- Ability to communicate effectively with, and provide excellent customer service to, stakeholders at all levels of the organization.
- Experience mentoring/coaching business end users and junior analysts
- Self-motivated; able to work independently to complete tasks and collaborate with others to identify and implement solutions.
- Expertise and hands-on working knowledge of AWS
- Experience with Git/Github or other version control systems
- Familiarity with Snowflake or Databricks is a plus.
Responsibilities
- Develop data pipelines using Scala and Spark on EMR to move data into our consolidated data lake or data warehouse (Snowflake)
- Develop performant pipelines with three factors in mind: quality, scalability, and reliability
- Look for ways to improve processes and take initiative to implement them
- Evaluate new technology and advise on our data lake ecosystem
- Collaborate with cross-functional teams to design and implement impactful solutions to department and business problems
- Support end users and lead junior teammates on code-related questions and issues
- Create and maintain documentation for business end users and other data analysts
- Determine where in our infrastructure we should house our data based on the use case and data model
- Collaborate with data scientists to design scalable implementations of their models
Desired Qualifications
- Familiarity with Snowflake or Databricks