Data Engineer
Posted on 9/22/2023
INACTIVE
Qarik Group

51-200 employees

Accelerates clients' cloud transformation journeys
Company Overview
Qarik Group, a Google Cloud Premier Partner, offers a unique blend of expertise from Xooglers, Wall Street veterans, and delivery experts, providing a competitive edge in accelerating Cloud Native practices for clients. The company's hands-on approach has led to improved engineering productivity and rapid response to market changes for numerous clients. Qarik's focus on transitioning from data-center-native to cloud-native models not only accelerates a company's transformation journey but also enhances its competitive advantage and ability to attract top engineering talent.
Consulting

Company Stage: Seed

Total Funding: N/A

Founded: 2019

Headquarters: New York, New York

Growth & Insights
Headcount growth: 0% (6 months), -10% (1 year), -20% (2 years)
Locations
New York, NY, USA
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
BigQuery
Data Analysis
Google Cloud Platform
Hadoop
Java
R
MySQL
Postgres
SQL
Tableau
Apache Beam
Apache Flink
Python
Looker
Categories
Data & Analytics
Requirements
  • Effective communication, interpersonal and organizational skills
  • Ability to self-motivate and manage your own tasks and projects against an agreed roadmap
  • Previous experience as a data engineer or in a similar role working with big data pipelines
  • Strong experience with Data Warehousing (e.g. BigQuery) and Transactional DBs (such as Spanner, AlloyDB, MySQL, Postgres, etc.)
  • Strong experience with visualization, reporting and analytics tools such as Looker and Tableau
  • Knowledge of other big data tools, such as Hadoop or Flink, is a plus
  • Fluency in SQL or similar data manipulation syntax for relational databases
  • Technical expertise in a relevant programming language (e.g. SQL, Python, R, Java)
Responsibilities
  • Guide the client through their data journey with a thorough knowledge of the GCP data product offerings and ecosystem.
  • Write data ingest and ETL integrations, ensuring the ETL process performs optimally across both streaming and batch modes (e.g. Apache Beam; see the pipeline sketch after this list)
  • Write efficient and modular SQL to allow optimal querying of complex questions (a sample query appears after this list)
  • Design, build and deliver visualization capabilities such as reporting and dashboarding via off-the-shelf systems as well as custom developed metrics visualization (e.g. with Looker)
  • Architect, create, and manage the technological infrastructure of a data platform
  • Evaluate business needs and objectives; generate ideas for data innovation to meet clients’ business goals; enable business leaders to do exploratory analytics
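For illustration only, here is a minimal Apache Beam sketch in Python of the kind of pipeline the ETL responsibility describes, with the same parse-and-load logic running in either batch or streaming mode. The Pub/Sub topic, GCS path, BigQuery table, and schema are hypothetical placeholders, not Qarik or client systems.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions


def run(streaming: bool = False):
    options = PipelineOptions()
    options.view_as(StandardOptions).streaming = streaming

    with beam.Pipeline(options=options) as p:
        if streaming:
            # Streaming mode: read JSON events from a (hypothetical) Pub/Sub topic.
            raw = p | "ReadPubSub" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/events"
            )
        else:
            # Batch mode: read the same JSON events from files on GCS.
            raw = p | "ReadGCS" >> beam.io.ReadFromText("gs://my-bucket/events/*.json")

        (
            raw
            | "Parse" >> beam.Map(json.loads)
            | "WriteBQ" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",
                schema="user_id:STRING,event:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run(streaming=False)
```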
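Similarly, a rough sketch of what "efficient and modular SQL" can look like when issued through the BigQuery Python client: common table expressions keep each step of a complex question small and reusable. The dataset, tables, and columns are invented for the example.

```python
from google.cloud import bigquery

client = bigquery.Client()

query = """
WITH daily_orders AS (
    SELECT customer_id, DATE(order_ts) AS order_date, SUM(amount) AS daily_total
    FROM `my-project.sales.orders`
    GROUP BY customer_id, order_date
),
active_customers AS (
    SELECT customer_id
    FROM daily_orders
    GROUP BY customer_id
    HAVING COUNT(DISTINCT order_date) >= 5
)
SELECT d.order_date, SUM(d.daily_total) AS revenue_from_active_customers
FROM daily_orders AS d
JOIN active_customers USING (customer_id)
GROUP BY d.order_date
ORDER BY d.order_date
"""

# Run the query and print one row per day.
for row in client.query(query).result():
    print(row.order_date, row.revenue_from_active_customers)
```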
Desired Qualifications
  • Knowledge of other big data tools such as Hadoop, Flink, etc.
  • Experience with programming languages like Python, R, or Java