Design and prepare tick data for ingestion into Snowflake, collaborating with internal and external teams to ensure best practices are followed.
Fine-tune Snowflake database performance, managing clustering keys and partitioning strategies while minimizing costs and optimizing data consumption for applications.
Build data pipelines for applications, including End-of-Day bulk feeds, on-demand APIs, and Snowflake data sharing.
Improve tick data offerings by researching technologies, loading Level 2 data, and optimizing time-binned data access.
Explore, design, and architect potential new storage layers and database architectures that could outperform Snowflake for managing tick history data. Lead proof of concept (POC) initiatives to evaluate the feasibility, performance, and scalability of these alternative solutions.
Design and architect scalable APIs and data connector frameworks.
Lead the architecture, design, development, and launch of highly available, low-latency, flexible, and scalable APIs.
Mentor and assist junior engineers in design, implementation, and code reviews.
Stay up to date with advancements in Big Data technologies and data warehousing tools.
Ensure adherence to data security, privacy, and compliance standards.
Drive innovation by evaluating emerging tools and technologies that could improve data management and performance.
Determine operational feasibility by evaluating problem definitions, requirements, solution development, and proposed solutions.
Demonstrate a thorough knowledge of data structures and algorithms, object-oriented programming, and software engineering principles.
Ensure that deployed products are properly maintained throughout their lifecycle.
This position requires participation in on-call support on a rotational basis with the team.
10+ years of related work experience with a focus on data engineering and cloud data platforms.
Experience in building RESTful APIs, data pipelines, and managing bulk data processes.
Expertise in database architecture, performance tuning, and data ingestion techniques.
Demonstrated expertise in Python and related scripting languages to automate complex data workflows, optimize ETL processes, and develop robust data integration solutions.
Strong analytical skills with the ability to make data-driven decisions.
Experience working with relational databases.
Experience working with cloud infrastructure (e.g., AWS, Azure).
Experience working with Snowflake or related technologies.
Knowledge of financial market data, tick data, and Level 2 data is a big plus.
Familiarity with various levels of testing (unit, regression, integration, and load) is a plus.
Experience with finance and financial market data is a plus.
Experience with Big Data technologies (e.g., Spark, Hadoop) is a plus.
Understanding of Java and frameworks like Spring Boot is a plus.
Familiarity with building RESTful services is a plus.
Experience with NoSQL storage is a plus.
Experience with AWS tools and technologies is a plus.
Experience with multithreaded, caching, and high availability software development is a plus.
Self-starter with a deep interest in learning new technologies and taking different approaches to solving complex problems.
Strong written and interpersonal communication skills to interact with business analysts, peers, and management.
Able to work both as an individual contributor and as part of a geographically diverse team.
Ability to lead and take full ownership of assigned tasks.
Ability to articulate and quickly adopt development best practices.
Ability to create and review documentation.
Bachelor’s degree or equivalent in Computer Science or a related field.
The budgeted salary range for this position in the states of Connecticut and New York is $160,000.00 to $210,000.00.