Full-Time

Senior Analytics Engineer, Data Products

Cloud Data Engineering, Snowflake Data Engineering

Posted on 5/12/2026

Zensar

No salary listed

Pune, Maharashtra, India

Remote

Remote role; candidates must be based in India; working hours overlap with EST/IST/UK time zones.

Category
Data & Analytics
Required Skills
Python
Git
SQL
Data Engineering
Databricks
Snowflake
Requirements
  • 6 to 10 years in data engineering, analytics engineering, or a closely related role.
  • Databricks experience, including hands-on work with Unity Catalog, Delta Lake, and SQL or notebook-based development.
  • Python and SQL at an engineering level. You write production-quality transformation code, not just ad hoc queries.
  • Solid understanding of medallion architecture (bronze to silver to gold) and when to use each layer.
  • Experience building and supporting semantic layers, data catalogs, or self-service data products in production.
  • Track record building shared, cross-domain datasets that are used by multiple teams, not just a single reporting use case.
  • Strong stakeholder management. You can align definitions across product, actuarial, and engineering partners and make practical tradeoffs when requirements conflict.
  • Comfortable with modern engineering workflows: Git-based version control, code review, and basic testing or validation before release.
  • Strong written communication. You will write requirements documents and technical specs, but you are also expected to build and ship the work.
Responsibilities
  • Partner on Unity Catalog operations: Work with the Data Engineer to keep catalogs and schemas organized, tighten naming conventions, and implement permission patterns that match how actuarial users work. When access or discoverability is broken, you help fix it and document the pattern.
  • Partner to deliver the silver and gold layer: Work with the Data Engineer to design transformation logic, define table and metric definitions, review outputs, and validate results with actuarial users. You will contribute directly (SQL, notebooks, and documentation), but the Data Engineer is the primary owner for production pipelines and releases.
  • Build cross-product aggregated datasets: Implement canonical datasets that join and roll up measures and dimensions across products and lines of business. Optimize for consistent definitions, good performance, and wide reuse.
  • Operate datasets like products: Version key tables, set clear expectations (freshness, completeness, schema stability), and communicate changes before they break downstream workflows. Make it easy for other teams and products to depend on your data.
  • Build and publish the semantic layer: Implement metric definitions, a business glossary, and curated datasets, then publish examples so users can self-serve in Databricks. Iterate based on questions you get and what people actually use.
  • Partner on data contracts and quality checks: Work with the Data Engineer on contracts, schema checks, and lineage so downstream workflows can trust the data. You help define what “good” looks like, add documentation, and support triage when something breaks.
  • Support self-service and answer questions: Publish examples and lightweight documentation so users can query curated data safely (Databricks Genie and Databricks SQL). You are not expected to onboard users one-by-one, but you will answer questions, unblock teams, and incorporate feedback into the datasets.
  • Keep documentation in sync with production: Maintain dataset definitions, column-level documentation, and metric standards as the tables evolve. The goal is a source of truth people actually trust and use.
Desired Qualifications
  • Experience with Databricks Genie or AI-BI features.
  • Familiarity with MCP (Model Context Protocol), LLM tool calling, or AI agent patterns.
  • Background in financial services, insurance, or reinsurance data.

