Staff Engineer
Devops & Release, Big Data, Federal
Confirmed live in the last 24 hours
Locations
Remote in USA • Belmont, MA, USA
Desired Skills
Node.js
Apache Spark
AWS
Bash
Data Analysis
Data Science
Development Operations (DevOps)
Docker
Groovy
Hadoop
JavaScript
Jenkins
Kafka
Git
Linux/Unix
Management
Maven
Microsoft Azure
Perl
Puppet
RabbitMQ
Redis
Tableau
Terraform
Kubernetes
Python
Yarn
Sentry
Ansible
Requirements
  • 6+ years of overall experience, including at least 3 years as a Big Data DevOps/Release Engineer
  • Expert level experience in Software Configuration Management (SCM) and Build tools such as Git, GitLab, Nexus, Maven, Grunt and Node.js
  • Demonstrated experience in many of these technologies - Jenkins, Ansible, Terraform, Puppet, Docker, Kubernetes orchestration and similar technologies
  • Demonstrated experience automating the build, release, and configuration management of non-prod and production environments using RPMs and scripting languages such as Perl, Python, Bash, and Groovy
  • Good knowledge of querying and analyzing large amounts of data on Hadoop HDFS using Hive and Spark Streaming, and of working with systems such as HDFS, YARN, Hive, HBase, Spark, Kafka, RabbitMQ, Impala, Kudu, Redis, Tableau, Grafana, and Prometheus
  • Experience securing Hadoop stack with Sentry, Ranger, LDAP, Kerberos KDC
  • Experience supporting CI/CD pipelines for Cloudera in native cloud and Azure/AWS environments
  • In-depth knowledge of Linux internals (CentOS 7.x) and shell scripting
  • Ability to learn quickly in a fast-paced, dynamic team environment
  • BS Degree in Computer Science or equivalent experience
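To illustrate the release-automation requirement above, here is a minimal, hypothetical Python sketch: it derives the next RPM release version from an existing tag and invokes rpmbuild. All names, tag formats, and spec macros are illustrative, not this role's actual tooling.

```python
import re
import subprocess

def next_release(tag: str) -> str:
    """Bump the patch component of a 'name-X.Y.Z' style tag.

    Purely illustrative; real versioning schemes vary per project.
    """
    m = re.match(r"(?P<name>.+)-(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)$", tag)
    if not m:
        raise ValueError(f"unrecognized tag format: {tag}")
    return f"{m['name']}-{m['major']}.{m['minor']}.{int(m['patch']) + 1}"

def build_rpm(spec_file: str, version: str) -> None:
    """Invoke rpmbuild with the computed version (spec file and macro are hypothetical)."""
    subprocess.run(
        ["rpmbuild", "-bb", "--define", f"pkg_version {version}", spec_file],
        check=True,
    )

if __name__ == "__main__":
    print(next_release("bigdata-etl-1.4.2"))  # bigdata-etl-1.4.3
```

In practice a script like this would run inside a Jenkins job, with the tag read from Git and the resulting RPM pushed to a repository manager such as Nexus.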
Responsibilities
  • Collecting, storing, and providing real-time access to large amounts of data
  • Provide real-time analytic tools and reporting capabilities for various functions including:
  • Building, deploying, maintaining, and managing Big Data applications based on established best practices, and ensuring availability, performance, scalability, and security of the Big Data systems. This is achieved using Software Configuration Management (SCM) and Build tools such as Git, GitLab, Nexus, Maven, Grunt, and Node.js
  • Establishing Continuous Integration, and Continuous Deployment (CI/CD) pipelines for applications using tools such as Jenkins, Docker and Ansible
  • Providing production support to resolve critical build and release issues and mitigating or minimizing any impact on Big Data applications. Providing support to Development, QA, and System Engineering teams in replicating complex issues, leveraging experience with Build tools, Jenkins, CI/CD, Ansible, etc.
  • Enhancing build and release tools with new requirements from different stakeholders by ensuring that the technical specifications meet business needs. This involves automating the build, release, deployment, and configuration management of non-prod and production environments using RPMs and scripting languages such as Perl, Python, Bash, and Groovy
  • Supporting Cloudera based production Big Data analytics platform which includes resolving incident tickets created by Site Reliability Engineers (SRE)
  • Performance tuning and troubleshooting of various Hadoop components and other data analytics tools in the environment: HDFS, YARN, Hive, HBase, Spark, Kafka, RabbitMQ, Impala, Kudu, Redis, Kerberos, Tableau, Grafana, and Prometheus
  • Responsible for enforcing data governance policies in Commercial and Regulated Big Data environments
  • Perform production monitoring and support for Big Data infrastructure and Big Data applications in ServiceNow cloud and Azure cloud
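The CI/CD and monitoring responsibilities above often reduce to a promotion gate: a pipeline step that checks service health before a release proceeds. A minimal sketch, assuming a hypothetical JSON health endpoint mapping check names to statuses (the endpoint, check names, and "ok" convention are assumptions, not ServiceNow specifics):

```python
import json
import urllib.request

def all_checks_green(statuses: dict) -> bool:
    """Promote a release only when every service check reports 'ok'."""
    return bool(statuses) and all(s == "ok" for s in statuses.values())

def fetch_statuses(url: str) -> dict:
    """Fetch a JSON map of check name -> status from a (hypothetical) health endpoint."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Example snapshot; in a pipeline this would come from fetch_statuses(...)
    statuses = {"hdfs": "ok", "yarn": "ok", "kafka": "degraded"}
    print(all_checks_green(statuses))  # False
```

A Jenkins stage could call a script like this and fail the build on a non-green result, which is one common way to keep bad releases out of the Cloudera production environment.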
ServiceNow

10,001+ employees

Digital workflows
Company Overview
ServiceNow provides cloud-based solutions that define, structure, manage, and automate services for enterprise operations.