Senior DevOps Engineer
Hadoop, Cloudera, Federal, 2nd Shift
Locations
Remote in USA • Kirkland, WA, USA
Experience Level
Senior
Desired Skills
Apache Spark
AWS
Bash
Apache Kafka
Data Analysis
Data Science
Development Operations (DevOps)
Docker
Groovy
Hadoop
Jenkins
Java
Linux/Unix
Microsoft Azure
Perl
Puppet
RabbitMQ
Redis
SQL
Tableau
Terraform
Kubernetes
Python
UI/UX Design
YARN
Sentry
Ansible
Requirements
  • 4+ years of overall experience, with at least 2 years as a Big Data DevOps / Deployment Engineer
  • Demonstrated expert-level experience delivering end-to-end deployment automation leveraging Puppet, Ansible, Terraform, Jenkins, Docker, Kubernetes, or similar technologies
  • Deep understanding of the Hadoop/Big Data ecosystem; good knowledge of querying and analyzing large amounts of data on HDFS using Hive and Spark Streaming (a minimal sketch follows this list), and of working with systems such as HDFS, YARN, Hive, HBase, Spark, Kafka, RabbitMQ, Impala, Kudu, Redis, Hue, Tableau, Grafana, MariaDB, and Prometheus
  • Experience securing the Hadoop stack with Sentry, Ranger, LDAP, and Kerberos KDC
  • Experience supporting CI/CD pipelines for Cloudera in native-cloud and Azure/AWS environments
  • Good knowledge of Perl, Python, Bash, Groovy, and Java
  • In-depth knowledge of Linux internals (CentOS 7.x) and shell scripting
  • Ability to learn quickly in a fast-paced, dynamic team environment
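
As a minimal sketch of the Hive/Spark querying called out above, the PySpark snippet below opens a Hive-enabled Spark session and runs a simple aggregation. The analytics.events table and its event_ts column are hypothetical placeholders, and the snippet assumes a Spark build with Hive support available on the cluster.

    # Minimal PySpark sketch: query a Hive table from a Spark session.
    # The database/table "analytics.events" and column "event_ts" are
    # hypothetical placeholders; assumes Hive support is enabled.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("hive-query-sketch")
        .enableHiveSupport()   # read tables registered in the Hive metastore
        .getOrCreate()
    )

    # Aggregate daily event counts from the (hypothetical) Hive table.
    daily_counts = spark.sql("""
        SELECT to_date(event_ts) AS day, COUNT(*) AS events
        FROM analytics.events
        GROUP BY to_date(event_ts)
        ORDER BY day
    """)
    daily_counts.show(10)
    spark.stop()
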
Responsibilities
  • Collecting, storing, and providing real-time access to large amounts of data
  • Provide real-time analytics tools and reporting capabilities for various business functions
  • Deploy, monitor, maintain, and support Big Data infrastructure and applications in ServiceNow Cloud and Azure environments
  • Architect and drive end-to-end Big Data deployment automation from vision to delivery, automating the Big Data foundational modules (Cloudera CDP), prerequisite components, and applications with Ansible, Puppet, Terraform, Jenkins, Docker, and Kubernetes across all ServiceNow environments
  • Automate Continuous Integration / Continuous Deployment (CI/CD) pipelines for applications leveraging tools such as Jenkins, Ansible, and Docker (see the first sketch after this list)
  • Tune and troubleshoot the performance of Hadoop components and other data analytics tools in the environment: HDFS, YARN, Hive, HBase, Spark, Kafka, RabbitMQ, Impala, Kudu, Redis, Hue, Kerberos, Tableau, Grafana, MariaDB, and Prometheus (see the monitoring sketch after this list)
  • Provide production support to resolve critical Big Data pipeline and application issues, mitigating or minimizing impact on Big Data applications; collaborate closely with Site Reliability Engineering (SRE), Customer Support (CS), Development, QA, and Systems Engineering teams to replicate complex issues, leveraging broad experience with UI, SQL, full-stack, and Big Data technologies
  • Responsible for enforcing data governance policies in Commercial and Regulated Big Data environments
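
As a minimal sketch of the CI/CD automation above, the snippet below queues a parameterized Jenkins build through the Jenkins REST API. The Jenkins URL, job name, build parameter, and credentials are hypothetical; a real pipeline would pull the API token from a secrets store rather than hard-coding it.

    # Minimal sketch: trigger a parameterized Jenkins job over its REST API.
    # URL, job name, parameter, and credentials below are hypothetical.
    import requests

    JENKINS_URL = "https://jenkins.example.com"
    JOB = "bigdata-deploy"                    # hypothetical pipeline job
    AUTH = ("ci-bot", "api-token-goes-here")  # username + Jenkins API token

    # POST /job/<name>/buildWithParameters queues a new build.
    resp = requests.post(
        f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
        auth=AUTH,
        params={"TARGET_ENV": "staging"},     # hypothetical job parameter
        timeout=30,
    )
    resp.raise_for_status()
    # Jenkins reports the queued item's URL in the Location header.
    print("Queued:", resp.headers.get("Location"))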
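
And as a minimal sketch of the monitoring side, the snippet below runs an instant PromQL query against the Prometheus HTTP API to spot-check a cluster metric. The Prometheus URL and the metric name are hypothetical placeholders.

    # Minimal sketch: spot-check a cluster metric via the Prometheus HTTP API.
    # The Prometheus URL and metric name are hypothetical placeholders.
    import requests

    PROM_URL = "https://prometheus.example.com"

    # /api/v1/query evaluates an instant PromQL expression and returns JSON.
    resp = requests.get(
        f"{PROM_URL}/api/v1/query",
        params={"query": "hadoop_namenode_capacity_used_percent"},
        timeout=30,
    )
    resp.raise_for_status()
    for sample in resp.json()["data"]["result"]:
        print(sample["metric"], sample["value"])
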
ServiceNow

10,001+ employees

Cloud-based enterprise operations solutions
Company Overview
ServiceNow’s mission is to transform IT to revolutionize the enterprise by placing a service-oriented lens on the activities, tasks, and processes that make up day-to-day work life. The company is committed to helping modern enterprises operate faster and become more scalable through their platform that optimizes processes, makes work more intuitive, and discovers insights that create new value.
Benefits
  • Generous family leave
  • Flexible PTO
  • Matched Donations
  • Retirement benefits
  • Annual learning stipends
  • Paid volunteer time
Company Core Values
  • Wow our customers
  • Win as a team
  • Create belonging
  • Stay hungry and humble