
Darin Denio

Georgetown, TX

Summary

Seeking a responsible and challenging position as a Staff Linux Engineer.

16 years' experience working with Linux and Unix as well as Hadoop administration. Linux OS deployment, patching, and security hardening. Automated software installs with Ansible. Experience with CDH 6.2 cluster installation. Upgraded CDH 6.2 to CDP 7.1.7. CDH 6.2 security configuration for a bank, including Kerberos and SSL setup. Corporate LDAP (Microsoft domain) configuration for clusters. Tuning Hive jobs in CDH 6.2 and CDP 7.1.7. Monitoring cluster health in CDH 6.2 and HDP 2.6.4. Troubleshooting HDFS issues, including NameNode crashes. Upgraded CDH from 6.2.1 to 6.2.3 and HDP from 2.6.2 to 2.6.4. Secure CDH cluster build for BB&T bank. Planning and sizing Hadoop clusters. Added capacity to live CDH 6.2 clusters. Cloud operations management (IBM and AWS). Big data pipeline architecture on AWS with Docker and Kubernetes. Experience with NFS, DHCP, TCP/IP, UDP, TFTP, NTP, Puppet, and Kickstart.

Experienced in automating deployments and system configuration, ensuring seamless integration and delivery processes. Applies advanced knowledge of cloud platforms and scripting to improve operational efficiency. Proven record of collaborating with teams to maintain high standards and achieve successful project outcomes.

Overview

16 years of professional experience
1 Certification

Work History

Staff DevOps Engineer – Big Data (Federal Sector)

ServiceNow Inc
03.2023 - Current
  • Support production Hadoop on Cloudera CDP 7.1.9, including building new clusters, capacity sizing, and troubleshooting jobs. Supported Kudu tables for Impala pipeline jobs. Automated maintenance and OS patching. Supported all production Spark Streaming jobs and pipelines, as well as Jenkins pipelines. Used Ansible, Puppet, Terraform, Docker, and Kubernetes for automation scripting. Created data replication strategies for rapid stand-up of new clusters. Monitoring with Grafana and Prometheus. Machine learning for anomaly detection and prediction of P1 alerts. Data analytics and deriving actionable business insights. Automated continuous integration / continuous deployment (CI/CD) for data pipelines. Daily work with HDFS, YARN, Impala, HBase, Spark, Kafka, RabbitMQ, Kudu, Redis, Kerberos, MariaDB, and Postgres.
  • Responsible for enforcing data governance policies in commercial and regulated big data markets.
  • Deployed ServiceNow Docker containers in Kubernetes and troubleshot issues.
  • Automated OS patching and software deployment using Ansible and pipeline jobs.
  • Resolved software defects using Scrum and GitHub.


Staff Systems Engineer

VISA
07.2021 - 02.2023
  • Worked on the Production Reliability Engineering team, which supports all of Visa's big data platforms: over 200 Hadoop clusters, including three large clusters of 2,000+ nodes. Managed all aspects of these Hadoop clusters, including monitoring, troubleshooting job failures, patching and maintenance, automation, and OS remediation. Issues ranged from missing blocks on the NameNode to KMS timeout problems. Maintained the clusters with Ansible Tower, which triggers automation jobs for patching and system changes. Developed scripts to monitor and compress log files, as well as tools for ease of use.

Big Data Engineer (Consultant)

IBM GBS
04.2019 - 07.2021
  • Mortgage Loan Co. (Client): Helped implement a big data pipeline project using IBM Analytics Engine (IAE) on IBM Cloud. Set up Cloud Object Storage (COS) buckets where the customer lands raw data. Deployed IBM Python scripts that pull and ingest data into IAE for processing and curation, then land it back in the conformed zone on COS. Implemented a JupyterHub instance configured for customer access to Hive over secure protocols. Implemented PGP encryption for data transfer on top of existing AES-256 encryption. Set up a Postgres database for metadata and profile analysis. Implemented automated HDP cluster builds using Python scripts.
  • Bank (Client): Installed two DEV Cloudera CDH clusters, including SSL security and Kerberos, and installed a production CDH cluster. Managed the DEV CDH cluster, including tuning Hive and HBase jobs. Set up the Python environment and configured monitoring and alerting for cluster health. Worked with big data developers to troubleshoot Python jobs. Configured user security. Installed NiFi, set up its environment, and helped troubleshoot and support NiFi jobs. Addressed architectural issues with the cluster by moving resources for better balance.

Lead DBA / Hadoop Administration

MetLife
11.2017 - 04.2019
  • Supported IBM BigInsights and Hortonworks Hadoop clusters. Installed and upgraded clusters in PROD, QA, DEV, and DR. Administered Solr, HBase, HDFS, and Spark. Set up Kerberos security and SSL. Installed an HDF NiFi cluster with SSL and Ranger. Set up a failover Ambari server with a replicated Postgres DB. Set up LDAP authentication and administered users and group permissions. Administered the Talena backup environment for Hadoop. Set up Ganglia and Ambari Metrics monitoring and dashboards. Worked with Hortonworks and IBM on all aspects of support. Configured new resources and hosts for horizontal scaling. Helped with break/fix support. Installed Kafka and Flume clusters and configured Flume jobs. Solr operations and tuning.

Hadoop Administrator

BB&T
08.2017 - 11.2017
  • Helped plan and implement a Cloudera 5.10 cluster in test and production. Installed Cloudera Manager and the Hadoop cluster with SSL security, including host-level SSL, data-in-motion SSL, and data-at-rest encryption. Set up Sentry rules and ACLs. Replicated the Postgres database. Added AD groups via Centrify to Linux hosts, Cloudera Manager, and Sentry. Prepared the OS for installation, including bash scripts to manage package installs. Set up monitoring and alerting via Cloudera Manager. Created keystores and truststores with Java SSL tools. Configured Kerberos and LDAP.

Technical Support Engineer

Hortonworks Inc
05.2016 - 08.2017
  • Provided technical support for Hortonworks HDP and HDF products. Direct troubleshooting for all Hortonworks customers on HDFS, YARN, MapReduce, Hive, Oozie, Spark, Kerberos, LDAP, Zeppelin, Ranger, Kafka, Cloudbreak, and NiFi. Worked with engineering teams to provide bug-fix solutions for software and Java-related issues. Provided weekend on-call and upgrade support. Created Knowledge Base and Hortonworks Community Connection articles (documentation).

Senior System Administrator

Imperva Inc
01.2015 - 05.2016
  • Unix administrator. Installed Cloudera Manager 5.3. Upgraded four VMware ESXi hosts to 5.5. Upgraded PingFederate to 7.3, then 8.0, for the SAML/SSO environment, and maintained PingOne users and applications. Built and configured a large-scale backup server for a Unix mainframe using OpenDedup. Installed Oracle 11i on Solaris and Linux hosts. Investigated new CMS applications and served as tech lead for the CMS project. Installed and configured a Puppet Enterprise server. Implemented a Hadoop cluster for R&D. Managed Veeam backups for VMware hosts. Installed and managed Oracle, MySQL, MSSQL, MariaDB, and Informix for the R&D team. Installed and managed CentOS satellite patching servers and Foreman (Puppet) servers. All aspects of Red Hat and CentOS automation. Deployed and managed Hortonworks HDP clusters for QA and DEV teams.

Unix System Engineer (Austin Team Lead)

eBay Corp
04.2012 - 12.2014
  • Worked as a 24/7 NOC systems engineer for production site operations. Monitored and supported a large-scale environment of over 40,000 Linux and Solaris systems across four main data centers. Managed Veritas VCS, Oracle, and HP systems. Supported Cloudera and Hortonworks Hadoop clusters of over 2,000 nodes across six sites. Provided scripting and automation. Provided in-depth knowledge of Linux OS and kernel issues, file system cleanup, and application installation. Provided troubleshooting and escalation of network- and SAN-related issues. Worked directly with Oracle support engineers as well as Dell and HP, including HP blade systems. Troubleshot kernel-level OS issues. Installed system packages and new software packages. Updated production Unix procedural documentation and process development. Supported Apache/Java (Tomcat) applications, providing system administration for live production issues. Provided real-time monitoring and troubleshooting of live production site issues and events. Worked on eBay's Cassini search engine as well as a previous iteration. Managed over 300 NetApp filers, including C-mode filer clusters. Puppet system management.
  • Team lead responsibilities: trained new candidates, managed scheduling, set up and ran team meetings including cross-team communications, reported team status to upper management, and provided technical training to the team on the Hadoop environment.

Sr System Administrator

RAND Corporation
08.2009 - 09.2012
  • Managed about 250 Unix systems, including Red Hat Linux and Sun Solaris, and also managed physical and virtual Windows 2003 servers. Deployed production systems. Managed SAS 9.2 analytic computing software for the programmers' group. Tech lead for the backup discovery project; implemented new backup infrastructure using Symantec NetBackup/PureDisk. Managed, installed, and upgraded RSA 6.1 systems. Supported Oracle business systems, including migrating the Apache Oasis server to a RHEL 5 platform. Configured Apache with LDAP modules. Helped develop the Atempo content archiving platform. Used Mondo archiving software for DR purposes. Managed VMware ESX 4 systems. Helped support a Hitachi AMS 2500 SAN. Tech lead for PGP deployment. Installed and configured a Triton disk array (16 TB SAS). Installed, configured, and managed HP BL380 blade servers. Configured NIC bonding and network kernel modules for HP devices. Configured NBU snapshot backups for ESX servers. Configured FC drivers for SAN storage. Updated firmware on all facets of HP blade and physical servers. Managed NFS, Samba, and CIFS shares with NIS and Centrify. Puppet configuration and system management.

Education

No Degree - Computer Science

Santa Monica College
Santa Monica, CA

Skills

  • Strong problem-solving and analytical skills
  • Excellent communication and people skills
  • Scripting languages
  • Kubernetes and container deployment
  • Ansible automation
  • Python and Shell scripting
  • Hadoop expert

Certification

  • AWS Associate Solutions Architect
  • AWS Associate Developer
  • Solr Training: Operations, Scaling and Tuning
  • Hortonworks Advanced Oozie and Spark training lab
  • Splunk Fundamentals 1 & 2 (2019)
  • Hortonworks Ambari and LDAP training lab
  • Red Hat Enterprise Linux 6 RHCE course (2011)
  • Custom Configuring, Managing and Maintaining Windows 2008 Server (2010)
  • Hitachi Content Platform and Operation for Users (2010)
  • VMware training course: Virtual Infrastructure with ESX Server, Install and Configure
  • Solaris System Performance Management SA-400, Sacramento, CA (March 2009)
  • Sun Solaris 8 System Admin Fast Track course, Colorado Springs, CO (July 2002)
  • NASA Unix Administration Certification
  • NASA IT Security Certification
  • Sun certification training, Santa Monica College, Santa Monica, CA (1996)
  • Oracle DBA class, Santa Monica College, Santa Monica, CA (2002)
