- 10+ years of experience in the design, development, and implementation of robust technology systems, with specialized expertise in Hadoop administration and Linux administration.
- 6+ years of experience in Hadoop administration and Big Data technologies on cloud platforms (Azure HDInsight, Google Cloud, and AWS), and 4 years of experience in Linux administration.
- Experience with the complete software design lifecycle, including design, development, testing, and implementation of moderately to highly complex systems.
- Hands-on experience installing, configuring, supporting, and managing Hadoop clusters using Hortonworks and Cloudera distributions.
- Hadoop cluster capacity planning, performance tuning, cluster monitoring, and troubleshooting.
- Design of Big Data solutions for traditional enterprise businesses.
- Excellent command of creating backup, recovery, and disaster recovery procedures, and implementing backup and recovery strategies for offline and online backups.
- Involved in benchmarking Hadoop/HBase cluster file systems under various batch jobs and workloads.
- Prepared Hadoop clusters for development teams working on POCs.
- Experience with minor and major upgrades of Hadoop and the Hadoop ecosystem.
- Experience monitoring and troubleshooting issues with Linux memory, CPU, OS, storage, and network.
- Hands-on experience analyzing log files for Hadoop and ecosystem services and finding root causes.
- Experience commissioning, decommissioning, balancing, and managing nodes, and tuning servers for optimal cluster performance.
- As an administrator, involved in cluster maintenance, troubleshooting, and monitoring, and followed proper backup and recovery strategies.
- Good experience setting up Linux environments: passwordless SSH, creating file systems, disabling firewalls, tuning swappiness, configuring SELinux, and installing Java.
- Good experience planning, installing, and configuring Hadoop clusters on Cloudera and Hortonworks distributions.
- Experience installing and configuring Hadoop ecosystem components such as Pig and Hive.
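The Linux environment setup mentioned above (passwordless SSH, swappiness, SELinux, firewall) can be sketched as a host-prep script. This is a minimal illustration, assuming a RHEL/CentOS host; file names, the swappiness value, and paths are illustrative, not taken from any specific cluster build.

```shell
# Minimal host-prep sketch for a new Hadoop node (assumed RHEL/CentOS host);
# names and values below are illustrative placeholders.

# 1. Passwordless SSH: generate a key pair once and trust it locally.
if command -v ssh-keygen >/dev/null 2>&1; then
    mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
    [ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N '' -f "$HOME/.ssh/id_rsa" -q
    cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
    chmod 600 "$HOME/.ssh/authorized_keys"
fi

# 2. Swappiness: stage a sysctl drop-in; a root user would copy it to
#    /etc/sysctl.d/ and run `sysctl --system` to apply it.
cat > ./99-hadoop.conf <<'EOF'
vm.swappiness = 1
EOF

# 3. SELinux and firewall (root-only steps, shown as comments):
#      setenforce 0                       # disable SELinux until reboot
#      systemctl disable --now firewalld  # stop and disable the firewall

echo "staged sysctl drop-in: $(cat ./99-hadoop.conf)"
```

In practice the staged drop-in and SSH public key would be pushed to every cluster node (for example with a configuration-management tool) rather than run by hand per host.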
- Hands-on experience installing, configuring, and managing Hue and HCatalog.
- Experience importing and exporting data with Sqoop between HDFS and relational database systems/mainframes, including transfers from an enterprise data lake.
- Experience importing and exporting logs using Flume.
- Optimizing performance of HBase, Hive, and Pig jobs.
- Hands-on experience managing and configuring ZooKeeper and ZKFC for NameNode failover scenarios.
- Hands-on experience with Linux administration activities on RHEL and CentOS.
- Experience deploying Hadoop 2.0 (YARN).
- Familiar with writing Oozie workflows and job controllers for job automation.
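The Oozie workflows mentioned above are defined as XML action graphs. The fragment below is a hypothetical sketch of a single-action workflow; the workflow name, script name, and properties are illustrative placeholders, not from an actual deployment.

```xml
<!-- Hypothetical Oozie workflow: one shell action with success/failure
     transitions; all names and paths here are placeholders. -->
<workflow-app name="nightly-cleanup" xmlns="uri:oozie:workflow:0.5">
    <start to="cleanup"/>
    <action name="cleanup">
        <shell xmlns="uri:oozie:shell-action:0.3">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <exec>cleanup.sh</exec>
            <file>${workflowRoot}/cleanup.sh#cleanup.sh</file>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Cleanup failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

Such a workflow is typically submitted with the `oozie job` CLI against a job.properties file that supplies `jobTracker`, `nameNode`, and similar variables, and an Oozie coordinator can then schedule it for recurring runs.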