
Srinivasulu Putturu

BIG DATA ENGINEER
NOVI, MICHIGAN

Summary

Big Data Engineer seeking a position that applies experience in product development, big data analytics, data mining, cloud and Hadoop architecture, Azure DevOps, and Unix/Linux, along with research, training, and leadership skills, to help an organization increase the productivity and quality of its services. Effective communicator with excellent relationship-building and interpersonal skills; follows effective time management and goal-driven approaches. Strong analytical, problem-solving, and organizational abilities, and a good team player. Proficient at developing database architectural strategies through the modeling, design, and implementation stages. Responsive expert experienced in monitoring database performance, troubleshooting issues, and optimizing database environments, with a deep understanding of database technologies and systems. Equally confident working independently and collaboratively. Detail-oriented Big Data Engineer who designs, develops, and maintains highly scalable, secure, and reliable data structures, accustomed to working closely with system architects, software architects, and design analysts to understand business and industry requirements and develop comprehensive data models.

Overview

9 years of professional experience
1 language

Work History

Big Data Engineer

ADDON TECHNOLOGIES INC
03.2023 - Current
  • Total 10+ years of professional IT experience, including 7+ years of Hadoop administration on Cloudera (CDH), CDP, Hortonworks (HDP), vanilla Hadoop, and MapR distributions, with strong experience in AWS, Kafka, Elasticsearch, DevOps, Linux administration, and Oracle DBA & RAC.
  • Hands-on experience in installation, configuration, supporting, and managing of Hadoop clusters, MongoDB, Redis, and Oracle DBA & RAC.
  • Design and manage data on Hadoop clusters; end-to-end performance tuning of Hadoop clusters.
  • Planning and implementation of data and storage management solutions in Azure (SQL Azure, Azure Files, Queue Storage, Blob Storage); implementing PowerShell scripts for runbooks.
  • Working experience with Windows Active Directory and LDAP.
  • Working knowledge and administrator experience of Continuous Integration strategies and tools (such as Jenkins).
  • Ownership of Azure SQL Server DB deployment; managed continuous integration and continuous deployment.
  • Created build and release definitions for continuous integration and continuous deployment pipelines.
  • Resolved Merge Conflicts, configured triggers and queued new builds within the release pipeline.
  • Excellent team player with problem-solving and trouble-shooting capabilities.
  • Working knowledge of container technologies (Docker, Kubernetes).
  • Building and maintaining Docker container clusters managed by Kubernetes on GCP, using Linux, Bash, Git, and Docker.
  • Interaction and coordination within a global team on a renovated, greenfield cross-asset-class risk, P&L, and scenario system (“RICE”), developed in Scala.
  • Develop new and existing modules in Scala while working with developers across the globe
  • Work to continuously improve web development processes and practices
  • Experienced in various SDLC project phases: requirement/system gathering, requirement/system analysis, functional specification, business logic, design, layered architecture, test plans, coding, code review, testing, performance tuning, documentation, implementation, and maintenance.
  • Experience in installation, configuration, maintenance, and monitoring of multi-node Hadoop clusters.
  • Installation of Storm and Kafka on a multi-node cluster; wrote a Kafka producer to collect events from REST APIs and push them to the broker.
  • Proficient in cloud-based software development and management tools (Ubuntu, Chef/Puppet, Jenkins, Ruby, Zabbix, Nagios, New Relic, AWS).
  • Created topics on the Desktop portal using Spark Streaming with Kafka and Zookeeper
  • Architected and designed big data lakes on IaaS and PaaS cloud platforms.
  • Hands-on experience using Cloudera and Hortonworks distributions; familiar with MapR.
  • Good knowledge of Hadoop ecosystem components.
  • Work together with development teams to improve the overall development productivity.
  • Create, update and manage shell scripts in a Red Hat environment, including operational scripts as well as monitoring scripts
  • Developing templates or scripts to automate everyday developer or operations functions
  • Work closely with the software engineering and product management teams to design, deliver and manage our services with high uptime.
  • Manage AWS infrastructure and strategic vendor relationships, including development firms; run reduce routines against very large data sets; monitor Hadoop cluster job performance and capacity planning.
  • Performing Benchmark (Application and Service) and performance testing on clusters
  • Hands-on experience setting up Hadoop environments on Ubuntu, CentOS, and macOS.
  • Experienced in installing Hadoop in the Amazon cloud (EC2)
  • Performing day-to-day Hadoop operations (HDFS, MapReduce, HBase, Hive, etc.), including deployment and debugging of job issues; single-handedly administering and supporting multiple CDH clusters.
  • Provide review and feedback for existing physical architecture, data architecture
  • Designed the next-generation data architecture for unstructured data, debugging and analyzing the performance of many MapReduce jobs to realize that architecture; the role required a deep understanding of the Hadoop ecosystem.
  • Debug and solve issues with Hadoop as an on-the-ground subject matter expert, from patching components to post-mortem analysis of errors.

Technical Specialist

FIS
04.2022 - 12.2022
  • Administered a 50+ node CDP cluster serving internal and external FIS Global Services.
  • CDP cluster stack: HDFS, YARN, MapReduce, Hive on Tez, Hue, Sqoop, Flume, Kafka, Spark, and Oozie; used Spark with Kafka for real-time streaming data.
  • Expertise on setting up Hadoop security, data encryption and authorization using AD Kerberos, TLS/SSL and Ranger policies respectively.
  • Practical knowledge on functionalities of every Hadoop daemon, interaction between them, resource utilizations and dynamic tuning to make clusters available and efficient.
  • Strong knowledge on Hadoop HDFS architecture and Map-Reduce framework.
  • Experience in administering the Linux systems to deploy Hadoop cluster and monitoring the cluster using Grafana and Splunk.
  • Experience in performing backup and disaster recovery of Name node metadata and important sensitive data residing on cluster.
  • Experience in performing minor and major CDP upgrades.
  • Experience in performing commissioning and decommissioning of data nodes on Hadoop CDP cluster.
  • Strong knowledge in configuring Name Node High Availability and Name Node Federation.
  • Familiar with writing Oozie workflows and job controllers for job automation: shell, Hive, and Sqoop.
  • Handling and resolving client issues remotely.

Technical Specialist

HCL Technologies
06.2020 - 03.2022
  • Administered a 200+ node CDP cluster serving internal and external PayPal groups.
  • Hadoop ecosystem: HDFS, YARN, MapReduce, Hive, Hue, Sqoop, Flume, Kafka, Spark, Oozie, NiFi, and Cassandra.
  • Experience on Ambari (Hortonworks) for management of Hadoop Ecosystem.
  • Expertise on setting up Hadoop security, data encryption and authorization using Kerberos, TLS/SSL and Apache Sentry respectively.
  • Extensive hands-on administration with Hortonworks.
  • Practical knowledge on functionalities of every Hadoop daemon, interaction between them, resource utilizations and dynamic tuning to make clusters available and efficient.
  • Designed and provisioned virtual networks in AWS using VPC, Subnets, Network ACLs, Internet Gateway, Route Tables, and NAT Gateways.
  • Strong knowledge on Hadoop HDFS architecture and Map-Reduce framework.
  • Experienced in developing MapReduce programs using Apache Hadoop for working with Big Data.
  • Experience in administering the Linux systems to deploy Hadoop cluster and monitoring the cluster using Nagios and Ganglia.
  • Experience in performing backup and disaster recovery of Name node metadata and important sensitive data residing on cluster.
  • Architected and implemented automated server provisioning using Puppet.
  • Experience in performing minor and major upgrades.
  • Experience in performing commissioning and decommissioning of data nodes on Hadoop cluster.
  • Strong knowledge in configuring Name Node High Availability and Name Node Federation.
  • Familiar with writing Oozie workflows and job controllers for job automation: shell, Hive, and Sqoop.
  • Handling and resolving client issues remotely.

Hadoop Administrator

BOSCH
06.2015 - 08.2019
  • Administered a 100+ node cluster serving a major insurance company group, both internally and externally.
  • Working knowledge of build automation and CI/CD pipelines
  • Work with others to monitor performance of application, create automated system administration monitoring and alert systems and respond to interruption of service events for running service.
  • Experience with Spark and with data stores such as Redis and Cassandra.
  • Worked with various NoSQL databases
  • Develop automation framework for public cloud infrastructure deployments
  • Strong knowledge of Java and other JVM languages; used Spark for real-time streaming of data with Kafka.
  • Provide accurate and timely management status of work activity
  • Establish strong working relationship with business, teammates, and others within organization
  • Provide accurate and timely reporting of work hours
  • Work directly with external vendors to resolve issues and perform technical tasks
  • Development, implementation, and administration within the Big Data platform
  • Raise opportunities for enhancements and process improvements
  • Design, develop, test, debug, maintain, and document system components
  • Basic knowledge of, or eagerness to learn, new technologies
  • Be able to adjust quickly to changes
  • Solid knowledge of big data frameworks: Hadoop, HDFS, Apache Spark, Hive, MapReduce, Sqoop, etc.
  • Excellent communication skills, with the ability to work and contribute to a collaborative working culture
  • Demonstrable knowledge of Continuous Integration/Test Driven Development and Code Analysis including its application within the software development lifecycle
  • Ability to present complex information in an understandable and compelling manner
  • Extensive knowledge and experience in developing, profiling and maintaining multi-threaded/asynchronous applications
  • Taking backups to a cloud storage account using CloudBerry Cloud Storage Tools.
  • Configuring site-to-site VPN connectivity.
  • Configured Windows failover clusters by creating a quorum for file sharing in the Azure cloud.
  • Converted existing virtual machines from Standard to Premium storage accounts; patched and validated virtual machines in Azure.
  • Monitored Azure infrastructure through System Center Operations Manager (SCOM).
  • Coordinated with Microsoft to increase subscription limits, such as core limits and Cloud Services.
  • Handling and resolving client issues remotely.

Education

MBA - Human Resources Management

PRIYADHARSHINI COLLEGE OF ENGINEERING AND TECH
NELLORE
06.2008

Bachelor of Science - Computer Science And Programming

SRI SWANADHRABARTHI DEGREE COLLEGE
Guduru, India
07.2005

Skills

  • Git Version Control
  • Python Programming
  • Data Warehousing
  • NoSQL Databases
  • Tableau Reporting
  • Kubernetes Deployment
  • Data Migration
  • Spark Development
  • Performance Tuning
  • Amazon Web Services
  • Hadoop Ecosystem
  • Jenkins Automation
  • Cloudera Distribution
  • MapReduce Development
  • Hortonworks Distribution
  • SQL and Databases
  • Database Administration
  • Risk Analysis
  • Backup and recovery
  • RDBMS
  • Data Analysis
  • Amazon Redshift
  • Advanced data mining
  • Data Acquisitions
  • Data Aggregation Processes
  • Data repositories
  • Enterprise Resource Planning Software
  • Advanced analytics
  • Relational databases
  • Database Design
  • Apache HBase
  • SQL Programming
  • Metadata Management
  • Big Data Analytics
  • Google Cloud Platform
  • Data Security
  • Apache Kafka
  • Stream Processing
  • Data Lake Management
  • Apache Flink
  • Real-time Processing
  • Docker Containers
  • Data Pipeline Design

Certifications

  • Azure Administrator AZ-104
  • AWS Solution Architect Associate
  • AZ-300 Microsoft Azure Architect
  • Oracle Certification OCA

Work Availability

Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday
Morning, afternoon, evening

Software

Hadoop Admin

Oracle DBA

Unix

Cloud Technologies

Oracle DBA & RAC

AWS, Azure, GCP

SPARK

KAFKA

Work Preference

Work Type

Full Time

Work Location

On-Site, Remote, Hybrid

Timeline

Big Data Engineer

ADDON TECHNOLOGIES INC
03.2023 - Current

Technical Specialist

FIS
04.2022 - 12.2022

Technical Specialist

HCL Technologies
06.2020 - 03.2022

Hadoop Administrator

BOSCH
06.2015 - 08.2019

MBA - Human Resources Management

PRIYADHARSHINI COLLEGE OF ENGINEERING AND TECH

Bachelor of Science - Computer Science And Programming

SRI SWANADHRABARTHI DEGREE COLLEGE