Naresh Miriyalu

Dallas, TX

Summary

  • Results-driven IT professional with a progressive career spanning 12+ years of strong experience in Big Data ecosystems, including data engineering, Unix, SQL, shell scripting, Snowflake, and Python
  • Strong experience with Big Data technologies such as HDFS, the MapReduce framework, Hive, Pig, Spark, Sqoop, Flume, Kafka, Oozie, and Spark MLlib
  • Proficient in working with Unix, Oracle, SQL, and Big Data technologies (Hive, Sqoop, Spark, Pig, MapReduce, Flume, Storm)
  • Strong experience in Python and its related libraries
  • Proficient in AWS cloud technologies, including Amazon S3, EMR, AWS Glue, and KMS
  • Proficient in all phases of the SDLC (analysis, design, development, and deployment); highly competent in gathering user requirements and converting them into software requirement specifications using object-oriented techniques
  • Strong experience with JMS technologies such as IBM WebSphere MQ and Kafka
  • Working experience with DevOps processes, including Jenkins for CI/CD and GitHub for development
  • Strong experience in distributed server operations and incident, change, and problem management
  • Skilled at working on different platforms such as Big Data, Linux, and cloud (Amazon Web Services)
  • Able to work independently as well as in a team environment on highly complex issues
  • Leadership skills in leading teams and project delivery, working collaboratively and dealing effectively with all teams, including business stakeholders

Overview

12 years of professional experience

Work History

Senior Data Engineer

Apple
04.2022 - Current
  • Implement data-driven products and supply chain integration solutions to enhance business value using Apache Spark
  • Build Spark data pipelines for job-processing queries originally written in PL/SQL
  • Convert existing PL/SQL jobs to PySpark to handle huge volumes of data
  • Integrate pipelines with the AWS S3 platform, reading input data from and writing results back to S3
  • Perform unit testing of PySpark code, running test cases against each piece of functionality
  • Perform performance testing and tune code to improve job performance
  • Design and implement effective database solutions and models to store and retrieve data
  • Convert existing PL/SQL jobs to Snowflake scripts to handle huge volumes of data
  • Collaborate on ETL (Extract, Transform, Load) tasks, maintaining data integrity and verifying pipeline stability
  • Integrate Snowflake scripts with Airflow for job scheduling
  • Participate in production software deployments, working closely with team members to ensure a smooth transition of system changes into production environments
  • Collaborate on insights with other data engineers, business analysts, and partners
  • Technologies: Spark, Python, SQL, PL/SQL, AWS S3, AWS EMR, Oracle, Snowflake.

Senior Data Engineer

Wells Fargo
12.2017 - 03.2022
  • Worked with huge volumes of data, using the Big Data ecosystem (HDFS, Hive, Spark, Kafka) to derive business intelligence
  • Executed manual ingestions in case of job failures after the batch interval
  • Analyzed data, uncovered information, derived insights, and proposed data-driven strategies
  • Worked with Hadoop architecture and its components, such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, and the MapReduce programming paradigm
  • Implemented Apache Pig scripts to load data into Hive and used the Hue interface for loading data into HDFS and querying it
  • Used the Talend framework to implement ETL, sourcing data from different upstream databases and ingesting it into the EDL platform
  • Processed data using Spark jobs written in Java and Python
  • Worked with various HDFS file formats such as Avro, Parquet, and ORC, and compression codecs such as Snappy
  • Imported data using Sqoop from traditional RDBMSs such as Oracle, MySQL, and Teradata into the Hadoop environment
  • Performed data migration activities from Oracle to the Hadoop platform (HBase)
  • Worked on the Cloudera distribution, as all applications were hosted on the Cloudera platform
  • Applied various techniques to improve batch runtime performance
  • Expertise in data structures, distributed computing, and manipulating and analyzing complex, high-volume data from a variety of internal and external sources
  • Environment: Linux
  • Technologies: Big Data ecosystem, Spark, Hive, SQL, Unix, Python, and Shell Scripting.

Senior Data Engineer

Wells Fargo
12.2017 - 03.2022
  • Extracted data from an Oracle database using Sqoop and ingested it into an HBase (NoSQL) database
  • Processed real-time streaming data using Kafka and Spark Streaming
  • Set up authentication and authorization mechanisms, integrating Kafka with Kerberos
  • Managed topics, partitions, and replication to ensure efficient data distribution in Kafka
  • Conducted performance tuning and optimization of Kafka clusters for low latency and high throughput
  • Scheduled jobs using Oozie to run scripts at each time frame, pulling data from the NoSQL database for the relevant analysis
  • Created Sqoop jobs to automate Sqoop ingestion
  • Designed and implemented effective database solutions and models to store and retrieve data
  • Wrote complex Hive queries to calculate parameters that help categorize transactions
  • Worked on the Cloudera distribution, as all applications were hosted on the Cloudera platform
  • Analyzed complex data and identified anomalies, trends, and risks to provide useful insights and improve internal controls
  • Environment: Linux
  • Technologies: Big Data ecosystem, Spark, Hive, SQL, Unix, Python, Shell Scripting, Sqoop, and Oozie.

Senior Software Engineer

Neuberger Berman
02.2015 - 07.2017
  • Analyzed Oracle packages, procedures, and functions related to issues reported by the business and explained the underlying calculations to business users
  • Spotted defects in code, raised defect CRs accordingly, and worked on fixes
  • Managed Unix script issues by analyzing, modifying, and tuning scripts for the production environment
  • Handled issues with various uploaders and exporters that import security issuer information, security ratings, security analytics, market prices, FX rates, and benchmarks from various market vendor systems
  • Participated in release management, ensuring controlled version changes of the software product
  • Analyzed shell scripts and took the necessary steps to resolve job failures
  • Environment: Linux
  • Technologies: Oracle, SQL, PL/SQL, Unix, Shell Scripting.

Associate Software Engineer

Deutsche Bank
08.2011 - 02.2015
  • Modified PL/SQL code to resolve issues with compliance reports and calculations
  • Automated ad hoc report generation and delivery to clients using Core Java
  • Worked closely with US clients to deliver projects using Java and PL/SQL
  • Created job sequences in Control-M to meet client requirements
  • Environment: Linux
  • Technologies: Oracle, SQL, PL/SQL, Unix, Shell Scripting.

Associate Software Engineer

Deutsche Bank
08.2011 - 02.2015
  • Analyzed customer requirements and the business processes involved
  • Developed Core Java applications using multithreading and the Collections framework
  • Developed a stand-alone web application using the Spring framework
  • Used the Hibernate framework to communicate with databases
  • Created Control-M jobs by generating XML files to help support teams
  • Environment: Linux
  • Technologies: Oracle, SQL, PL/SQL, Unix, Shell Scripting and Java.

Education

PG Program in Big Data Engineering

BITS Pilani & Upgrad
Bangalore, India
11.2019

Master of Computer Applications (MCA)

Sikkim Manipal University
Bangalore, India
08.2014

Bachelor of Computer Applications (BCA)

National Degree College
Bangalore, India
05.2011

Skills

  • Big Data ecosystem (Spark, Kafka, Hive, Sqoop, MapReduce framework, Hadoop, distributed processing)
  • Python programming
  • SQL, PL/SQL
  • ETL / data warehousing concepts
  • AWS technologies
  • Data analysis
  • Technology leadership across work streams
  • Excellent communication
  • Attention to detail

Certifications And Awards

  • ITIL v3 Certified
  • "Championship" award for innovation at Wells Fargo
  • "Cash prize" award for winning a capital markets gaming quiz at Wells Fargo
