Somasekar Makireddy

Big Data Engineer/Hadoop Developer
Fremont, CA

Summary

Versatile Big Data Engineer/Hadoop Developer with 10+ years of experience in modern data engineering and in the analysis, design, development, testing, and implementation of projects across the SDLC; a proactive team member. Self-directed, with expertise in Big Data technologies (Apache Spark, Hive, HDFS), AWS (EMR, S3, SNS, Lambda, Secrets Manager, EC2, DynamoDB, Redshift), and programming with Python, Java, and Oracle (SQL, PL/SQL). Talented at cultivating a collaborative and supportive team environment.

Overview

12 years of professional experience

Work History

AWS Solution Developer/Data Engineer

SRS Consulting Inc, Experian PLC
Santa Clara, CA
2020.07 - Current
  • Client: Experian
  • Designed, developed, and operated scalable, performant data pipelines on distributed data processing platforms using AWS Cloud technologies.
  • Developed Spark applications using EMR and other AWS cloud technologies.
  • Developed and implemented field-level encryption and encryption at rest on S3 buckets holding PII data.
  • Added lifecycle rules and tags to maintain time-to-live on S3 objects containing PII data.
  • Wrote Java REST API solutions to retrieve OVPIN from the Omni View API client.
  • Developed and implemented a solution to maintain and persist job execution details in a DynamoDB table.
  • Designed and developed Athena tables to analyze data and perform unloads when clients want to reprocess the same data.
  • Developed and implemented an email solution to notify on job start, completion, or failure status.
  • Performed a POC on Airflow, creating custom operators to copy data from S3 to Snowflake/Redshift and unload data from Snowflake to S3.

Hadoop Developer

SRS Consulting Inc, Cisco Systems Inc
Santa Clara, CA
2019.08 - 2020.07
  • Project/Team: Customer Success – Health Index
  • Designed and defined rules for Health Index metrics covering customer utilization/usage of products.
  • Designed the data model for a centralized rule engine that lets users, CSEs, and business owners change the rules/formulae used to calculate Health Index metrics.
  • Involved in all critical solution designs as the SME for utilization/usage.
  • Created the physical data model for Customer Health Index across Finance, Quality, Sentiment, and Utilization/Usage.
  • Performed ETL processes on HDFS (Hadoop, Spark) and Snowflake/Teradata databases and created workflows.
  • Created analytics dashboards and reporting applications using Python and PySpark.
  • Wrote business rules and processing scripts for the data foundation track using Hive and Spark for consumption by analytics tools.
  • Implemented data science and machine learning models to derive overall customer health and health by individual product family.
  • Responsible for unit testing and demoing UAT to the business team.

Solutions Developer

MapR Technologies
Santa Clara, CA
2019.01 - 2019.08
  • Provided professional services to MapR customers by designing and building custom big data solutions, on-premises or in the cloud, using the technologies below.
  • Built distributed solutions using Big Data technologies and ecosystem components such as HDFS, Hive, and Sqoop.
  • Designed and built solutions using the Spark Core APIs and Spark DataFrame/SQL; tested, deployed, and maintained them.
  • Designed and built solutions using the NoSQL databases HBase and MapR-DB.
  • Designed and built solutions using the messaging systems Kafka and MapR Streams.
  • Created Extract, Transform & Load (ETL) jobs to move data from Informatica, Oracle, and MySQL using SQL and Sqoop and ingest it into HDFS.
  • Defined and applied appropriate data acquisition and consumption strategies for given technical scenarios using technologies such as Sqoop, Kafka, and Flume.
  • Met with customers to understand their requirements, then designed and developed proof-of-concept (PoC) implementations using the technologies listed above.
  • Worked with internal MapR teams to build solutions based on new technologies/APIs, cross-trained within MapR-PS, and gave feedback to product teams.

Hadoop Developer/ETL Developer

Tata Consultancy Services, Cisco Systems Ltd.
Bangalore, India / San Jose, CA
2011.08 - 2019.01
  • Worked on multiple projects for Order Management and Accounts Receivable (OM/AR) and Webex.
  • Built distributed, reliable, and scalable data pipelines to ingest and process data in near real time.
  • Loaded data into HDFS using Sqoop based on business requirements; cleansed data and filled in missing values.
  • Created Spark jobs using the Spark Core APIs and Spark DataFrame/SQL and stored summarized data files in the reporting location.
  • Wrote complex, optimized HQL queries using aggregations, ranking, and windowing functions to generate normalized data for reporting.
  • Scheduled jobs using the Tidal (TES) job scheduler.
  • Troubleshot and debugged long-running jobs and performance issues.
  • Responsible for requirements gathering, data modeling, developing new components, and enhancing existing components.
  • Wrote PL/SQL procedures, functions, and packages; created/modified database tables, triggers, and collections.
  • Wrote complex SQL queries and tuned SQL and PL/SQL code.
  • Implemented Apache Kafka consumers for various topics to receive header and booking information.
  • Designed and modified ETL (Informatica) mappings and loaded data.

Developer/Support Executive

Oracle Financial Services
Chennai
2010.11 - 2011.07
  • Client: Citibank; Project: TPS 2 (Treasury Product Systems)
  • Contributed to Database Design and Development.
  • Analyzed UNIX shell scripts (from TPS 1) and wrote equivalent Oracle packages in the TPS 2 system.
  • Wrote database PL/SQL procedures, Functions, Packages.
  • Created/modified database tables, triggers, and collections.
  • Interacted with clients on new requirements, design, and development.
  • Wrote complex SQL queries and tuned SQL and PL/SQL code.
  • Wrote test cases and contributed to unit testing.

Software Engineer

i Infotech Ltd
Chennai
2008.06 - 2010.11
  • Client: AIG MEMSA; Project: Premia 9 and Premia 10 – KENINDIA data migration
  • Responsible for analyzing specifications provided by client for new business requirements.
  • Developed PL/SQL procedures, packages, subprograms, cursors, and triggers to implement new requirements.
  • Responsible for design and development, testing and implementation of Oracle Forms and Reports.
  • Developed report SQL queries to pull data from various database tables; wrote triggers, views, and synonyms.
  • Developed procedures, functions, and packages for data migration and validation.
  • Responsible for writing test cases and unit testing.

Education

Master of Computer Applications (MCA)

Sri Venkateshwara University
Tirupati, AP, India

Bachelor of Science (B.Sc.) - Computer Science

Sri Venkateshwara University
Tirupati, AP, India

Skills

Operating Systems : Linux, Windows, MacOS

Quote

The price of anything is the amount of life you exchange for it.
Henry David Thoreau
