Bharadwaj Nalluri

Charlotte, NC

Summary

Organized and dependable candidate, successful at managing multiple priorities with a positive attitude and willing to take on added responsibilities to meet team goals.

Detail-oriented team player with strong organizational skills and the ability to handle multiple projects simultaneously with a high degree of accuracy.

Seeking to obtain and maintain a full-time position that offers professional challenges and draws on interpersonal, time-management, and problem-solving skills.

Overview

5 years of professional experience
1 certificate

Work History

Wells Fargo

Senior Big Data Engineer
06.2021 - Current

Job overview

  • PyFARM is a Python-based finance risk and modeling platform created to onboard multiple models such as Auto, Loans, and Credit
  • Working on big-data infrastructure for batch processing as well as real-time processing
  • Responsible for building scalable distributed data solutions using Spark, Python, and Cloudera HWX
  • Worked with the team on developing backend Django APIs to run different machine learning algorithms
  • Gave users the ability to launch JupyterHub for an easier development process
  • Developed end-to-end archiving and retrieval of data from HDFS and NAS/SAN to S3 bucket storage
  • Automated jobs using the AutoSys tool (a scheduler similar to Airflow)
  • Created and scheduled jobs using Airflow
  • Built a series of Spark applications and Hive scripts to produce the various analytical datasets needed by digital marketing teams
  • Worked extensively on building and automating data ingestion pipelines, moving terabytes of data from existing data warehouses to the cloud
  • Worked extensively on fine-tuning Spark applications and providing production support for various pipelines running in production
  • Worked closely with business and data science teams, ensuring all requirements were translated accurately into data pipelines.
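The archiving work above comes down to picking files older than a retention window and moving them from HDFS/NAS to cheaper S3 storage. A minimal sketch of the selection step (paths, dates, and the one-year retention window are hypothetical; the actual S3 upload is omitted):

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)  # hypothetical retention window

def select_for_archive(files, now):
    """Given (path, last_modified) pairs, return the paths older than the
    retention window, i.e. candidates to move from HDFS/NAS to S3."""
    return [path for path, mtime in files if now - mtime > RETENTION]

now = datetime(2023, 1, 1)
files = [
    ("/data/loans/2021/part-0001.parquet", datetime(2021, 6, 1)),
    ("/data/loans/2022/part-0002.parquet", datetime(2022, 11, 1)),
]
# Only the 2021 file is older than one year, so only it is selected.
print(select_for_archive(files, now))
```

In the real pipeline the returned paths would feed an upload step (e.g. a distcp or boto3 call) scheduled by AutoSys or Airflow.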

Capital One

AWS Big Data Engineer
01.2020 - 06.2021

Job overview

  • The goal of this project was to build a modernized platform for Capital One partners to meet their analytical needs and generate financial reports accessible via the Capital One Partnership Portal, which also serves future Capital One partners.
  • Participated in the system development life cycle from requirements analysis through system implementation.
  • Performed advanced engineering in the configuration, management, and deployment of AWS cloud environments.
  • Performed first-level incident response and service resolution for cloud systems.
  • Bulk-loaded data from the external stage (AWS S3) and the internal stage into Snowflake using the COPY command.
  • Loaded data into Snowflake tables from the internal stage using SnowSQL.
  • Used COPY, LIST, PUT, and GET commands to validate internal stage files.
  • Imported and exported data between the internal stage (Snowflake) and the external stage (AWS S3).
  • Wrote complex SnowSQL scripts in the Snowflake cloud data warehouse for business analysis and reporting.
  • Used Snowpipe for continuous data ingestion from the S3 bucket.
  • Maintained the overall data lake system to meet client needs, including design, configuration, development, testing, and release-to-production activities
  • Developed a data ingestion, validation, and transformation framework using Hadoop, Python, and PySpark.
  • Monitored the platform using the Splunk monitoring tool, creating dashboards and setting up alerts according to requirements
  • Wrote transformation queries per the business rules document
  • Built Airflow jobs to run ingestion and transformation jobs
  • Developed Spark-based pipelines using Spark DataFrame operations to load data to the EDL, using EMR for job execution and AWS S3 as the storage layer
  • Developed and scheduled Spark applications in Databricks using PySpark for data extraction, transformation, and aggregation
  • Performed data analytics using PySpark on the Databricks platform
  • Worked on the full spectrum of data engineering pipelines: data ingestion, data transformation, and data analysis/consumption
  • Developed AWS Lambdas orchestrated with Step Functions to run data pipelines
  • Automated infrastructure setup, including launching and terminating EMR clusters
  • Implemented a continuous delivery pipeline with Bitbucket and AWS AMIs
  • Used Amazon Web Services including Amazon Elastic Compute Cloud (EC2) and Amazon Elastic MapReduce (EMR) for computational tasks and Simple Storage Service (S3) for storage
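The Snowflake load path described above (PUT a file to an internal stage, validate with LIST, COPY into a table, and Snowpipe for continuous S3 ingestion) might look roughly like this; the stage, table, pipe, and file names are hypothetical:

```sql
-- Stage a local file into a (hypothetical) internal stage, then validate it.
PUT file:///tmp/partners.csv @my_int_stage;
LIST @my_int_stage;

-- Bulk-load the staged file into a target table.
COPY INTO partner_reports
  FROM @my_int_stage/partners.csv
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);

-- Continuous ingestion from S3 via Snowpipe (external S3 stage assumed).
CREATE PIPE partner_pipe AUTO_INGEST = TRUE AS
  COPY INTO partner_reports FROM @my_ext_s3_stage;
```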

Trimble

Hadoop/Spark Developer
04.2014 - 08.2015

Job overview

  • The Enterprise Cloud Data Platform (ECDP) is a firm-wide cloud data lake platform used to ingest, validate, and transform on-premises data to the cloud using AWS
  • Environment: Spark, Glue, Python, Redshift, Athena, AWS S3, Airflow, Bitbucket
  • Created proofs of concept for innovative new solutions.
  • Investigated new and emerging software applications to select and implement administrative information systems.
  • Developed design documents considering all possible approaches and identifying the best among them
  • Responsible for managing data coming from different sources
  • Developed business logic using Python
  • Strong analytical, communication, and problem-solving skills; keen to learn new technical and functional skills

Education

University of the Cumberlands
Williamsburg, KY

Master of Science in Computer Science

JNTUA
India

Bachelor of Science in Computer Science Engineering

Skills

  • Big Data - HDFS, MapReduce, Hive, Sqoop, Spark, PySpark, Snowflake, Spark Streaming, Kafka, Databricks
  • Cloud - AWS
  • Programming - Python
  • Monitoring Tools - Splunk, Datadog
  • Scripting - Shell, Python
  • Source Control - GitHub, Bitbucket
  • Databases - Oracle DB
  • IDE - IntelliJ, Eclipse, Visual Studio Code
  • Scheduling Tools - AutoSys, Airflow, Arrow
  • DevOps - Jenkins, UCDeploy, SonarQube, JFrog

Accomplishments

  • Collaborated with a team of 20 members on the development of PDF (Partner File Delivery).
  • Developed an in-house auditing tool in Python, hosted on Lambda, that audits the certificates installed across different cloud accounts, sends the data to Datadog, and shows expiring certificates in a Datadog dashboard.

Certification

AWS Certified Solutions Architect

Additional Information

  • Good experience with JSON, CSV, and log data processing using Spark.
  • Worked on multiple big-data production implementations and proposals, and provided POCs on different client engagements across the big-data stack.
  • Monitored the platform using the Splunk monitoring tool, creating dashboards and setting up alerts according to requirements.
  • Proactive in learning and leveraging emerging technologies to enhance productivity.
  • Pivotal in analyzing business and technical requirements and simultaneously deriving testing requirements.

Languages

English - Full Professional
Telugu - Native or Bilingual
Hindi - Elementary
Tamil - Limited Working

Quote

It is never too late to be what you might have been.
George Eliot

Timeline

Senior Big Data Engineer
Wells Fargo
06.2021 - Current
AWS Big Data Engineer
Capital One
01.2020 - 06.2021
Hadoop/Spark Developer
Trimble
04.2014 - 08.2015
University of the Cumberlands
Master of Science in Computer Science
JNTUA
Bachelor of Science in Computer Science Engineering

Hobbies - Volleyball, Cricket, DJing, Browsing

  • Love to play volleyball on weekends.
  • Have my own cricket team in Charlotte, named the Deccan Chargers.
  • DJing - night outs with close buddies.
  • Browsing - interested in learning new tech stacks and politics.