Sound knowledge of and experience in the Systems Development Life Cycle (SDLC) and Agile (Scrum) software development.
Designed and implemented BI reporting, predictive analytics, and data mining using GLMs (Generalized Linear Models) for linear, non-linear, and logistic regression via R, Python, Mahout, Solr/Elasticsearch, and HDFS, with extensive use of CART models.
Assisted in the creation, verification, and publishing of metadata, including identification, prioritization, definition, and data lineage capture for Key Data Elements (KDE) and Important Data Elements (IDE).
Hands-on experience with banking deposits, credit cards, and home loans.
Excellent experience in data mining, querying and mining large datasets to discover transaction patterns and examine financial and healthcare data.
Experience in database testing: checking field-size validation, check constraints, and stored procedures, and cross-verifying field sizes defined within the application against metadata.
Experience in developing data applications with Python in Linux/Windows and Teradata
environments.
Experience in monitoring ongoing data quality and troubleshooting complex data quality problems.
Hands-on experience developing and maintaining dashboards and reports using Tableau.
Experience in developing scripts using advanced Teradata techniques such as the ROW_NUMBER and RANK window functions (see the sketch at the end of this summary).
Experience automating Teradata SQL scripts in UNIX.
Automated SQL scripts using Jenkins.
Tuned queries and troubleshot errors in the campaign flowcharts.
Good exposure to mainframe systems and knowledge of handling COBOL files.
Experienced in conducting gap analysis to identify the delta between the current and potential performance of existing software applications.
Experienced in researching, analyzing, and conveying complex technical information to diverse end users at all levels. Solutions-driven strategist who consistently improves efficiency, productivity, and the bottom line.
Strong experience using Agile/Scrum methodology and well-versed in writing user stories.
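Illustrative sketch for the Teradata window-function work referenced above: a ROW_NUMBER() dedup query run through the teradatasql Python driver. The table and column names (edw.account_txn, account_id, txn_date) are hypothetical placeholders, not any employer's actual schema.

```python
# Minimal sketch, assuming the teradatasql driver and a hypothetical
# edw.account_txn table: keep the most recent row per account using
# the ROW_NUMBER window function.
import teradatasql

DEDUP_QUERY = """
SELECT account_id, txn_date, balance
FROM (
    SELECT account_id, txn_date, balance,
           ROW_NUMBER() OVER (
               PARTITION BY account_id
               ORDER BY txn_date DESC
           ) AS rn
    FROM edw.account_txn
) t
WHERE rn = 1
"""

def latest_row_per_account(host, user, password):
    """Return the latest row per account (placeholder schema)."""
    con = teradatasql.connect(host=host, user=user, password=password)
    try:
        cur = con.cursor()
        cur.execute(DEDUP_QUERY)
        return cur.fetchall()
    finally:
        con.close()
```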
Overview
8 years of professional experience
Work History
Senior Data Analyst
Truist
Atlanta, GA
02.2020 - Current
Actively engaged with leadership on the BB&T side to map, analyze, and develop the customer-to-account data-driven application supporting the Truist (SunTrust + BB&T) LOB and delivery team, keeping scope items moving forward.
Developed complex SQL scripts, temp tables, stored procedures, CTEs, and views with all the inclusion/exclusion logic for the extracts, which added critical value for our business partners:
i. CIF extracts: developed the logic, complex SQL scripts, and stored procedures that send daily feeds to the CIF team, comprising all account-level information.
ii. EDO Account: created and developed the complex scripts and stored procedures to produce the data points that Financial Crimes consumes for its BAU activity.
iii. FDIC extract: developed the complex logic for the failure system to consume.
iv. MoneyGuide Pro: all transaction-level information sent to the MGP team.
v. VOC (Voice of Client).
vi. EDO transaction file.
Experienced in developing Power BI reports and dashboards from multiple data sources using data
blending
Used Power BI Desktop to connect to various data sources and work with different visualizations.
Developed and published reports and dashboards using Power BI and wrote effective DAX formulas and expressions.
Used Power BI to visualize transactions at the CUSIP level for the MoneyGuide Pro extract.
Created and maintained the SQL scripts that generate all KPI data and presented the extracts to LOB partners per their demands.
Supported our LOB partners by creating SQL views over the tables so that the business experience is seamless and the information adds more value.
Helped the business understand data gaps, performed root-cause analysis on them, and removed the gaps without issues, for example by creating lookup tables and identifying constraints and rules so that the gaps do not reappear.
Vendor management (Vestmark): developed all base scripts using SQL objects such as CTEs, stored procedures, and views to send and receive the extracts to and from Vestmark; roughly 64 files/extracts were exchanged. Created the SQL scripts, developed the logic, and manipulated the data, plugging in inclusion/exclusion logic and window functions such as RANK and DENSE_RANK to sort the data (see the sketch at the end of this section), and continuously reviewed the data with Vestmark and LOB partners to ensure the correct data and data points were sent.
Also created a separate database and tables and defined all business rules and constraints so that the files could be loaded daily using different load criteria (truncate and load, incremental load, delta load) as confirmed by LOB/Vestmark. Performed data-cleaning activities such as deduplicating records by adding constraints, adjusting formats (especially for date fields), and standardizing the data to the bank's or ISO standard.
Led the Asset Transfer project from the Truist side with the external vendor (Vestmark); involved in creating the mapping document and the SQL scripts to pull the appropriate segments of data and data points, and collaborated with the vendor on inclusion/exclusion logic, package logic, scheduling, and SLAs for the inbound/outbound extracts.
Developed the SQL script logic to extract outdated information from the Truist system to ensure clean, applicable data; removing the legacy SunTrust employee IDs through this analysis was a major milestone.
Performed detailed analysis of customer-, account-, and transaction-level data and reported to stakeholders and the LOB to support business decisions.
Involved in merger-related activities: analyzing, aggregating, and segregating the data and presenting a holistic view for the business to make critical decisions.
Developed various data-extract logic to be included in files supporting the business, using SQL, Python, and SSIS ETL.
Validated data against the system of record (SOR) and presented findings to our business partners in regular touch-base meetings, creating wireframes and Excel files with the relevant data points.
Supported line-of-business partners in creating the inclusion/exclusion logic and helped them end to end with extracting data from the system.
Created Power BI dashboards and visualizations for the transactional extracts sent to Financial Crimes, giving LOB partners clear day-to-day insight for due diligence.
Involved in developing the FDIC extract, consulting with product owners and leadership to capture the business logic and programming it in ETL tools such as SSIS to produce the final extract.
Owned various products after the merger campaign was completed, helped the business find issues, and reported them as exception reports.
Currently involved in the DCM (data center migration) project, moving all applications from the SunTrust data center to the Truist data center; captured all technical aspects of the DCM process.
Technical Skills: SQL Server, ETL, SSIS, Visual Basic, ALM (testing), Python, Splunk, Power BI
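A minimal sketch of the extract pattern described in the Vestmark bullet above: inclusion/exclusion filters plus DENSE_RANK, executed against SQL Server via pyodbc. The server, database, table (dbo.vendor_extract_stage), and column names are hypothetical placeholders rather than the actual schema.

```python
# Minimal sketch: inclusion/exclusion logic plus DENSE_RANK to keep the
# highest-priority record per account. All object names are placeholders.
import pyodbc

EXTRACT_SQL = """
WITH ranked AS (
    SELECT account_id,
           product_code,
           txn_amount,
           DENSE_RANK() OVER (
               PARTITION BY account_id
               ORDER BY txn_amount DESC
           ) AS rnk
    FROM dbo.vendor_extract_stage
    WHERE product_code NOT IN ('CLOSED', 'CHARGEOFF')   -- exclusion logic
      AND txn_amount > 0                                -- inclusion logic
)
SELECT account_id, product_code, txn_amount
FROM ranked
WHERE rnk = 1;
"""

def run_extract(server, database):
    """Run the extract against SQL Server and return the rows."""
    conn_str = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={server};DATABASE={database};Trusted_Connection=yes;"
    )
    with pyodbc.connect(conn_str) as con:
        cur = con.cursor()
        cur.execute(EXTRACT_SQL)
        return cur.fetchall()
```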
Senior Data Analyst
Capital One
Richmond, VA
01.2019 - 10.2019
Supported FDIC (Federal Deposit Insurance Corporation) government-compliance reporting as a data engineer.
Pulled data from different lines of business, including US Card, Commercial, Small Business Bank, Retail Bank, and 360 Bank.
Created and developed Snowflake and Redshift views for all the analytics performed so that the business could consume ad hoc queries as needed.
Worked with data from different sources and systems and communicated frequently with data owners from different lines of business.
Presented data insights to my managers by creating static wireframes with the applicable graphs, illustrating the current state of the balance holdings with Looker.
Strong experience analyzing large data sets by writing PySpark scripts.
Created PySpark scripts to load data from source files in S3 locations and built data frames to perform transformations and aggregations (see the sketch at the end of this section).
Created PySpark data frames to bring data from Teradata to Amazon S3.
Developed scripts to compute the different requirements using tools such as Snowflake and One Lake (data lake).
Frequently pulled data from One Lake using big data frameworks such as Presto, Hive, and PySpark.
Gained strong exposure to PySpark (Python with Spark).
Frequently used the AWS EMR (Elastic MapReduce) service, with the big data frameworks pre-installed, to pull data from different sources, as data came from multiple platforms: One Lake, S3, and Snowflake.
Frequently used Cerebro views to consume data from One Lake.
Created Looker graphs and visualizations that run against both extracts and live data at the LOB level to display all holdings and balances.
Technical Skills: Snowflake Data Warehouse, big data frameworks (Hive, PySpark, Presto), AWS services (S3, EC2, EMR), UNIX shell scripts, Spark
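A minimal PySpark sketch of the S3 load-and-aggregate pattern noted above. The bucket paths and column names are hypothetical placeholders, and the aggregation shown (balances rolled up by line of business) is an illustrative example rather than the actual FDIC logic.

```python
# Minimal sketch: read source files from S3, build a data frame, and
# aggregate balances per line of business. Paths/columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fdic-balance-rollup").getOrCreate()

# Read partitioned parquet source files from an S3 location.
df = spark.read.parquet("s3://example-bucket/fdic/daily_balances/")

# Aggregate holdings and balances at the LOB level.
rollup = (
    df.filter(F.col("balance").isNotNull())
      .groupBy("line_of_business")
      .agg(
          F.sum("balance").alias("total_balance"),
          F.countDistinct("account_id").alias("account_count"),
      )
)

# Write the result back to S3 for downstream consumption.
rollup.write.mode("overwrite").parquet("s3://example-bucket/fdic/lob_rollup/")
```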
Sr Data Analyst
Capital One
Richmond, VA
06.2017 - 11.2018
Developed code using big data technologies (Spark, AWS, Databricks) to conduct analysis of customer records.
Created AWS services such as EC2 servers in the Amazon cloud to run complex Python code that processed volumes of data too large to execute on a local machine.
Used Python code and scripts to read data from Databricks residing in the Amazon S3 storage system so that analytics could be performed on that data.
Created and executed batch campaigns using Teradata, Amazon Redshift, and IFM (IFM is the Capital One inner-source tool for fulfilling campaigns).
Developed and visualized dashboards reporting with live data on how campaigns performed at the customer level.
Built 30+ dashboards and reports illustrating different dimensions of the campaigns sent to credit card customers across different credit card products.
Data-wrangling experience running the monthly batch campaigns: data was sent from different lines of business (360 Bank, Retail Bank, Mortgage, Investment, etc.), so I imported those data points, standardized the data into a common format, performed initial analysis, and ran the Python scripts/algorithms to produce the cross-workstream output.
Used Informatica ETL tools to trace and capture the end-to-end processing logic of each data point in Teradata tables.
Wrote and executed Teradata and Redshift scripts for various types of campaigns to communicate with customers.
Created and extracted the source files, tables, transformations, and graphs for the Teradata tables to find the real sources of data using ETL tools such as Ab Initio.
Developed Python scripts and APIs (application programming interfaces) to extract data from databases such as Teradata and Redshift and from flat files, transformed it, and modeled it into human-readable form using visualization tools such as Tableau.
Developed software in Python to clean and investigate large, messy data sets of numerical and textual
data
Designed rich data visualizations to communicate complex ideas to customers or company leaders using
Tableau Software
Developed the Capital One inner-source platform "Quantum", built using Scala and AWS services (EC2, S3), for modernization and moving data to the cloud, as Teradata was being decommissioned in November 2018 and Capital One wanted campaigns executed quickly and products delivered as required by the customer.
Developed code using Python and added JavaScript and JSP tags to run the high-risk transactions portal (internal to Capital One), allowing transactions to be added and saved at the manager and admin levels.
Developed a shell script that runs the Python code, saves the .csv file on EC2 and an S3 bucket, and loads the data into the Amazon Redshift database (see the sketch at the end of this section).
Developed MultiLoad scripts to load large volumes of data (2.5 to 3 million rows) into the Teradata database.
Developed a UNIX shell script to download code from GitHub and run it on EC2, and added logic to the shell script that automatically picks the code relevant to the process.
Developed code using AWS (Amazon Web Services) to move files from EC2 to an S3 bucket.
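A minimal sketch of the EC2-to-S3-to-Redshift flow described in the shell-script bullet above, written here in Python for consistency: boto3 uploads the CSV to S3, then a Redshift COPY loads it. The bucket, table (campaign.stage_transactions), IAM role, and connection string are hypothetical placeholders.

```python
# Minimal sketch: upload a CSV produced on the instance to S3, then COPY it
# into Redshift. Bucket, table, IAM role, and DSN are placeholders.
import boto3
import psycopg2

def load_csv_to_redshift(local_csv, bucket, key, dsn, iam_role):
    # Push the CSV from the EC2 instance to S3.
    boto3.client("s3").upload_file(local_csv, bucket, key)

    copy_sql = f"""
        COPY campaign.stage_transactions
        FROM 's3://{bucket}/{key}'
        IAM_ROLE '{iam_role}'
        CSV IGNOREHEADER 1;
    """
    # COPY the staged file into Redshift.
    with psycopg2.connect(dsn) as con:
        with con.cursor() as cur:
            cur.execute(copy_sql)
```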
Education
Master of Science - Information Systems (Data Management)
VIU
01.2016
Bachelor - Computer Engineering
Purbanchal University Kathmandu
01.2010
Skills
Databases and tools: Oracle, Teradata, MS SQL Server 2008, AWS, DB2, MS Access, ADI, TOAD, Spark SQL, Oracle Reports, MS Excel Reports, MS Access Reports, Business Objects, Brio, Mainframes, JCL, COBOL, PL/SQL, Shell Scripting, VBScript, VBA, Oracle 8i, SAS 8e
Operating Systems: UNIX, MS Windows, Windows XP
Technical Skills:
Teradata utilities (SQL Assistant, BTEQ, FastLoad, FastExport), Tableau, Python, Snowflake Data Warehouse, big data frameworks (Hive, PySpark, Presto), AWS services (S3, EC2), UNIX shell scripts, Spark
Additional Information
7+ years of experience in data analytics, reporting analysis, and business analytics.
In-depth experience with Snowflake, Teradata, Redshift, MS SQL, AWS, UNIX, SAS, Spark, Tableau, Python, Oracle, and Power BI.
Worked closely with stakeholders, LOBs, business partners, and vendors on requirement gathering, problem statements, enhancements, bug fixes, and defects, and addressed them through the delivery team.
Strong exposure to different big data frameworks such as PySpark, Hive, and Presto.
Sound knowledge of pulling data using EMR (Elastic MapReduce) and strong exposure to consuming data through Cerebro views (see the sketch at the end of this section).
Experience in Data Modeling, Data Analysis, Data Conversion Validation, Report Creation, Data
Conversion, Data Check, Data Migration, Data Governance Training and Support.
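A minimal sketch of pulling data with Presto on an EMR cluster, assuming the PyHive client; the host, catalog, schema, and table names are hypothetical placeholders, not the actual environment.

```python
# Minimal sketch, assuming PyHive's Presto client on an EMR cluster.
# Host, catalog, schema, and table names are hypothetical placeholders.
from pyhive import presto

def pull_daily_balances(coordinator_host):
    """Query a hypothetical balances table through Presto."""
    conn = presto.connect(
        host=coordinator_host,  # Presto coordinator (e.g., EMR master node)
        port=8080,
        catalog="hive",
        schema="default",
    )
    cur = conn.cursor()
    cur.execute(
        "SELECT lob, SUM(balance) AS total_balance "
        "FROM daily_balances GROUP BY lob"
    )
    return cur.fetchall()
```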
Timeline
Senior Data Analyst
Truist
02.2020 - Current
Senior Data Analyst
Capital One
01.2019 - 10.2019
Sr Data Analyst
Capital One
06.2017 - 11.2018
Master of Science - Information Systems (Data Management)