Overall 9+ years of industry experience, including 4+ years as a developer using big data technologies such as Databricks/Spark and the Hadoop ecosystem, with development and implementation of data warehousing solutions.
Hands-on experience with Azure services: Azure Databricks (ADB), Azure Data Lake Store (ADLS), Azure SQL DW/DB, Azure Data Factory (ADF), etc.
Excellent knowledge of ADF building components: integration runtimes, linked services, datasets, pipelines, activities, and triggers.
Big data architect and developer experience on data warehousing projects in the Azure cloud, across environments and industry verticals such as sales compensation and automotive.
Experience in performance optimizations resulting in cost-effective usage of Azure cloud services.
Successful in providing technical assistance for client applications and various other products.
Proficient understanding of service delivery, client integration discipline, and coordination and communication between business and technology teams.
Overview
12 years of professional experience
1 certification
Work History
Cloud Data – Senior Developer
Federal Deposit Insurance Corporation (FDIC)
11.2020 - Current
Develop a data processing engine using Azure Databricks
Gather functional requirements from the business team and create functional designs
Apply big data design patterns to reduce complexity and create optimized technical design and architecture presentations
Secure data as it flows across platforms by enabling column-level encryption and decryption of sensitive data
Develop the Azure Databricks processing engine through a well-maintained, configurable ABCR framework
Adhere to cybersecurity policies; encrypt and decrypt data using built-in UDFs on the Databricks cluster (sketched after this list)
Maintain the code repository and release deployments through Azure DevOps (VSTS); deploy code to multiple environments via CI/CD; work on code defects during SIT and UAT testing and support data loads for testing; implemented reusable components to reduce manual intervention
Implemented end-to-end logging for the ABCR framework
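A minimal sketch of the column-level protection described above, assuming Spark's built-in aes_encrypt/aes_decrypt SQL functions (Spark 3.3+/Databricks Runtime) and an encryption key held in a Databricks secret scope; the scope, key, table, and column names are illustrative, not taken from this engagement.

```python
from pyspark.sql import functions as F

# Key fetched from a Databricks secret scope; AES requires a 16-, 24-, or
# 32-byte key. Scope and key names here are illustrative assumptions.
key = dbutils.secrets.get(scope="abcr-secrets", key="column-encryption-key")

df = spark.table("raw.customers")  # illustrative source table

# Encrypt the sensitive column before it leaves the platform.
encrypted = df.withColumn("ssn", F.expr(f"base64(aes_encrypt(ssn, '{key}'))"))

# Authorized consumers reverse the transformation on read.
decrypted = encrypted.withColumn(
    "ssn", F.expr(f"cast(aes_decrypt(unbase64(ssn), '{key}') as string)")
)
```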
TOOLS:
Azure Data Factory, Azure Databricks, ADLS, Azure DevOps, Blob Storage, Python, Azure Key Vault
Senior Cloud Developer
First National Bank
01.2019 - 10.2020
Created linked services for multiple source systems (e.g., Azure SQL Server, ADLS, Blob Storage)
Created pipelines to extract data from on-premises source systems to Azure cloud data lake storage; worked extensively on Copy activities and implemented copy behaviors such as flatten hierarchy, preserve hierarchy, and merge hierarchy
Implemented error handling through the Copy activity
Hands-on with Azure Data Factory activities such as Lookup, Stored Procedure, If Condition, ForEach, Set Variable, Append Variable, Get Metadata, Filter, and Wait
Configured Logic Apps to send email notifications to end users and key stakeholders via the Web activity; created dynamic pipelines to extract from multiple sources to multiple targets; used Azure Key Vault extensively to configure connections in linked services
Configured and implemented Azure Data Factory triggers and schedules; monitored scheduled pipelines and configured alerts for notification of failed pipeline runs
Worked extensively in Azure Databricks to implement SCD Type 1 and Type 2 approaches (see the Delta merge sketch after this list)
Implemented incremental (delta) extraction logic for various sources with the help of a control table; implemented data frameworks to handle deadlocks, recovery, and pipeline logging
Developed Spark (Python) notebooks to transform and partition data and organize files in ADLS
Worked in Azure Databricks to run Spark/Python notebooks through ADF pipelines
Used Databricks widgets to pass parameters at run time from ADF to Databricks, as sketched below
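A minimal sketch of the ADF-to-Databricks parameter hand-off via widgets; the widget names and paths are illustrative assumptions, not details from this project.

```python
# Values supplied as "Base parameters" on the ADF Databricks Notebook activity
# surface inside the notebook as widgets.
dbutils.widgets.text("source_path", "")  # defaults allow interactive test runs
dbutils.widgets.text("load_date", "")

source_path = dbutils.widgets.get("source_path")
load_date = dbutils.widgets.get("load_date")

df = spark.read.parquet(source_path)
(df.filter(df.ingest_date == load_date)
   .write.mode("overwrite")
   .parquet(f"/mnt/curated/daily/{load_date}"))
```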
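And a minimal sketch of an SCD Type 2 upsert using Delta Lake's MERGE, assuming a customer dimension with is_current/start_date/end_date tracking columns; the database, table, and column names are illustrative.

```python
from delta.tables import DeltaTable
from pyspark.sql import functions as F

dim = DeltaTable.forName(spark, "dw.dim_customer")  # illustrative target
updates = spark.table("staging.customer_updates")    # illustrative source

# Keep only brand-new customers and rows whose tracked attribute changed.
changed = (updates.alias("s")
    .join(dim.toDF().filter("is_current").alias("t"),
          F.col("s.customer_id") == F.col("t.customer_id"), "left")
    .where(F.col("t.customer_id").isNull()
           | (F.col("s.address") != F.col("t.address")))
    .select("s.*"))

# Close out the superseded current rows...
(dim.alias("t")
 .merge(changed.alias("s"),
        "t.customer_id = s.customer_id AND t.is_current = true")
 .whenMatchedUpdate(set={"is_current": "false", "end_date": "current_date()"})
 .execute())

# ...then append the new versions as open (current) rows.
(changed
 .withColumn("is_current", F.lit(True))
 .withColumn("start_date", F.current_date())
 .withColumn("end_date", F.lit(None).cast("date"))
 .write.format("delta").mode("append").saveAsTable("dw.dim_customer"))
```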
TOOLS
Azure Data Factory, Azure Databricks, Azure Key Vault
Data Engineer
Amerisave Mortgage
06.2016 - 12.2018
Developed a search platform on the Azure cloud that enables users to perform structured search on unstructured drilling reports
Implemented data ingestion pipelines from multiple data sources using Azure Data Factory
Created an ADF pipeline to read data from an on-premises SQL Server into Azure SQL DB
Hands-on delivery of data capture, curation, and consumption pipelines on Azure
Ingested data into Azure Blob containers and Data Lake via Data Factory from multiple sources
Performed all pre- and post-migration checklist tasks
Leveraged dbutils to troubleshoot and test Databricks notebooks
Supported migrated applications for a defined period until production sign-off
Provided development support for system testing and user acceptance testing; created and maintained documentation for pipelines and other processes
Automated data pipelines in ADF by using parameters (see the sketch after this list)
Performed file conversion and ingestion using the Copy activity
Scripted edit/save functionality per client requirements
Audited escalations and defects, identifying and rectifying process gaps
Ran batches to help improve the efficiency and quality of processes
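A minimal sketch of launching a parameterized ADF pipeline run from Python with the azure-mgmt-datafactory SDK; the subscription, resource group, factory, pipeline, and parameter names are illustrative assumptions, not details from this role.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Pipeline parameters passed here override the defaults defined on the pipeline.
run = client.pipelines.create_run(
    resource_group_name="rg-data",          # illustrative
    factory_name="adf-ingestion",           # illustrative
    pipeline_name="pl_copy_parameterized",  # illustrative
    parameters={"run_date": "2018-06-01", "source_container": "raw"},
)
print(run.run_id)  # handle for monitoring the triggered run
```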
TOOLS:
Azure Data Factory, Azure Databricks, ADLS, Azure DevOps, Blob Storage, Python, Azure Key Vault
Data Engineer
Fresenius Kabi
12.2013 - 05.2016
Developed big data solutions that enabled business and technology teams to make data-driven decisions on the best ways to acquire customers and provide them business solutions
Involved in installing, configuring, and managing Hadoop ecosystem components such as Hive, Sqoop, and Pig
Responsible for loading unstructured and semi-structured data from different sources into the Hadoop cluster using Flume, and for managing the ingested data
Developed MapReduce programs to cleanse and parse data in HDFS obtained from various data sources and to perform map-side joins using the distributed cache (a streaming-style sketch follows this list)
Used the Hive data warehouse tool to analyze data in HDFS and developed Hive queries
Created internal and external tables with properly defined static and dynamic partitions for efficiency
Used Avro SerDes packaged with Hive for serialization and deserialization to parse the contents of streamed log data
Implemented custom Hive UDFs to achieve comprehensive data analysis
Used Pig to develop ad-hoc queries
Exported business-required information to an RDBMS using Sqoop to make data available for the BI team to generate reports
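As an illustration only, here is a Hadoop Streaming-style cleanse-and-parse mapper in Python equivalent to the MapReduce work above; the pipe-delimited input layout is an assumption, not a detail from this role.

```python
#!/usr/bin/env python
# Hadoop Streaming mapper: cleanse raw log lines and emit tab-separated records.
# Run with: hadoop jar hadoop-streaming.jar -mapper mapper.py -input ... -output ...
import sys

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue                    # drop blank lines
    parts = line.split("|")         # assumed layout: ts|user|event|payload
    if len(parts) != 4:
        continue                    # drop malformed records
    ts, user_id, event, payload = (p.strip() for p in parts)
    print("\t".join([ts, user_id, event.lower(), payload]))
```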
Data Engineer
Zurich
06.2012 - 11.2013
Created a chatbot to interact with structured data in relational databases
Dynamically created SQL queries based on intents identified from natural language (see the sketch after this list)
Integrated the chatbot with Telegram
Enabled users to perform contextual search on a corpus of information available in the form of documents
Also used knowledge gained during the project to perform a Systematic Literature Review (SLR) and a Traditional Literature Review (TLR).
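A minimal sketch of the intent-to-SQL pattern, using parameterized queries so user text is bound rather than concatenated; the intent names, schema, and sqlite3 backing store are illustrative assumptions.

```python
import sqlite3

# Each recognized intent maps to a SQL template; extracted slots are bound
# as parameters (the ? placeholders), never interpolated into the string.
INTENT_TO_SQL = {
    "count_policies": "SELECT COUNT(*) FROM policies WHERE region = ?",
    "list_claims": "SELECT claim_id, status FROM claims WHERE customer_id = ?",
}

def answer(intent, slot_value, conn):
    sql = INTENT_TO_SQL.get(intent)
    if sql is None:
        return None  # unrecognized intent: let the bot ask a follow-up
    return conn.execute(sql, (slot_value,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policies (id INTEGER, region TEXT)")
conn.executemany("INSERT INTO policies VALUES (?, ?)", [(1, "EMEA"), (2, "EMEA")])
print(answer("count_policies", "EMEA", conn))  # -> [(2,)]
```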
Education
M.S - Mechanical Engineering
Cleveland State University
06.2009
B.S - Mechanical Engineering
Anna University
Chennai, TN, India
08.2007
Skills
Quality Assurance Testing
Azure Data Factory (ADF)
Spark, Python, SQL
Databricks, SQL Server, Data Lake Storage Gen2, Logic Apps, Power BI, Azure Synapse