Strategic and results-oriented Data & Solution Architect with 18+ years of expertise in designing and delivering enterprise-scale data engineering, warehousing, and ETL solutions. Deep proficiency in Informatica PowerCenter, Oracle, and complex data transformation across insurance and healthcare sectors.
Over the past 3+ years, led the transition to modern cloud data architectures on Azure and AWS. Hands-on experience architecting and implementing robust, scalable data platforms using Azure Data Factory, Azure Synapse Analytics, Databricks, Delta Lake, Python, and PySpark. Defined cloud migration strategies, enabled CI/CD automation through Azure DevOps, and aligned data strategy with business outcomes.
Overview
9 years of professional experience
1 Certification
Work History
Lead Solution Architect
Independent Health Association Inc.
10/2016 - Current
Cloud Data Architect / Engineer
Contributed to the architecture and development of a Python-based automated data validation framework to streamline data quality checks across cloud pipelines
Led multiple proof-of-concept (PoC) initiatives utilizing Azure Data Factory, Synapse Analytics, and Databricks to define the migration strategy from on-premises data warehouses to the Azure cloud ecosystem
Designed and implemented Databricks and Synapse notebooks to assess and drive the migration approach for legacy Informatica mappings and on-prem enterprise data warehouse (EDW) workloads
Collaborated in cross-functional architectural discussions to define CI/CD practices and establish version control standards using Azure DevOps for data pipeline development and deployment
ETL Solution Architect
Designed and implemented end-to-end ETL solutions using Informatica PowerCenter, ensuring data integrity, performance, and scalability
Created detailed source-to-target mapping (STM) documents to support development and maintain traceability of business requirements
Authored comprehensive application process flow documentation
Provided technical leadership and mentorship to ETL developers, working closely with solution architects to deliver high-quality data integration artifacts
Participated in requirement elicitation and collaborated with BAs and SMEs to review and finalize mapping specifications
Conducted in-depth data scenario analysis and facilitated brainstorming sessions with business analysts and key stakeholders to validate requirements and edge cases
Led code review sessions and supported QA and UAT testing by providing technical expertise to ensure code quality
Delivered walkthrough of data flows and application processes to business users and prod support teams, and prepared Standard Operating Procedures (SOPs)
Accomplishments:
Extract Process Automation: Designed an ETL process automation framework to generate data extracts for the Self-Funded line of business. The framework standardized the requirement collection process and reduced design, development, and testing effort by 80%.
QA Test Automation (Python): Designed a Python framework to test data extracts across all lines of business, improving test quality, reducing defects reaching production, and saving at least 50% of unit and QA testing effort.
Education
Bachelor of Science - Electronics Engineering
Pune University
India
08-1998
Skills
Cloud Platforms: Azure Data Factory, Azure Synapse, Databricks, Data Lake Gen2, Azure DevOps
Languages: Python, PySpark, SQL, PL/SQL, T-SQL, Shell Scripting
Databases: Oracle, Azure SQL DB, SQL Server
Data Modeling: Star/Snowflake Schema, Dimensional Modeling, Slowly Changing Dimensions (SCD)
Version Control / DevOps: Git, Azure DevOps, CI/CD pipelines
ETL Tools: Informatica PowerCenter, Informatica Cloud, Tidal job scheduler
Other Tools: Power BI (basics), Databricks (basics), JSON, Parquet, CSV
Research Assistant at Department of Computer Science and Engineering, State University of New York at Buffalo