Sramika Sriramaneni

Irving, TX

Summary

  • Over 5 years of IT experience with a strong emphasis on the design, development, implementation, testing, and maintenance of software applications in Hadoop, HDFS, MapReduce, the Hadoop Ecosystem, ETL, and RDBMS.
  • Experience in data analysis, data mining, acquisition, validation, visualization, and discovering meaningful business insights from large datasets of structured and unstructured data.
  • Proficient in developing Spark applications using Spark RDD, Spark-SQL, and DataFrame APIs.
  • Experience building distributed high-performance systems using Spark and Scala.
  • Skilled in optimizing MapReduce algorithms and handling large datasets using Spark's in-memory capabilities.
  • Expertise in writing MapReduce jobs in Python for processing large sets of structured, semi-structured, and unstructured data stored in HDFS.
  • Proficient in developing and implementing ETL processes using tools such as Informatica, Pentaho, and Syncsort.
  • Experience with data modeling, including dimensional and relational concepts.
  • Skilled in developing Pig scripts and Hive Query Language (HQL).
  • Proficient with Amazon Web Services (AWS) and Microsoft Azure, including services such as EC2, S3, RDS, VPC, IAM, Elastic Load Balancing, Auto Scaling, CloudFront, CloudWatch, SNS, SES, SQS, Azure Data Factory, Azure Databricks, and Azure DevOps.
  • Strong experience with SQL and NoSQL databases, including MySQL, MS SQL Server, DB2, Oracle, MongoDB, Cassandra, and HBase.
  • Experienced in agile methodologies, including Extreme Programming (XP), Scrum, and Test-Driven Development (TDD).
  • Excellent communication and presentation skills.

Overview

4 years of professional experience

Work History

Product Specialist

Cognizant Technology Solutions, CTS
2022.08 - 2022.12
  • Gathered requirements from business users to accommodate changing needs.
  • Developed Spark scripts using Python in the PySpark shell.
  • Migrated MapReduce programs to Spark transformations using Spark and Scala.
  • Implemented Spark scripts using Scala and Spark SQL for faster data processing.
  • Developed Python and PySpark scripts for data transformation and loading.
  • Created MapReduce jobs using Hive and Pig.
  • Profiled structured, unstructured, and semi-structured data to identify patterns and implement data quality metrics.
  • Used Informatica PowerCenter for ETL processes.
  • Employed Spark SQL and Python for data processing and real-time Spark Streaming with Kafka.
  • Enhanced ETL architecture for data integration and performance improvement.
  • Developed Tableau visualizations using Cross tabs, Heat maps, and more.
  • Implemented Apache Airflow for ETL pipeline automation.
  • Migrated on-premises applications to AWS and used AWS services like EC2 and S3.
  • Analyzed SQL scripts and designed solutions using PySpark.

Environment: Spark, Scala, AWS, ETL, Hadoop, Python, Snowflake, HDFS, Hive, Tableau, MapReduce, PySpark, Pig, Docker, JSON, SQL, MongoDB, Microsoft Azure, Azure Data Factory, Agile, Windows.

Software Engineer

Bajaj Finserv
2021.05 - 2022.06
  • Understood client requirements and application flow.
  • Developed Spark streaming applications to pull data from the cloud to Hive.
  • Optimized data processing with Spark.
  • Developed Scala workflows for data transformation.
  • Created Hive tables and implemented partitions and bucketing.
  • Developed story-telling dashboards in Tableau.
  • Utilized AWS services for big data analytics and data warehousing.
  • Used Azure Data Factory for ETL processes.
  • Developed Pig Latin scripts for data manipulation.
  • Wrote MapReduce programs for data extraction and transformation.
  • Worked with NoSQL databases like MongoDB and HBase.
  • Followed agile methodology.

Environment: Spark, Scala, AWS, Azure, ETL, Kafka, Tableau, Hadoop, Python, Snowflake, HDFS, Hive, MapReduce, PySpark, Pig, Docker, Sqoop, Teradata, JSON, MongoDB, SQL, Agile, Windows.

Software Engineer

Foray Software Private Ltd
2018.07 - 2021.05
  • Collaborated with Business Analysts and SMEs to gather business requirements.
  • Developed Spark applications using Scala for data enrichment.
  • Developed Python and Scala code for data processing and analytics.
  • Implemented Spark transformations and actions.
  • Developed Pig UDFs for data manipulation.
  • Logged defects in Jira and Azure DevOps tools.
  • Developed Tableau reports based on user requirements.
  • Worked with Azure services like HDInsight, BLOB, Data Factory, and Logic Apps.
  • Migrated on-premises Oracle ETL processes to Azure Synapse Analytics.
  • Developed ETL processes using Azure Databricks.
  • Followed agile methodology.

Environment: Spark, Scala, Hadoop, Azure, Python, PySpark, AWS, MapReduce, Pig, ETL, HDFS, Hive, HBase, SQL, Agile, Windows.

Education

Master of Science

Clark University
Worcester, MA
05.2024

Skills

  • Databases: Snowflake, AWS RDS, Teradata, Oracle, MySQL, Microsoft SQL, PostgreSQL.
  • NoSQL Databases: MongoDB, Hadoop HBase, Apache Cassandra.
  • Programming Languages: Python, SQL, Scala, Java, MATLAB.
  • Cloud Technologies: AWS, Azure, Docker.
  • Data Formats: CSV, JSON, Parquet, XML.
  • Querying Languages: SQL (MySQL, PostgreSQL, Microsoft SQL Server dialects), NoSQL query interfaces.
  • Integration Tools: Jenkins.
  • Scalable Data Tools: Hadoop, Hive, Apache Spark, Pig, MapReduce, Sqoop.
  • Operating Systems: Red Hat Linux, Unix, Windows, macOS.
  • Reporting & Visualization: Tableau, Power BI.
