Rajender D

Draper, UT

Summary

Technology professional with 10+ years of experience in software development, system administration, and technical support. Skilled in problem-solving and performance optimization, with a record of managing projects and collaborating effectively with teams. Committed to continuous learning and staying current with industry trends to contribute to organizational success. Expertise in user support, debugging, programming, incident management, quality assurance, and application support.

Overview

13 years of professional experience

Work History

Hadoop Support Engineer

Goldman Sachs
Draper, UT
01.2018 - Current
  • Developed code/scripts for UI/APIs supporting Goldman Sachs' Data Lake infrastructure.
  • Deployed fixes for Data Processing, API, and Data Warehouse issues.
  • Automated data deletion from HDFS and IQ Sybase warehouse.
  • Migrated data across different IQ Sybase warehouses.
  • Created ETL framework using Python and Hive for data extraction and vendor negotiations.
  • Implemented Sybase IQ Kerberization for client authorization.
  • Utilized the Spark ecosystem for data analysis and developed POCs with Spark SQL.
  • Made config changes to optimize data ingestion and refining.
  • Analyzed and resolved Lake Infrastructure issues for stakeholders.
  • Ensured uptime and responsiveness of Hadoop Clusters and other Lake components.
  • Troubleshot HDFS and warehouse-related issues within SLA.
  • Monitored lake infrastructure using tools like Splunk and Kibana.
  • Assisted store and warehouse owners in onboarding datasets to Lake.
  • Analyzed capacity-related issues and worked on infrastructure performance tuning.
  • Conducted daily sanity testing and disaster recovery activities.
  • Communicated with clients and technical leads to prevent escalations.

Hadoop Engineer

T-Mobile
Bellevue, WA
08.2015 - 12.2017
  • Designed and implemented Hadoop-based solutions to efficiently process and analyze large-scale datasets
  • Developed custom MapReduce programs to perform complex data transformations and aggregations
  • Optimized Hadoop cluster performance by tuning configurations and troubleshooting issues
  • Worked closely with data scientists and business analysts to understand requirements and deliver scalable solutions
  • Implemented data security and governance policies to ensure compliance with industry regulations
  • Designed and implemented distributed data processing pipelines using Spark, Hive, Sqoop, Python, and other tools prevalent in the Hadoop ecosystem.
  • Defined and applied appropriate data acquisition and consumption strategies for given technical scenarios; built automated unit tests and participated in integration testing efforts.
  • Developed, maintained, and supported projects through all phases, including requirements elicitation, application architecture definition, and design.
  • Optimized the performance of application-wide modules using Pig.

IT Executive

IFFCO TOKIO GIC
Gurgaon, HR, IND
08.2011 - 12.2013
  • Used the Hibernate ORM tool as the persistence layer, using database and configuration data to provide persistence services (and persistent objects) to the application.
  • Implemented Oracle Advanced Queuing using JMS and message-driven beans. Developed the DAO layer using Spring MVC and Hibernate configuration XML files to manage CRUD operations (insert, update, and delete).
  • Implemented dependency injection with the Spring framework; developed and implemented the DAO and service classes.
  • Developed reusable services using BPEL to transfer data.
  • Participated in analysis, interface design, and JSP development.
  • Configured Log4j to enable and disable logging in the application.
  • Developed rich user interfaces using HTML, JSP, AJAX, JSTL, J2EE, JavaScript, jQuery, and CSS.
  • Implemented PL/SQL queries and procedures to perform database operations.
  • Prepared UNIX shell scripts and used the UNIX environment to deploy EAR files and read logs.
  • Followed agile development methodology.
  • Involved in code deployment activities for different environments.
  • Provided business intelligence solutions, including data extraction, transformation, and loading (ETL), using SQL Server Integration Services (SSIS).

Education

Master of Science - Computer Science

Northwestern Polytechnic University
Fremont, CA
05.2015

Tech Skills & Tools

  • Shell Scripting, Core Java, J2EE, Pig, MapReduce, Python, Slang
  • SQL (HiveQL), Sybase IQ, MemSQL, Teradata
  • Procmon P3, Splunk, Zipkin, Pulse, ME portal, Aqua DS, Prometheus, Alloy streaming, Snowflake, Datastream, Datafactory, SFX, Kanban, Autosys, Jira, IMS, Confluence, KeepAlive

Timeline

Hadoop Support Engineer

Goldman Sachs
01.2018 - Current

Hadoop Engineer

T-Mobile
08.2015 - 12.2017

IT Executive

IFFCO TOKIO GIC
08.2011 - 12.2013

Master of Science - Computer Science

Northwestern Polytechnic University