Team Size: 10

Worked as a Mainframe/Hadoop Engineer on the Transportation project for LMS (Locomotive Management System), the LOCOGPS and GIS systems, which deal with different types of locomotives, GPS pings, and geographical locations (latitude/longitude). The client represents North America's freight railroads and Amtrak and strives to help make the rail industry increasingly safe, efficient, and productive. I was involved in enhancing the existing LMS application at the client's request and in designing a distributed architecture for new data. LMS maintains and provides a timely, accurate physical description of the national fleet, covering locomotives, locomotive features, and the GPS pings generated by those locomotives.

Job Responsibilities:
- Collaborated with business partners and team members to develop specifications, implement business solutions, and resolve problems.
- Ran Hive queries in Hadoop and corresponding DB2 queries on the mainframe to verify that all data was loaded into Hadoop (a count-reconciliation sketch appears after this list).
- Imported and exported data using Sqoop between HDFS and relational database systems/mainframe (a sample Sqoop invocation is sketched after this list).
- Worked effectively as a team with peers, end users, and management to ensure progress of deliverables at all stages of the project life cycle.
- Managed task allocation and team activities.
- Effectively handled the role of onsite coordinator and ensured that a high-quality product was always delivered to the customer.
- Prepared the error-handling document to standardize the error-handling process.
- Managed the team in resolving incident and problem tickets during the conversion.
- Imported all locomotive-specific data into Hadoop using Sqoop.
- Analyzed the existing core mainframe application and identified the rules defined on the DB2 database.
- Planned and coordinated testing across multiple teams, tracked and reported status, created testing cycle plan, scenarios etc.
- Participated in knowledge sharing sessions with the end customers about the new version.
- Troubleshot data issues, validated result sets, and recommended and implemented process improvements.
- Responsible for analyzing the data pipeline that loads data from sources such as IBM mainframes and Oracle using Sqoop along with the Kafka and Spark frameworks, as per requirements (a streaming-ingest sketch follows this list).
- Supported the team in all phases of the project with business and technical expertise.
- Effectively supported complex mainframe and Hadoop applications.
- Prepared reports and Excel documentation for the deployment of mainframe-to-Hadoop code.
- Participated in defect-analysis calls for the UAT environment with users to understand the data and make any modifications suggested by the users.
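
The following is a minimal sketch of the kind of Sqoop import used to land DB2 data from the mainframe into HDFS, driven from Python. The JDBC URL, credentials, table name, and target path are hypothetical placeholders, not values from the project.

```python
# Sketch: import one DB2 table into HDFS via Sqoop (placeholder connection details).
import subprocess

def sqoop_import_db2_table(table: str, target_dir: str) -> None:
    """Run a Sqoop import of a single DB2 table into an HDFS directory."""
    cmd = [
        "sqoop", "import",
        "--connect", "jdbc:db2://db2host.example.com:50000/LMSDB",  # placeholder JDBC URL
        "--username", "batch_user",                                  # placeholder credentials
        "--password-file", "/user/batch_user/.db2.password",         # password kept on HDFS
        "--table", table,
        "--target-dir", target_dir,
        "--num-mappers", "4",              # parallel map tasks for the import
        "--fields-terminated-by", "\t",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # e.g. land locomotive master data in a raw HDFS zone
    sqoop_import_db2_table("LMS.LOCOMOTIVE", "/data/raw/lms/locomotive")
```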
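The Kafka/Spark portion of the pipeline can be illustrated with a minimal PySpark Structured Streaming sketch that reads GPS-ping messages from a Kafka topic and lands them on HDFS as Parquet. Broker addresses, the topic name, and the paths are assumptions for illustration only.

```python
# Sketch: stream GPS pings from Kafka into an HDFS landing zone (placeholder names).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("lms-gps-ping-ingest").getOrCreate()

pings = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "kafka1:9092,kafka2:9092")  # placeholder brokers
    .option("subscribe", "lms.gps.pings")                          # placeholder topic
    .option("startingOffsets", "latest")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), col("timestamp"))
)

query = (
    pings.writeStream
    .format("parquet")
    .option("path", "/data/raw/lms/gps_pings")                  # placeholder HDFS path
    .option("checkpointLocation", "/checkpoints/lms/gps_pings")
    .start()
)
query.awaitTermination()
```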
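The load verification between DB2 on the mainframe and Hive in Hadoop amounts to reconciling the source and target; the sketch below compares row counts. The library choices (ibm_db, PyHive) and all connection details are assumptions for illustration, since the original checks were run as ad-hoc queries.

```python
# Sketch: compare DB2 (source) and Hive (target) row counts after a load.
import ibm_db
from pyhive import hive

def db2_count(table: str) -> int:
    conn = ibm_db.connect(
        "DATABASE=LMSDB;HOSTNAME=db2host.example.com;PORT=50000;"
        "PROTOCOL=TCPIP;UID=batch_user;PWD=secret", "", "")      # placeholder DSN
    stmt = ibm_db.exec_immediate(conn, f"SELECT COUNT(*) FROM {table}")
    count = int(ibm_db.fetch_tuple(stmt)[0])
    ibm_db.close(conn)
    return count

def hive_count(table: str) -> int:
    cursor = hive.connect(host="hiveserver2.example.com", port=10000).cursor()
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    return int(cursor.fetchone()[0])

if __name__ == "__main__":
    src, tgt = db2_count("LMS.LOCOMOTIVE"), hive_count("lms.locomotive")
    print(f"DB2={src} Hive={tgt} -> {'OK' if src == tgt else 'MISMATCH'}")
```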
Environment: Cloudera Hadoop, HDFS, Hive, Sqoop, Flume, Apache Kafka, UNIX, Cloudera Manager, HBase, SQL, IBM Mainframe with z/OS, COBOL, JCL, IMS, DB2, CICS, FOCUS, Stored Procedures, ChangeMan, IBM DB2 Data Studio, GTB, Fault Analyzer, JIRA.