Mohammed
mohammedayazm1@gmail.com
917-960-9533
New York City, NY 10002
Hadoop Data Engineer
9 years experience C2C
Summary

Hadoop Developer with 7+ years of IT experience, including 5 years in Big Data and analytics covering storage, querying, processing, and analysis for developing end-to-end data pipelines. Expertise in designing scalable Big Data solutions and data warehouse models over large-scale distributed data, and in performing a wide range of analytics.
• Expertise in all components of the Hadoop/Spark ecosystems - Spark, Hive, Pig, Flume, Sqoop, HBase, Kafka, Oozie, Impala, StreamSets, Apache NiFi, Hue, AWS.
• 3+ years of experience programming in Scala and Python.
• Extensive knowledge of data serialization formats such as Avro, SequenceFile, Parquet, JSON, and ORC.
• Strong knowledge of Spark architecture and real-time streaming using Spark.
• Hands-on experience with Spark Core, Spark SQL, and the DataFrame/Dataset/RDD APIs (see the sketch after this list).
• Good knowledge of Amazon Web Services (AWS) cloud services such as EC2, S3, EMR, and VPC.
• Experienced in data ingestion, processing, aggregation, and visualization in Spark environments.
• Hands-on experience working with large volumes of structured and unstructured data.
• Expert in migrating code components from SVN repositories to Bitbucket.
• Experienced in building Jenkins pipelines for continuous code integration from GitHub onto Linux machines. Experience in Object-Oriented Analysis and Design (OOAD) and development.
• Good understanding of end-to-end web applications and design patterns.
• Hands-on experience in application development using Java, RDBMS, and Linux shell scripting.
• Experience implementing projects with agile methodology; well versed in software development methodologies such as Agile and Waterfall.
• Experienced in handling databases: Netezza, Oracle, and Teradata.
• Strong team player with good communication, analytical, presentation, and interpersonal skills.
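The bullet above on Spark Core, Spark SQL, and the DataFrame/Dataset/RDD APIs can be made concrete with a short example. This is a minimal sketch only, not code from any of the projects below; the path, schema, and column names are assumptions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventSummary {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-summary")
      .enableHiveSupport()            // assumes a Hive metastore is available
      .getOrCreate()

    // Load a Parquet dataset; the (user_id, event_type, ts) schema is hypothetical.
    val events = spark.read.parquet("/data/raw/events")

    // The same aggregation expressed through the DataFrame API and through Spark SQL.
    val byType = events.groupBy("event_type").agg(count("*").as("cnt"))

    events.createOrReplaceTempView("events")
    val bySql = spark.sql("SELECT event_type, COUNT(*) AS cnt FROM events GROUP BY event_type")

    byType.show()
    bySql.show()
    spark.stop()
  }
}
```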

Experience
Hadoop Data Engineer
Information Technology
Aug 2018 - present
Moorestown, NJ
Responsibilities:
• Designed data models and data flow diagrams for various insights.
• Designed Hive tables, schemas, and the overall approach for processing data in Hive.
• Analyzed and processed data according to the needs of each insight.
• Developed aggregation logic in Spark (Scala) to calculate insight results (see the sketch after this list).
• Applied performance improvement techniques such as partitioning, bucketing, and the Parquet file format.
• Developed various HQL scripts and UNIX scripts for insights.
• Developed UNIX scripts to automate the project.
• Worked with the testing team to resolve the defects they raised.
• Tuned Spark Scala jobs in various ways to run against high data volumes.
• Developed solutions to Big Data problems using common tools found in the ecosystem.
• Developed solutions for real-time and offline event collection from various systems.
• Developed, maintained, and performed analysis within a real-time architecture supporting large amounts of data from various sources.
• Analyzed massive amounts of data and helped drive prototype ideas for new tools and products.
• Designed, built, and supported APIs and services exposed to other internal teams.
• Employed rigorous continuous delivery practices managed under an agile software development approach.
• Ensured a quality transition to production and solid production operation of the software.
• Practiced test-driven development, test automation, continuous integration, and deployment automation.
• Enjoy working with data - data analysis, data quality, reporting, and visualization.
• Good communicator, able to analyze and clearly articulate complex issues and technologies in an understandable and engaging way.
• Strong design and problem-solving skills, with a bias for architecting at scale.
• Adaptable, proactive, and willing to take ownership.
• Keen attention to detail and a high level of commitment.
• Good understanding of advanced mathematics, statistics, and probability.
• Experienced and comfortable working in agile/iterative development and delivery environments where requirements change quickly and the team must constantly adapt to moving targets.
• Worked with the systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters using agile methodology.
• Monitored multiple Hadoop cluster environments using Ganglia; monitored workload, job performance, and capacity planning using Cloudera Manager.
Environment: AWS (Core, Kinesis, IAM, S3/Glacier, Glue, DynamoDB, SQS, Step Functions, Lambda, API Gateway, Cognito, EMR, RDS/Aurora, CloudFormation, CloudWatch), Python, Scala/Java, Spark (batch, streaming, ML, performance tuning at scale), Hadoop, Hive, HiveQL, YARN, Pig, Sqoop, Ranger, real-time streaming, Kafka, Kinesis, Avro, Parquet, JSON, ORC, CSV, XML, NoSQL/SQL, microservice development, RESTful API development, CI/CD
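As a hedged illustration of the Spark Scala aggregation, partitioning, and Parquet work referenced above (not the actual project code), the sketch below computes a daily aggregate and writes it as a partitioned, Parquet-backed Hive table; database, table, and column names are hypothetical.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.functions._

object InsightAggregation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("insight-aggregation")
      .enableHiveSupport()
      .getOrCreate()

    // The source table is assumed to already exist in the Hive metastore.
    val txns = spark.table("staging.transactions")

    // Aggregation logic for one insight: daily totals and counts per account.
    val daily = txns
      .groupBy(col("account_id"), to_date(col("txn_ts")).as("txn_date"))
      .agg(sum("amount").as("total_amount"), count("*").as("txn_count"))

    // Partitioning by date plus Parquet keeps downstream insight queries pruned and columnar.
    daily.write
      .mode(SaveMode.Overwrite)
      .format("parquet")
      .partitionBy("txn_date")
      .saveAsTable("insights.daily_account_summary")

    spark.stop()
  }
}
```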
Skills: Agile Methodology, API Development, AWS, AWS CloudFormation, AWS S3, Big Data, CloudWatch, Continuous Deployment, Continuous Integration, Data Analysis, Data Engineering, Ganglia, Hadoop, Hive, Java, JSON, MongoDB, Performance Tuning, Pig, Python, Spark, SQL, Systems Engineering, UNIX, XML
Hadoop Developer
Information Technology
Oct 2017 - Jul 2018
Edison, NJ
Responsibilities:
• Worked with Hadoop ecosystem components such as HBase, Sqoop, ZooKeeper, Oozie, Hive, and Pig on the Cloudera Hadoop distribution.
• Developed Pig and Hive UDFs in Java to extend Pig and Hive, and wrote Pig scripts for sorting, joining, filtering, and grouping data.
• Worked with NoSQL databases such as HBase, creating HBase tables to load large sets of semi-structured data coming from various sources.
• Built programs in Spark for faster data processing than standard MapReduce programs.
• Developed Spark programs in Scala, created Spark SQL queries, and developed Oozie workflows for Spark jobs.
• Prepared Oozie workflows with Sqoop actions to migrate data from relational databases such as Oracle and Teradata to HDFS.
• Created Hive tables, dynamic partitions, and buckets for sampling, and worked on them using HiveQL.
• Used Sqoop to load data into HBase and Hive.
• Wrote Hive queries to analyze the data and generate end reports for business users.
• Worked on scalable distributed computing systems, software architecture, data structures, and algorithms using Hadoop, Apache Spark, and Apache Storm, and ingested streaming data into Hadoop using Spark, the Storm framework, and Scala.
• Good experience with NoSQL databases such as MongoDB.
• Wrote Spark Python code for the model integration layer.
• Experienced in handling large datasets using Spark's in-memory capabilities, broadcast variables, effective and efficient joins, transformations, and other features.
• Developed Spark code and Spark SQL/Streaming jobs for faster testing and processing of data (see the sketch after this list).
• Used Spark for interactive queries, processing of streaming data, and integration with popular NoSQL databases for high data volumes.
• Developed a data pipeline using Kafka, HBase, Spark on Mesos, and Hive to ingest, transform, and analyze customer behavioral data.
Environment: Hadoop, HDFS, CDH, Pig, Hive, Oozie, ZooKeeper, HBase, Spark, Storm, Spark SQL, NoSQL, Scala, Kafka, Mesos, MongoDB
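One possible shape of the Spark SQL/Streaming and Kafka pipeline described above, as a minimal Structured Streaming sketch rather than the project's actual code; the broker address, topic, schema, and paths are assumptions, and the spark-sql-kafka connector is assumed to be on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object ClickstreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("clickstream-ingest").getOrCreate()
    import spark.implicits._

    // Hypothetical event schema for the JSON payloads on the topic.
    val schema = new StructType()
      .add("user_id", StringType)
      .add("page", StringType)
      .add("ts", TimestampType)

    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "clickstream")
      .load()

    // Parse the Kafka value bytes into typed columns.
    val events = raw
      .select(from_json($"value".cast("string"), schema).as("e"))
      .select("e.*")

    // Land the stream as Parquet files that an external Hive table can be defined over.
    events.writeStream
      .format("parquet")
      .option("path", "/warehouse/clickstream")
      .option("checkpointLocation", "/checkpoints/clickstream")
      .start()
      .awaitTermination()
  }
}
```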
Skills: Apache, Data Integration, Hadoop, Hadoop Developer, HBase, HDFS, Hive, Java, MapReduce, MongoDB, Oozie, Oracle, Pig, Python, Spark, SQL, Sqoop, Teradata
Hadoop Developer
Information Technology
Aug 2016 - Sep 2017
Oak Brook, IL
Responsibilities:
• Participated in discussions with business users to gather the required knowledge.
• Analyzed the requirements to develop the framework.
• Designed and developed the architecture for a data services ecosystem spanning relational, NoSQL, and Big Data technologies.
• Loaded and transformed large sets of structured, semi-structured, and unstructured data using Hadoop/Big Data concepts.
• Developed Java Spark streaming scripts to load raw files and the corresponding processed metadata files into AWS S3 and an Elasticsearch cluster.
• Developed Python scripts to retrieve the most recent S3 keys from Elasticsearch.
• Developed Python scripts to fetch S3 files using the Boto3 module.
• Implemented PySpark logic to transform and process data in various formats such as XLSX, XLS, JSON, and TXT.
• Built scripts to load PySpark-processed files into Redshift and applied a variety of PySpark transformations (see the sketch after this list).
• Developed scripts to monitor and capture the state of each file as it moved through the pipeline.
• Designed and developed a real-time stream processing application using Spark, Kafka, Scala, and Hive to perform streaming ETL and apply machine learning.
• Developed MapReduce programs to cleanse the data in HDFS obtained from heterogeneous data sources.
• Scheduled the Oozie workflow engine to run multiple Hive and Pig jobs, and used Oozie operational services for batch processing and dynamic workflow scheduling.
• Migrated existing applications and developed new applications using AWS cloud services.
• Worked with data investigation, discovery, and mapping tools to scan data records from many sources.
• Implemented shell scripts to automate the whole process.
• Extracted data from SQL Server to create automated visualization reports and dashboards in Tableau.
• Responsible for cluster maintenance, adding and removing cluster nodes, cluster monitoring and troubleshooting, and managing and reviewing data backups and log files.
Environment: AWS S3, Java, Maven, Python, Spark, Kafka, Elasticsearch, MapR cluster, Amazon Redshift, shell scripting, Boto3, pandas, certifi, PySpark, Pig, Hive, Oozie, JSON
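A rough sketch of the S3-to-Redshift load step mentioned above. The actual work used PySpark; this is the equivalent idea in Scala to keep all examples in one language, and the bucket, table, JDBC URL, and credential handling are all placeholders rather than the project's values.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object S3ToRedshift {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("s3-to-redshift").getOrCreate()

    // Assumes the hadoop-aws / s3a connector is configured on the cluster.
    val records = spark.read.json("s3a://example-bucket/processed/")

    // Plain JDBC write; the Redshift JDBC driver must be on the classpath.
    records.write
      .mode(SaveMode.Append)
      .format("jdbc")
      .option("url", "jdbc:redshift://example-cluster:5439/dev")
      .option("dbtable", "analytics.processed_records")
      .option("user", sys.env("REDSHIFT_USER"))
      .option("password", sys.env("REDSHIFT_PASSWORD"))
      .option("driver", "com.amazon.redshift.jdbc42.Driver")
      .save()

    spark.stop()
  }
}
```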
Skills: AWS, Big Data, Database Backups, Elasticsearch, ETL, Hadoop, Hadoop Developer, HDFS, Hive, Java, JSON, Machine Learning, MapReduce, Maven, Metadata, MongoDB, Oozie, Pig, Python, Shell Scripts, Spark, SQL, SQL Server, Tableau, Application Development
Hadoop Developer
Information Technology
Jan 2016 - Jul 2016
Boston, MA
Responsibilities:
• Developed simple to complex MapReduce jobs in Java for processing and validating data.
• Developed a data pipeline using Sqoop, Spark, MapReduce, and Hive to ingest, transform, and analyze customer behavioral data.
• Exported analyzed data to relational databases using Sqoop for visualization and report generation for the BI team.
• Implemented Spark jobs in Python with Spark SQL for faster data processing and algorithms for real-time analysis in Spark.
• Used Spark for interactive queries, processing of streaming data, and integration with popular NoSQL databases for high data volumes.
• Used the Spark-Cassandra Connector to load data to and from Cassandra (see the sketch after this list), and streamed data in real time using Spark with Kafka.
• Developed Kafka producers and consumers in Java, integrated them with Apache Storm, and ingested data into HDFS and HBase by implementing the rules in Storm.
• Built a prototype for real-time analysis using Spark Streaming and Kafka.
• Moved log files generated from various sources to HDFS for further processing through Flume.
• Created Hive tables, worked on them using HiveQL, and performed data analysis using Hive and Pig.
• Developed Oozie workflows to manage and schedule jobs on the Hadoop cluster, triggering daily, weekly, and monthly batch cycles.
• Experience with job workflow scheduling and monitoring tools such as Oozie and ZooKeeper.
• Expertise in extending Hive and Pig core functionality by writing custom user-defined functions (UDFs).
• Used Impala to pull data from Hive tables.
• Worked on Apache Flume to collect and aggregate large amounts of log data and stored it on HDFS for further analysis.
• Created and developed end-to-end data ingestion onto Hadoop.
• Involved in the architecture and design of a distributed time-series database platform using NoSQL technologies such as Hadoop/HBase and ZooKeeper.
• Integrated NoSQL databases such as HBase with MapReduce to move bulk data into HBase.
• Efficiently wrote data to and fetched data from HBase by writing MapReduce jobs.
Environment: Hadoop, Kafka, Spark, Sqoop, Spark SQL, Spark Streaming, Hive, Scala, Pig, NoSQL, Impala, Oozie, HBase, ZooKeeper
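A minimal sketch of reading from and writing back to Cassandra with the DataStax spark-cassandra-connector, as referenced in the Spark-Cassandra Connector bullet above; the connection host, keyspace, and table names are placeholders, not the project's.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object CassandraRoundTrip {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("cassandra-round-trip")
      .config("spark.cassandra.connection.host", "cassandra-host")   // placeholder host
      .getOrCreate()

    // Read an existing Cassandra table into a DataFrame via the connector's data source.
    val events = spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "analytics", "table" -> "user_events"))
      .load()

    // ...transformations would go here...

    // Write the result back to another Cassandra table.
    events.write
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "analytics", "table" -> "user_events_daily"))
      .mode(SaveMode.Append)
      .save()

    spark.stop()
  }
}
```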
Skills: Apache, Data Analysis, Data Integration, Flume, Hadoop, Hadoop Developer, HBase, HDFS, Hive, Impala, Java, MapReduce, MongoDB, Oozie, Pig, Python, Spark, SQL, Sqoop
Information Technology
Jun 2012 - Jul 2015
IN
Responsibilities:
• Identified system requirements, developed system specifications, and was responsible for high-level design and development of use cases.
• Involved in designing database connections using JDBC (see the sketch after this list).
• Organized and participated in meetings with clients and team members.
• Developed a web-based Bristow application using J2EE (Spring MVC framework), POJOs, JSP, JavaScript, HTML, jQuery, and business classes and queries to retrieve data from the backend.
• Developed client-side validation techniques using jQuery.
• Worked with Bootstrap to develop responsive web pages.
• Implemented client-side and server-side data validations using JavaScript.
• Responsible for customizing the data model for new applications using Hibernate ORM; involved in implementing DAOs and DTOs using Spring with Hibernate ORM.
• Implemented Hibernate as the ORM layer for transactions against the MySQL database.
• Developed authentication and access control services for the application using Spring LDAP.
• Experience in event-driven applications using AJAX, object-oriented JavaScript, JSON, and XML; good knowledge of developing asynchronous applications using jQuery; valuable experience with form validation using regular expressions and jQuery Lightbox.
• Used MySQL for the EIS layer.
• Involved in the design and development of the UI using HTML, JavaScript, and CSS.
• Designed and developed various data-gathering forms using HTML, CSS, JavaScript, JSP, and Servlets.
• Developed user interface modules using JSP, Servlets, and the MVC framework.
• Experience implementing J2EE standards and MVC2 architecture using the Struts framework.
• Developed J2EE components in the Eclipse IDE.
• Used JDBC to invoke stored procedures and for database connectivity to SQL.
• Deployed the applications on the Tomcat application server.
• Developed RESTful web services using JSON.
• Created JavaBeans accessed from JSPs to transfer data across tiers.
• Performed database modifications using SQL, PL/SQL, stored procedures, triggers, and views in Oracle 9i.
Environment: Java, JSP, Servlets, JDBC, Eclipse, web services, Spring 3.0, Hibernate 3.0, MySQL, JSON, Struts, HTML, JavaScript, CSS
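The JDBC and stored-procedure bullets above can be illustrated with a short sketch. The original work was in Java; this example is written in Scala (using the same java.sql API) only to keep all examples in one language, and the connection URL, credentials, and procedure signature are hypothetical.

```scala
import java.sql.DriverManager

object CallStoredProc {
  def main(args: Array[String]): Unit = {
    // Placeholder Oracle connection; the password is read from the environment.
    val conn = DriverManager.getConnection(
      "jdbc:oracle:thin:@db-host:1521:ORCL", "app_user", sys.env("DB_PASSWORD"))
    try {
      // {call ...} is the standard JDBC escape syntax for invoking stored procedures.
      val stmt = conn.prepareCall("{call get_order_total(?, ?)}")
      stmt.setLong(1, 1001L)                                  // IN parameter: order id
      stmt.registerOutParameter(2, java.sql.Types.NUMERIC)    // OUT parameter: total
      stmt.execute()
      println(s"order total = ${stmt.getBigDecimal(2)}")
      stmt.close()
    } finally {
      conn.close()
    }
  }
}
```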
Education
Master's degree, Texas, 2015 - 2016
Minor: Bachelor's
Skills
Hadoop: 4 years (last used 2021)
Hive: 4 years (last used 2021)
Java: 4 years (last used 2021)
MongoDB: 4 years (last used 2021)
Pig: 4 years (last used 2021)
Python: 4 years (last used 2021)
Spark: 4 years (last used 2021)
SQL: 4 years (last used 2021)
AWS: 3 years (last used 2021)
Big Data: 3 years (last used 2021)
JSON: 3 years (last used 2021)
Data Analysis: 2 years (last used 2021)
Hadoop Developer: 2 years (last used 2018)
HDFS: 2 years (last used 2018)
MapReduce: 2 years (last used 2018)
Oozie: 2 years (last used 2018)
Agile Methodology: 1 year (last used 2021)
Apache: 1 year (last used 2018)
API Development: 1 year (last used 2021)
Application Development: 1 year (last used 2017)
AWS CloudFormation: 1 year (last used 2021)
AWS S3: 1 year (last used 2021)
CloudWatch: 1 year (last used 2021)
Continuous Deployment: 1 year (last used 2021)
Continuous Integration: 1 year (last used 2021)
Data Engineering: 1 year (last used 2021)
Data Integration: 1 year (last used 2018)
Database Backups: 1 year (last used 2017)
Elasticsearch: 1 year (last used 2017)
ETL: 1 year (last used 2017)
Ganglia: 1 year (last used 2021)
HBase: 1 year (last used 2018)
Machine Learning: 1 year (last used 2017)
Maven: 1 year (last used 2017)
Metadata: 1 year (last used 2017)
Performance Tuning: 1 year (last used 2021)
Shell Scripts: 1 year (last used 2017)
SQL Server: 1 year (last used 2017)
Sqoop: 1 year (last used 2018)
Systems Engineering: 1 year (last used 2021)
Tableau: 1 year (last used 2017)
UNIX: 1 year (last used 2021)
XML: 1 year (last used 2021)
Apache NiFi: 1 year
AWS EC2: 1 year
Data Warehousing: 1 year
Design Patterns: 1 year
Flume: 1 year (last used 2016)
Impala: 1 year (last used 2016)
Jenkins: 1 year
Linux: 1 year
Netezza: 1 year
Oracle: 1 year (last used 2018)
Scripting: 1 year
SVN: 1 year
Teradata: 1 year (last used 2018)
Web Services: 1 year