This is a 6-week evening program providing a hands-on introduction to the Hadoop and Spark ecosystem of Big Data technologies. The course will cover these key components of Apache Hadoop: HDFS, MapReduce with streaming, Hive, Pig, and Spark. Programming will be done in Python. The course will begin with a review of the Python concepts needed for our examples. The course format is interactive, and students will need to bring laptops to class. We will do our work on AWS (Amazon Web Services); instructions will be provided ahead of time on how to obtain an account and connect to AWS.
What is Hadoop?
Hadoop is a set of open-source programs, running on computer clusters, that simplifies the handling of large amounts of data. Originally, Hadoop consisted of a distributed file system tuned for large data sets and an implementation of the MapReduce parallel-programming paradigm, but it has since expanded in many ways. It now includes database systems, languages for expressing parallelism, libraries for machine learning, its own job scheduler, and much more. Furthermore, MapReduce is no longer the only parallelism framework; Spark is an increasingly popular alternative. In summary, Hadoop is a popular and rapidly growing set of cluster-computing tools that is becoming essential for data scientists.
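As a minimal illustration of the MapReduce idea (in plain Python, not Hadoop itself), a word count can be expressed as a map phase that emits (word, 1) pairs and a reduce phase that sums the counts per word; in a real cluster, the framework distributes both phases across many machines:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Reduce: sum the counts for each word. (In Hadoop, the framework
    # groups the pairs by key between the two phases.)
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["the quick brown fox", "the lazy dog"]
word_counts = reduce_phase(map_phase(lines))
# word_counts["the"] is 2; every other word appears once
```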
To get the most out of the class, you need to be familiar with Linux file systems, the Linux command-line interface (CLI), and basic Linux commands such as cd, ls, cp, etc. You also need basic programming skills in Python and should be comfortable with a functional programming style, for example, using the map() function to split a list of strings into a nested list. Object-oriented programming (OOP) in Python is not required.
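The map() exercise mentioned above looks like this:

```python
# Split each string in a list into a list of words,
# producing a nested list.
lines = ["hello world", "big data"]
nested = list(map(str.split, lines))
# nested is [["hello", "world"], ["big", "data"]]
```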
Certificates are awarded at the end of the program upon satisfactory completion of the course.
Students are evaluated on a pass/fail basis for their performance on the required homework and final project (where applicable). Students who complete 80% of the homework and attend a minimum of 85% of all classes are eligible for the certificate of completion.
Unit 1 – Introduction: Hadoop, MapReduce, Python
- Overview of Big Data and the Hadoop ecosystem
- The concept of MapReduce
- HDFS – Hadoop Distributed File System
- Python for MapReduce
Unit 2 – MapReduce
- More Python for MapReduce
- Implementing MapReduce with Python streaming
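To preview Unit 2: with Hadoop streaming, the mapper and reducer are ordinary scripts that read lines from stdin and write tab-separated key/value lines to stdout. The sketch below shows that logic as two Python functions and simulates the map / shuffle-sort / reduce pipeline locally (in a real job, each function would be its own script passed to the streaming jar):

```python
import itertools

def mapper(lines):
    # Mapper: emit "word<TAB>1" for each word; Hadoop streaming
    # represents key/value pairs as tab-separated text lines.
    for line in lines:
        for word in line.split():
            yield f"{word.lower()}\t1"

def reducer(sorted_lines):
    # Reducer: its input arrives sorted by key, so consecutive lines
    # for the same word can be summed with itertools.groupby.
    parsed = (line.split("\t") for line in sorted_lines)
    for word, group in itertools.groupby(parsed, key=lambda kv: kv[0]):
        total = sum(int(count) for _, count in group)
        yield f"{word}\t{total}"

# Simulate the pipeline: map, then the shuffle/sort Hadoop performs
# between phases, then reduce.
lines = ["the quick brown fox", "the lazy dog"]
result = dict(line.split("\t") for line in reducer(sorted(mapper(lines))))
```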
Unit 3 – Hive: A database for Big Data
- Hive concepts, Hive query language (HiveQL)
- User-defined functions in Python (using streaming)
- Accessing Hive from Python
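To preview the Python-UDF topic in Unit 3: Hive's TRANSFORM clause streams each row to an external script as a tab-separated line on stdin. A minimal sketch of such a script's per-row logic, assuming a hypothetical table with (user_id, city) columns:

```python
def transform_row(line):
    # Hive streams each row as one tab-separated line; split out the
    # (hypothetical) user_id and city columns and upper-case the city.
    user_id, city = line.rstrip("\n").split("\t")
    return f"{user_id}\t{city.upper()}"

# In Hive, such a script would be wired in roughly as:
#   ADD FILE upper_city.py;
#   SELECT TRANSFORM(user_id, city) USING 'python upper_city.py'
#   AS (user_id, city) FROM visits;
row = transform_row("42\tnew york")
# row is "42\tNEW YORK"
```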
Unit 4 – Pig: A Platform for Analyzing Large Datasets Using MapReduce
- Intro to Apache Pig
- Data Types in Pig
- Pig Latin
- Compiling Pig to MapReduce
Unit 5 – Spark
- Intro to Spark using PySpark
- Basic Spark concepts: RDDs, transformations, actions
- PairRDDs and aggregating transformations
- Advanced Spark: partitions; shared variables
Unit 6 – Project Week
- Case studies/Final projects