MISSION 127

Project: Spark Installation and Jupyter Notebook Integration

At the beginning of this path, you learned that data can be stored in pandas dataframes, and later that it can also be stored in a database. While these storage structures work well for many datasets, they aren't suited to massive amounts of data: processing slows down considerably when you push big datasets through these tools.

In the last lesson, we introduced the Spark cluster computing framework and explored some basic PySpark methods, all within the Dataquest interface. 

In this project, we'll walk through how to install Spark on your own computer and integrate PySpark with Jupyter Notebook. We can use Spark in two modes:

  • Local mode - The entire Spark application runs on a single machine. Local mode is what you'll use to prototype Spark code on your own computer; it's also easier to set up.
  • Cluster mode - The Spark application runs across multiple machines. Cluster mode is what you'll use when you want to run your Spark application across multiple machines in a cloud environment like Amazon Web Services, Microsoft Azure, or DigitalOcean. The sketch after this list shows how the master setting differs between the two modes.
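
To make the difference concrete, here is a minimal sketch of how the master setting changes between the two modes when you create a SparkContext in PySpark. The app names and the cluster URL are illustrative placeholders, not values from this lesson.

    from pyspark import SparkContext

    # Local mode: the driver and executors all run on this machine.
    # "local[*]" tells Spark to use every available CPU core.
    sc = SparkContext(master="local[*]", appName="local-mode-sketch")
    print(sc.master)   # -> local[*]
    sc.stop()

    # Cluster mode, for comparison: point the context at a cluster manager
    # instead of the local machine. The URL below is a hypothetical
    # standalone-cluster address, shown only to illustrate the difference.
    # sc = SparkContext(master="spark://some-cluster-host:7077",
    #                   appName="cluster-mode-sketch")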

For this project, we'll walk through the instructions for installing Spark in local mode. Afterwards, you'll be able to use Spark with Jupyter Notebook for personal projects or when preparing for technical interviews for big data roles.

Objectives

  • Learn how to install Spark and PySpark.
  • Learn how to integrate PySpark with Jupyter Notebook (a common approach is sketched after this list).
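
As a preview of the Jupyter Notebook step, one common way to make PySpark importable from a notebook is the findspark package. The sketch below assumes that approach and is not necessarily the exact method this tutorial uses; the Spark path shown in the comment is a placeholder.

    # A common approach (one of several): install findspark with pip, then run
    # this in a notebook cell so Python can locate your Spark installation.
    import findspark
    findspark.init()   # or findspark.init("/path/to/spark") if SPARK_HOME isn't set

    import pyspark

    # Create a context in local mode and confirm the import worked.
    sc = pyspark.SparkContext(master="local[*]", appName="jupyter-check")
    print(sc.version)
    sc.stop()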

Mission Outline

1. Introduction
2. Java
3. Spark
4. PySpark Shell
5. Jupyter Notebook
6. Testing your Installation (a minimal sanity check is sketched below)
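
As a taste of the final step, a check like the one below confirms that everything is wired together. This is a sketch that assumes Spark and PySpark are already installed in local mode; the exact test in the tutorial may differ.

    from pyspark import SparkContext

    # Spin up a local-mode context, run a tiny map over an RDD, and check the result.
    sc = SparkContext(master="local[*]", appName="installation-test")
    rdd = sc.parallelize([1, 2, 3, 4, 5])
    print(rdd.map(lambda x: x * 2).collect())   # expected: [2, 4, 6, 8, 10]
    sc.stop()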

Course Info:

Intermediate

The median completion time for this course is 6 hours.

This course requires a premium subscription and includes five missions and one installation tutorial. It is the 31st course in the Data Scientist in Python path.
