Build a near real-time Twitter streaming analytics pipeline from scratch using Spark and AWS

Posted on Aug 20, 2017

 

Introduction:
This blog post describes the technical implementation of the streaming-analysis pipeline in our capstone project: a real-time streaming analytics platform for managing social media marketing campaigns. It details the process from establishing the streaming data producer on AWS EC2 to sending the analysis results to AWS RDS.

Components:

The backend pipeline consists of four components:
1. Streaming data producer: the Twitter Streaming API, deployed on AWS EC2.
2. Streaming pipeline: AWS Kinesis. Amazon Kinesis makes it easy to collect, process, and analyze real-time streaming data so you can get timely insights and react quickly to new information.
3. Analytical engine: Apache Spark (batch processing).
4. Data warehouse: AWS RDS.

 

1. Streaming data producer:

 

Step 1.1. Create an EC2 spot instance: Spot instances enable you to bid on unused EC2 capacity, which can lower your Amazon EC2 costs significantly. You can create your own spot instance by following this video. We used the m4.xlarge instance type on an Ubuntu server in the us-east-2 (Ohio) region, where the market price is more stable (around $0.027/hr). Note that you should always back up the files on a spot instance, as it will shut down whenever the market price exceeds your maximum bid. If you don't want your service interrupted, a regular EC2 instance might be a better choice.

Step 1.2. Log in to your EC2 instance:

1.2.1. Once your EC2 instance is up and running, download keyname.pem to your local machine and move it to a specific folder. (Keep keyname.pem private and safe.) Navigate to that folder and restrict the pem file's permissions so that only you can read it.

1.2.2. To log in to your EC2 instance, execute the following bash command:

Find your instance information in the EC2 management console and replace the ubuntu@<Public DNS> part with your instance's Public DNS (IPv4).
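The login step above can be sketched as follows; the key file name and host are placeholders for your own values.

```shell
# restrict the key's permissions so ssh will accept it (read-only for the owner)
chmod 400 keyname.pem
# connect to the instance; substitute the Public DNS (IPv4) from the EC2 console
ssh -i keyname.pem ubuntu@ec2-xx-xx-xx-xx.us-east-2.compute.amazonaws.com
```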

 

Step 1.3. Setup EC2 Environment:

1.3.1. Install JAVA:

Spark runs on both Windows and UNIX-like systems (e.g., Linux, macOS), and it's easy to run locally on one machine. All you need is Java installed on your system PATH, or the JAVA_HOME environment variable pointing to a Java installation. https://gist.github.com/weizhou2273/f89c71f24f1474ec0e546a5b34a72597
After a successful installation, you should see the following information when running java -version:
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
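The gist linked above has the exact commands we used. As a rough sketch, installing Java 8 on Ubuntu can look like this (OpenJDK shown here as an assumption; the Oracle JDK produces the exact version string above):

```shell
sudo apt-get update
# install a Java 8 JDK (OpenJDK; the Oracle JDK is an alternative route)
sudo apt-get install -y openjdk-8-jdk
# verify the installation
java -version
```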

1.3.2. Install Python:

The data producer is written in Python.
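On a fresh Ubuntu server, a minimal Python setup might look like the following (Python 2 was current for this 2017 setup; adjust the package names for Python 3 if you prefer):

```shell
sudo apt-get update
# install the Python interpreter and pip (Ubuntu package names)
sudo apt-get install -y python python-pip
python --version
```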

1.3.3. Install Python packages:

We also used the Google Cloud Natural Language API to analyze tweet sentiment in real time. You can follow this link to install gcloud.
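The exact package list depends on your producer script; a plausible set for this pipeline (tweepy for the Streaming API, boto3 for Kinesis and S3, and the Google Cloud client for sentiment) would be:

```shell
# Twitter streaming client, AWS SDK, and Google Cloud Natural Language client
pip install tweepy boto3 google-cloud-language
```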

 

Step 1.4. Create / Delete AWS Kinesis streaming pipeline using python:

1.4.1. Create and launch AWS Kinesis pipeline:

The following Python code establishes a Kinesis Firehose delivery stream based on the search input. The streaming data will be saved in your S3 bucket (my bucket is named 'project4capstone3') under a folder named after the search input.
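The original script was embedded as a gist; a minimal sketch of the same idea with boto3 is below. The stream naming, region, and IAM role argument are my assumptions, not the exact script; it requires AWS credentials to be configured on the instance.

```python
# Sketch of create_stream.py (illustrative names, not the authors' exact script).

BUCKET = "project4capstone3"  # the S3 bucket from this post

def stream_config(keyword, bucket=BUCKET):
    """Build the Firehose delivery-stream name and S3 prefix for a search keyword."""
    name = "twitter-" + keyword.lower()
    prefix = keyword.lower() + "/"  # tweets land in a folder named after the keyword
    return name, prefix

def create_stream(keyword, role_arn):
    import boto3  # assumed installed (pip install boto3)
    name, prefix = stream_config(keyword)
    firehose = boto3.client("firehose", region_name="us-east-2")
    firehose.create_delivery_stream(
        DeliveryStreamName=name,
        S3DestinationConfiguration={
            "RoleARN": role_arn,  # IAM role that lets Firehose write to S3
            "BucketARN": "arn:aws:s3:::" + BUCKET,
            "Prefix": prefix,
        },
    )
    return name

if __name__ == "__main__":
    import sys
    create_stream(sys.argv[1], sys.argv[2])
```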

 

1.4.2. Delete AWS Kinesis pipeline:

Run python delete_stream.py stream_name in your terminal to terminate the Kinesis pipeline.
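A sketch of what delete_stream.py might look like, assuming the same boto3 setup as the creation script (the region and argument handling are illustrative):

```python
# Sketch of delete_stream.py -- the stream name comes from the command line.

def delete_stream(name):
    import boto3  # assumed installed (pip install boto3)
    firehose = boto3.client("firehose", region_name="us-east-2")
    firehose.delete_delivery_stream(DeliveryStreamName=name)

if __name__ == "__main__":
    import sys
    delete_stream(sys.argv[1])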

 

Step 1.5. Establish Data producer:
After setting up the pipeline, it's time to configure the data producer. We used the Twitter Streaming API to capture tweets in real time. To get a better idea of how to use the Streaming API and Tweepy, you can check here.

Run your data producer by executing python data_producer.py in your terminal.
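The original data_producer.py is not shown above, so here is a hedged sketch of the usual tweepy-plus-Firehose pattern it would follow; the stream name, chosen fields, and helper names are my assumptions.

```python
import json

STREAM_NAME = "twitter-trump"  # illustrative; match the Firehose stream you created

def tweet_to_record(status):
    """Flatten the fields we analyze into one JSON line for Firehose."""
    record = {
        "id": status.get("id_str"),
        "text": status.get("text", ""),
        "created_at": status.get("created_at"),
        "followers": status.get("user", {}).get("followers_count", 0),
    }
    return (json.dumps(record) + "\n").encode("utf-8")

def run(keyword, auth):
    # Third-party imports kept local so the pure helper above imports anywhere.
    import boto3
    import tweepy
    firehose = boto3.client("firehose", region_name="us-east-2")

    class Listener(tweepy.StreamListener):
        def on_data(self, data):
            status = json.loads(data)
            firehose.put_record(
                DeliveryStreamName=STREAM_NAME,
                Record={"Data": tweet_to_record(status)},
            )
            return True

    tweepy.Stream(auth=auth, listener=Listener()).filter(track=[keyword])
```

The newline appended to each record matters: Firehose concatenates records in S3, and the newline keeps them readable as JSON lines for Spark later.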

Step 1.6. Check the condition of your Kinesis pipeline
If everything works well, you should see a line chart like the following under your Kinesis Firehose management console.

 

2. Spark analysis (Batch processing)

Step 2.1. Configure your spark environment on EC2:

2.1.1. Download and unzip Apache Spark:
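The original commands were embedded as a gist; a sketch using the release that was current in mid-2017 (the exact version is my assumption) looks like:

```shell
# download a prebuilt Spark release (2.2.0 was current in mid-2017; adjust as needed)
wget https://archive.apache.org/dist/spark/spark-2.2.0/spark-2.2.0-bin-hadoop2.7.tgz
tar -xzf spark-2.2.0-bin-hadoop2.7.tgz
```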

 

2.1.2. Initiate Pyspark:
Run the following bash code in a terminal to test your Spark installation.
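Assuming the folder layout from the download step above, launching the PySpark shell looks like:

```shell
cd spark-2.2.0-bin-hadoop2.7
./bin/pyspark
```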

If PySpark initiates successfully, you should see the following message:

 

Step 2.2. Establish RDS instance (Data Warehouse):

 

2.2.1. Create an RDS instance by following the steps in this link. If the RDS instance launches successfully, you should see the following information.

2.2.2. Connect RDS to MySQL Workbench: You will need the endpoint, along with a username and password, to connect RDS to MySQL Workbench. MySQL Workbench can be downloaded via this link.

 

 

2.2.3. Download and unzip the MySQL JDBC driver:

The MySQL JDBC driver is necessary to save your Spark analysis results to RDS.
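A sketch of fetching Connector/J from the MySQL download site; the driver version here is illustrative (5.1.x was current in 2017):

```shell
# download and unpack the MySQL Connector/J driver (version is illustrative)
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.42.tar.gz
tar -xzf mysql-connector-java-5.1.42.tar.gz
# note the jar's path; spark-submit will need it
ls mysql-connector-java-5.1.42/mysql-connector-java-5.1.42-bin.jar
```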

 

Step 2.3. Run your PySpark script:

2.3.1. The following script shows how Spark reads data from S3, analyzes it, and saves the results to RDS.
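The original script was embedded as a gist. As a hedged sketch of the same read-analyze-write shape, the job below counts hashtags in the collected tweets and appends the result to RDS over JDBC; the bucket path, table name, analysis, and the <rds-endpoint>/<username>/<password> placeholders are my assumptions, not the authors' exact code.

```python
# Sketch of the batch job -- bucket, table, and credential values are illustrative.

def top_hashtags(texts, n=10):
    """Pure helper: count hashtags across tweet texts (used by the job below)."""
    counts = {}
    for text in texts:
        for word in text.split():
            tag = word.lower().rstrip(".,!?")
            if tag.startswith("#") and len(tag) > 1:
                counts[tag] = counts.get(tag, 0) + 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:n]

def run_job():
    from pyspark.sql import SparkSession  # assumed available on the EC2 box

    spark = SparkSession.builder.appName("twitter-batch").getOrCreate()
    # read the JSON lines Firehose wrote under the keyword folder
    tweets = spark.read.json("s3a://project4capstone3/trump/*")
    texts = [row.text for row in tweets.select("text").collect()]
    result = spark.createDataFrame(top_hashtags(texts), ["hashtag", "count"])
    # append the results to RDS through the MySQL JDBC driver
    result.write.format("jdbc").options(
        url="jdbc:mysql://<rds-endpoint>:3306/capstone",  # RDS endpoint placeholder
        driver="com.mysql.jdbc.Driver",
        dbtable="hashtag_counts",
        user="<username>",
        password="<password>",
    ).mode("append").save()

if __name__ == "__main__":
    run_job()
```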

 

2.3.2. Execute your pyspark script:
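A typical submit command, pointing Spark at the JDBC jar downloaded earlier (the paths and script name are illustrative):

```shell
spark-2.2.0-bin-hadoop2.7/bin/spark-submit \
  --jars mysql-connector-java-5.1.42/mysql-connector-java-5.1.42-bin.jar \
  twitter_batch.py
```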

 

Step 2.4. Schedule Apache Spark using crontab:
Cron is a system daemon used to execute desired tasks (in the background) at designated times. A crontab file is a simple text file containing a list of commands meant to be run at specified times. It is edited using the crontab command. The commands in the crontab file (and their run times) are checked by the cron daemon, which executes them in the system background.

Run the following crontab command to schedule the Spark task:
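As an illustration, a crontab entry like the one below (added with crontab -e) runs the batch job every 10 minutes; the schedule and paths are hypothetical, not our exact setup:

```shell
# m  h  dom mon dow  command
*/10 *  *   *   *    /home/ubuntu/spark-2.2.0-bin-hadoop2.7/bin/spark-submit /home/ubuntu/twitter_batch.py >> /home/ubuntu/spark_cron.log 2>&1
```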

Congratulations! Now your first streaming analytical pipeline is finished!

 

3. Test your pipeline:

 

4. Acknowledgement

Teamwork acknowledgement: 

Guidance from Shu Yan.

About Author


William Zhou

William Zhou is a quantitative thinker and deep learning enthusiast with a strong background in healthcare. After graduating from Soochow University in pharmaceutical science, he obtained an MHA in healthcare management from Columbia University. In the following 2 years,...
