Build A Near Real-Time Twitter Streaming Analytical Pipeline From Scratch Using Spark and AWS

Posted on Aug 20, 2017

 

Introduction:
This blog is about the technical implementation of the streaming analysis pipeline in our capstone project: Creating a Real-time Streaming Analytics Platform to manage a social media marketing campaign. It details the process from establishing the data streaming producer on AWS EC2 to sending the analysis results to AWS RDS.

Components:

The backend pipeline consists of 4 components:
1. Streaming data producer (Twitter Streaming API deployed on AWS EC2)
2. Streaming pipeline (AWS Kinesis): Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information.
3. Analytical engine: Apache Spark (Batch processing)
4. Data warehouse: AWS RDS

 

1. Streaming data producer:

 

Step 1.1. Create an EC2 spot instance: Spot instances enable you to bid on unused EC2 capacity, which can lower your Amazon EC2 costs significantly. You can create your own spot instance by following this video. We used the m4.xlarge instance type on an Ubuntu server in the us-east-2 (Ohio) region. (The market price is more stable in the Ohio region, around $0.027/hr.) Note that you should always back up the files on your spot instance, as it will shut down when the market price exceeds your maximum bid price. If you don't want your service interrupted, a normal EC2 instance might be a better choice for you.

Step 1.2. Log in to your EC2 instance:

1.2.1. Once your EC2 instance is ready and running, download your keyname.pem to your local machine and move it to a specific folder. (Keep your keyname.pem private and safe.) Navigate to that folder and restrict the .pem file permissions so that only you can read it (for example, chmod 400 keyname.pem).

1.2.2. To connect to your EC2 instance over SSH, execute the following bash command:

You need to find your instance information in the EC2 management console and replace [email protected] with your Public DNS (IPv4) information.
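A minimal sketch of the login command, assuming the key file is named keyname.pem and the login user is ubuntu (the default for Ubuntu AMIs); the host name below is a placeholder for your instance's Public DNS:

```bash
# Placeholder host name -- replace with the Public DNS (IPv4) shown in the EC2 console
ssh -i keyname.pem ubuntu@ec2-XX-XX-XX-XX.us-east-2.compute.amazonaws.com
```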

 

Step 1.3. Setup EC2 Environment:

1.3.1. Install JAVA:

Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS), and it's easy to run locally on one machine. All you need is to have Java installed on your system PATH, or the JAVA_HOME environment variable pointing to a Java installation. https://gist.github.com/weizhou2273/f89c71f24f1474ec0e546a5b34a72597
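The gist linked above contains the install script we used. As a rough sketch, installing Java 8 on Ubuntu can look like the following (OpenJDK is shown here for illustration; the Oracle build in the output below may require a different package source):

```bash
# Install Java 8 on Ubuntu (OpenJDK shown here for illustration)
sudo apt-get update
sudo apt-get install -y openjdk-8-jdk
```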
After installing successfully, you should see the following information by running java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)

1.3.2. Install Python:

The data producer is written in Python.
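A minimal sketch, assuming an Ubuntu server where Python 2.7 is already present and only pip and the development headers need to be added:

```bash
# Install pip and the Python development headers on Ubuntu
sudo apt-get update
sudo apt-get install -y python-pip python-dev
```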

1.3.3. Install Python packages:

We also used a Google Cloud API to analyze Twitter sentiment in real time. You can follow this link to install the gcloud SDK.
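The exact package list isn't shown in the post; as an assumption, the producer and the Kinesis helper scripts described below would need at least the following:

```bash
# Python packages used by the data producer and the Kinesis helper scripts (assumed list)
pip install tweepy boto3 google-cloud-language
```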

 

Step 1.4. Create / Delete AWS Kinesis streaming pipeline using python:

1.4.1. Create and launch AWS Kinesis pipeline:

The following Python code will establish a Kinesis pipeline based on the search input. The streaming data will be saved in your S3 bucket (my S3 bucket name is 'project4capstone3') under a folder named after the search input.
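A minimal sketch of such a script, assuming boto3 and a pre-existing IAM role that allows Firehose to write to S3 (the role ARN below is a placeholder):

```python
# create_stream.py -- minimal sketch; bucket name and role ARN are placeholders
import sys
import boto3

def create_stream(search_input,
                  bucket='project4capstone3',
                  role_arn='arn:aws:iam::YOUR_ACCOUNT_ID:role/firehose_delivery_role'):
    """Create a Kinesis Firehose delivery stream that writes raw tweets to S3
    under a folder named after the search input."""
    firehose = boto3.client('firehose', region_name='us-east-2')
    firehose.create_delivery_stream(
        DeliveryStreamName=search_input,
        S3DestinationConfiguration={
            'RoleARN': role_arn,
            'BucketARN': 'arn:aws:s3:::' + bucket,
            'Prefix': search_input + '/',   # folder named after the search input
            'BufferingHints': {'SizeInMBs': 5, 'IntervalInSeconds': 60},
            'CompressionFormat': 'UNCOMPRESSED',
        }
    )
    print('Created delivery stream: ' + search_input)

if __name__ == '__main__':
    create_stream(sys.argv[1])
```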

 

1.4.2. Delete AWS Kinesis pipeline:

Run python delete_stream.py stream_name in your terminal to terminate the Kinesis pipeline.
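A minimal sketch of delete_stream.py, assuming boto3 and taking the stream name as a command-line argument:

```python
# delete_stream.py -- minimal sketch
import sys
import boto3

def delete_stream(stream_name):
    """Delete the Kinesis Firehose delivery stream with the given name."""
    firehose = boto3.client('firehose', region_name='us-east-2')
    firehose.delete_delivery_stream(DeliveryStreamName=stream_name)
    print('Deleted delivery stream: ' + stream_name)

if __name__ == '__main__':
    delete_stream(sys.argv[1])
```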

 

Step 1.5. Establish Data producer:
After setting up the pipeline, it's time to configure the data producer. We used the Twitter Streaming API to capture tweets in real time. To get a better idea of how to use the Streaming API and Tweepy, you can check here.
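A minimal sketch of what data_producer.py can look like, assuming Tweepy for the Twitter connection and boto3 for pushing records to Firehose; the API credentials, stream name, and search term are placeholders, and the real producer's call to the Google sentiment API is omitted here:

```python
# data_producer.py -- minimal sketch; credentials, stream name, and search term are placeholders
import json
import boto3
import tweepy

CONSUMER_KEY = 'YOUR_CONSUMER_KEY'
CONSUMER_SECRET = 'YOUR_CONSUMER_SECRET'
ACCESS_TOKEN = 'YOUR_ACCESS_TOKEN'
ACCESS_SECRET = 'YOUR_ACCESS_SECRET'

firehose = boto3.client('firehose', region_name='us-east-2')


class TweetListener(tweepy.StreamListener):
    """Forward each incoming tweet to the Kinesis Firehose delivery stream."""

    def __init__(self, stream_name):
        super(TweetListener, self).__init__()
        self.stream_name = stream_name

    def on_data(self, data):
        tweet = json.loads(data)
        if 'text' in tweet:
            firehose.put_record(
                DeliveryStreamName=self.stream_name,
                Record={'Data': json.dumps(tweet) + '\n'}
            )
        return True

    def on_error(self, status_code):
        # Stop the stream on errors such as rate limiting (420)
        return False


if __name__ == '__main__':
    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
    stream = tweepy.Stream(auth, TweetListener('twitter_stream'))
    stream.filter(track=['your_search_term'])
```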

Run data_producer.py by executing python data_producer.py in your terminal.

Step 1.6 Check the condition of your Kinesis pipeline
If everything works well, you should see the following line chart in your Kinesis Firehose management console.

 

2. Spark analysis (Batch processing)

Step 2.1. Configure your spark environment on EC2:

2.1.1. Download and unzip Apache Spark:
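A minimal sketch, assuming the Spark 2.2.0 package pre-built for Hadoop 2.7 (the version you download may differ):

```bash
# Download and unpack a pre-built Spark distribution
wget https://archive.apache.org/dist/spark/spark-2.2.0/spark-2.2.0-bin-hadoop2.7.tgz
tar -xzf spark-2.2.0-bin-hadoop2.7.tgz
```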

 

2.1.2. Initiate Pyspark:
Run the following bash code in your terminal to test your Spark installation.
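A minimal sketch, assuming Spark was unpacked in the home directory as in the previous step:

```bash
# Launch the interactive PySpark shell
cd ~/spark-2.2.0-bin-hadoop2.7
./bin/pyspark
```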

If PySpark initiates successfully, you should see the following message:

 

Step 2.2. Establish RDS instance (Data Warehouse):

 

2.2.1. Create an RDS instance by following the steps in this link. If the RDS instance launches successfully, you should see the following information.

2.2.2. Connect RDS to MySQL Workbench: You will need the endpoint, as well as a username and password, to connect RDS to MySQL Workbench. MySQL Workbench can be downloaded via this link.

 

 

2.2.3. Download and unzip the MySQL JDBC driver:

The MySQL JDBC driver is necessary to save your Spark analysis results to RDS.
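A minimal sketch; the connector version below is an assumption, and any recent MySQL Connector/J release should work:

```bash
# Download and unpack the MySQL Connector/J (JDBC) driver
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.39.tar.gz
tar -xzf mysql-connector-java-5.1.39.tar.gz
```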

 

Step 2.3. Run your PySpark script:

2.3.1. The following script shows how Spark reads data from S3, analyzes it, and saves the result to RDS.
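The original script isn't reproduced here; the sketch below shows the overall shape under a few assumptions: the raw tweets sit in the S3 folder created by Firehose, a simple aggregation stands in for the real analysis, and the RDS endpoint, database, table, and credentials are placeholders:

```python
# spark_analysis.py -- minimal sketch of the batch job
from pyspark.sql import SparkSession
from pyspark.sql.functions import count

spark = (SparkSession.builder
         .appName('TwitterBatchAnalysis')
         .getOrCreate())

# Read the raw tweets that Kinesis Firehose delivered to S3 (path is a placeholder)
tweets = spark.read.json('s3a://project4capstone3/twitter_stream/*/*/*/*/*')

# Example analysis: count tweets per language (stand-in for the real analysis)
lang_counts = tweets.groupBy('lang').agg(count('*').alias('tweet_count'))

# Save the result to the RDS MySQL instance through the JDBC driver
(lang_counts.write
    .format('jdbc')
    .option('url', 'jdbc:mysql://your-rds-endpoint:3306/your_database')
    .option('driver', 'com.mysql.jdbc.Driver')
    .option('dbtable', 'tweet_lang_counts')
    .option('user', 'your_username')
    .option('password', 'your_password')
    .mode('overwrite')
    .save())

spark.stop()
```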

 

2.3.2. Execute your PySpark script:
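A minimal sketch, assuming the Spark and JDBC driver locations from the earlier steps and a script named spark_analysis.py (both names are placeholders):

```bash
# Submit the batch job, passing the MySQL JDBC jar and the S3 connector package
~/spark-2.2.0-bin-hadoop2.7/bin/spark-submit \
  --jars ~/mysql-connector-java-5.1.39/mysql-connector-java-5.1.39-bin.jar \
  --packages org.apache.hadoop:hadoop-aws:2.7.3 \
  spark_analysis.py
```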

 

Step 2.4. Schedule Apache Spark using crontab:
Cron is a system daemon used to execute desired tasks (in the background) at designated times. A crontab file is a simple text file containing a list of commands meant to be run at specified times. It is edited using the crontab command. The commands in the crontab file (and their run times) are checked by the cron daemon, which executes them in the system background.

Run the following crontab command to schedule the Spark task:
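A minimal sketch: open your crontab with crontab -e and add an entry like the one below (the 10-minute interval, paths, and log file are assumptions):

```bash
# Run the Spark batch job every 10 minutes and append its output to a log file
*/10 * * * * /home/ubuntu/spark-2.2.0-bin-hadoop2.7/bin/spark-submit /home/ubuntu/spark_analysis.py >> /home/ubuntu/spark_cron.log 2>&1
```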

Congratulations! Now your first streaming analytical pipeline is finished!

 

3. Test your pipeline:
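The original test screenshots aren't reproduced here. One simple check, assuming the table name from the Spark sketch above, is to query RDS and confirm that new results keep arriving after each scheduled run:

```bash
# Query the RDS instance to confirm the Spark job is writing results (placeholders throughout)
mysql -h your-rds-endpoint -u your_username -p \
  -e "SELECT * FROM your_database.tweet_lang_counts LIMIT 10;"
```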

 

4. Acknowledgement

Teamwork acknowledgement: 

Guidance from Shu Yan.

About Author

William Zhou

William Zhou is quantitative thinker and deep learning enthusiast with a strong background in healthcare. After graduating from Soochow University in Pharmaceutical science, he obtained a MHA from Columbia University in Healthcare management. In the following 2 years,...
