Build A Near Real-Time Twitter Streaming Analytical Pipeline From Scratch Using Spark and AWS
Introduction:
This blog is about the technical implementation of the streaming analysis pipeline in our capstone project: Creating a Real-time Streaming Analytics Platform to manage social media marketing campaigns. It details the process from establishing the data streaming producer on AWS EC2 to sending the analysis results to AWS RDS.
Components:
The backend pipeline consists of four components:
1. Streaming data producer (Twitter Streaming API deployed on AWS EC2)
2. Streaming pipeline (AWS Kinesis): Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information.
3. Analytical engine: Apache Spark (Batch processing)
4. Data warehouse: AWS RDS
1. Streaming data producer:
Step 1.1. Create an EC2 spot instance: Spot instances enable you to bid on unused EC2 capacity, which can lower your Amazon EC2 costs significantly. You can create your own spot instance by following this video. We used an m4.xlarge instance running Ubuntu Server in the us-east-2 (Ohio) region, where the market price is more stable, around $0.027/hr. Note that you should always back up the files on your spot instance, because it will shut down when the market price rises above your maximum bid. If you don't want your service interrupted, a regular on-demand EC2 instance might be a better choice.
Step 1.2. Log in to your EC2 instance:
1.2.1. Once your EC2 instance is up and running, download your keyname.pem to your local machine and move it to a specific folder. (Keep your keyname.pem private and safe.) Navigate to that folder and restrict the file's permissions so that only you can read it (e.g. chmod 400 keyname.pem); SSH refuses private keys that are readable by others.
1.2.2. To connect to your EC2 instance over SSH, execute the following bash command:
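A minimal example, assuming an Ubuntu AMI and a key file named keyname.pem:
# connect to the instance using the key pair it was launched with
ssh -i keyname.pem ubuntu@ec2-XX-XX-XXX-XXX.us-east-X.compute.amazonaws.com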
You can find your instance information in the EC2 Management Console; replace ubuntu@ec2-XX-XX-XXX-XXX.us-east-X.compute.amazonaws.com with your instance's Public DNS (IPv4).
Step 1.3. Setup EC2 Environment:
1.3.1. Install JAVA:
Spark runs on both Windows and UNIX-like systems (e.g. Linux, macOS), and it's easy to run locally on one machine. All you need is Java installed on your system PATH, or the JAVA_HOME environment variable pointing to a Java installation. https://gist.github.com/weizhou2273/f89c71f24f1474ec0e546a5b34a72597
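The gist above contains the commands we used. As a minimal sketch on Ubuntu, you can install a Java 8 JDK via apt (the output below happens to come from Oracle JDK 8, which the gist installs instead of OpenJDK):
sudo apt-get update
sudo apt-get install -y openjdk-8-jdk   # installs Java 8 and puts java on the PATH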
After a successful installation, running java -version should print the following information:
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
1.3.2. Install Python:
The data producer is written in Python.
1.3.3. Install Python packages:
We also used the Google Cloud API to analyze Twitter sentiment in real time. You can follow this link to install gcloud.
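The exact requirements file isn't reproduced here; a sketch of the packages the following steps rely on (the names are assumptions based on the tools mentioned: tweepy for the Twitter Streaming API, boto3 for Kinesis/S3, and google-cloud-language for sentiment analysis):
pip install tweepy boto3 google-cloud-language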
Step 1.4. Create / Delete AWS Kinesis streaming pipeline using python:
1.4.1. Create and launch AWS Kinesis pipeline:
The following Python code establishes a Kinesis Firehose pipeline based on the search input. The streaming data will be saved in your S3 bucket (my S3 bucket name is 'project4capstone3') under a folder named after the search input.
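The original script isn't embedded here; a minimal sketch using boto3 (the region and the IAM role ARN are placeholders you would replace with your own):
import boto3

def create_stream(search_input, bucket='project4capstone3'):
    """Create a Kinesis Firehose delivery stream that drops records into S3."""
    firehose = boto3.client('firehose', region_name='us-east-2')
    firehose.create_delivery_stream(
        DeliveryStreamName=search_input,
        S3DestinationConfiguration={
            'RoleARN': 'arn:aws:iam::123456789012:role/firehose_delivery_role',  # placeholder
            'BucketARN': 'arn:aws:s3:::' + bucket,
            'Prefix': search_input + '/',  # folder named after the search input
        },
    )

if __name__ == '__main__':
    create_stream('your_search_term')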
1.4.2. Delete AWS Kinesis pipeline:
Run python delete_stream.py stream_name in your terminal to terminate the Kinesis pipeline.
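A sketch of what delete_stream.py might look like, assuming the stream name is passed as the first command-line argument:
import sys
import boto3

# Delete the Firehose delivery stream named on the command line
firehose = boto3.client('firehose', region_name='us-east-2')
firehose.delete_delivery_stream(DeliveryStreamName=sys.argv[1])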
Step 1.5. Establish Data producer:
After setting up the pipeline, it's time to configure the data producer. We used the Twitter Streaming API to capture tweets in real time. To get a better idea of how to use the Streaming API and Tweepy, you can check here.
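data_producer.py isn't reproduced in full; a minimal sketch (tweepy 3.x style, with placeholder credentials, stream name, and search term) that forwards each tweet to the Firehose stream created above:
import json
import boto3
from tweepy import OAuthHandler, Stream
from tweepy.streaming import StreamListener

STREAM_NAME = 'your_search_term'   # Firehose delivery stream from Step 1.4
firehose = boto3.client('firehose', region_name='us-east-2')

class TweetForwarder(StreamListener):
    def on_data(self, data):
        tweet = json.loads(data)
        record = json.dumps({'id': tweet.get('id_str'),
                             'created_at': tweet.get('created_at'),
                             'text': tweet.get('text')}) + '\n'
        # push each tweet into the Kinesis Firehose stream
        firehose.put_record(DeliveryStreamName=STREAM_NAME,
                            Record={'Data': record})
        return True

    def on_error(self, status):
        print(status)
        return False

if __name__ == '__main__':
    auth = OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')   # placeholders
    auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')   # placeholders
    Stream(auth, TweetForwarder()).filter(track=['your_search_term'])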
Run your data_producer.py by executing python data_producer.py in your terminal.
Step 1.6. Check the status of your Kinesis pipeline
If everything works well, you should see the following line chart in your Kinesis Firehose management console.
2. Spark analysis (Batch processing)
Step 2.1. Configure your spark environment on EC2:
2.1.1. Download and unzip Apache Spark:
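One way to do this on the EC2 instance (the Spark version here is an assumption; pick whichever prebuilt package you prefer):
wget https://archive.apache.org/dist/spark/spark-2.2.0/spark-2.2.0-bin-hadoop2.7.tgz
tar -xvzf spark-2.2.0-bin-hadoop2.7.tgz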
2.1.2. Initiate Pyspark:
Run the following bash code in a terminal to test your Spark installation.
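For example (the directory name depends on the Spark package you downloaded above):
cd spark-2.2.0-bin-hadoop2.7
./bin/pyspark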
If PySpark starts successfully, you should see the following message:
Step 2.2. Establish RDS instance (Data Warehouse):
2.2.1. Create an RDS instance by following the steps in this link. If the RDS instance launches successfully, you should see the following information.
2.2.2. Connect RDS to MySQL Workbench: You will need the endpoint, along with a username and password, to connect your RDS instance to MySQL Workbench. MySQL Workbench can be downloaded via this link.
2.2.3. Download and unzip the MySQL JDBC driver:
The MySQL JDBC driver (Connector/J) is necessary to save your Spark analysis results to RDS.
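A sketch of the download on the EC2 instance (the Connector/J version is an assumption; any 5.1.x release matches the JDBC driver class used below):
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.45.tar.gz
tar -xvzf mysql-connector-java-5.1.45.tar.gz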
Step 2.3. Run your PySpark script:
2.3.1. The following script shows how Spark reads data from S3, analyzes it, and saves the result to RDS.
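The full script isn't embedded here; a simplified sketch (the S3 folder, column names, RDS endpoint, database, table, and credentials are placeholders/assumptions, and reading s3a:// paths requires the hadoop-aws package plus AWS credentials to be available to Spark):
from pyspark.sql import SparkSession

# Point Spark at the MySQL JDBC driver downloaded in Step 2.2.3
spark = (SparkSession.builder
         .appName('twitter_batch_analysis')
         .config('spark.jars',
                 '/home/ubuntu/mysql-connector-java-5.1.45/mysql-connector-java-5.1.45-bin.jar')
         .getOrCreate())

# 1. Read the raw tweets Firehose delivered to S3 (date-partitioned under the Step 1.4 prefix)
tweets = spark.read.json('s3a://project4capstone3/your_search_term/*/*/*/*')

# 2. Analyze: for example, count tweets per sentiment label
summary = tweets.groupBy('sentiment').count()

# 3. Save the result to the RDS MySQL instance over JDBC
(summary.write
    .format('jdbc')
    .option('url', 'jdbc:mysql://your-rds-endpoint:3306/your_database')
    .option('driver', 'com.mysql.jdbc.Driver')
    .option('dbtable', 'tweet_sentiment_summary')
    .option('user', 'your_username')
    .option('password', 'your_password')
    .mode('overwrite')
    .save())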
2.3.2. Execute your PySpark script:
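For example, with spark-submit (the script name and jar path are assumptions):
./spark-2.2.0-bin-hadoop2.7/bin/spark-submit \
    --jars /home/ubuntu/mysql-connector-java-5.1.45/mysql-connector-java-5.1.45-bin.jar \
    spark_analysis.py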
Step 2.4. Schedule Apache Spark using crontab:
Cron is a system daemon used to execute desired tasks (in the background) at designated times. A crontab file is a simple text file containing a list of commands meant to be run at specified times. It is edited using the crontab command. The commands in the crontab file (and their run times) are checked by the cron daemon, which executes them in the system background.
Run the following crontab command to schedule the Spark task:
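For example, to run the batch job every ten minutes and log its output (paths and schedule are assumptions; adjust to your setup):
crontab -e
# then add a line such as:
*/10 * * * * /home/ubuntu/spark-2.2.0-bin-hadoop2.7/bin/spark-submit /home/ubuntu/spark_analysis.py >> /home/ubuntu/spark_cron.log 2>&1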
Congratulations! Now your first streaming analytical pipeline is finished!
3. Test your pipeline:
4. Acknowledgement
Teamwork acknowledgement:
Guidance from Shu Yan.