Analysis And Prediction Of Starbucks Store Per Capita

Randy Pantinople
Posted on Aug 15, 2020

Starbucks is the largest coffeehouse company in the world, with 27,339 retail locations in 5,469 cities across 73 countries.

The map shows countries with Starbucks locations; a darker shade indicates more stores. The United States has the most locations with 13,608 stores, followed by China with 2,734 and Canada with 1,468.

We are going to analyze Starbucks locations around the world and build a model to predict stores per capita for countries that do not have Starbucks stores. I used R for the code and a Shiny dashboard for the visualization. Data from Kaggle and the World Bank were merged for the analysis.
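The original analysis was done in R; a minimal pandas sketch of the merge step is shown below. The mini-tables and country codes are made up for illustration — the real sources were the Kaggle Starbucks dataset and World Bank indicators.

```python
import pandas as pd

# Toy stand-ins for the two real datasets (values illustrative only).
stores = pd.DataFrame({
    "country": ["US", "CN", "CA"],
    "num_store": [13608, 2734, 1468],
})
indicators = pd.DataFrame({
    "country": ["US", "CN", "CA"],
    "population": [332_000_000, 1_400_000_000, 37_000_000],
    "gdp_per_capita": [62_794, 9_771, 46_233],
})

# Inner join on the shared country key keeps only countries present in both.
merged = stores.merge(indicators, on="country", how="inner")
print(merged)
```

An inner join is the natural choice here, since a country needs both a store count and World Bank indicators to be usable in the model.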

Please check the Shiny dashboard here.

Exploration

We will begin by exploring some countries with Starbucks and which cities have the most locations. The type of ownership is important in Starbucks' international expansion: a joint-venture strategy helps the company enter a new country with the help of local partners who know the market well.

United States
- 13,608 stores
- 3,239 cities
- 41 stores per 1 million people
Japan
- 1,237 stores
- 366 cities
- 10 stores per 1 million people
United Kingdom
- 901 stores
- 348 cities
- 13 stores per 1 million people
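The per-capita figures above are simple ratios of store count to population in millions. A quick Python check (the populations here are approximate assumptions, not figures from the post):

```python
def stores_per_million(num_stores, population):
    """Stores per one million inhabitants."""
    return num_stores / (population / 1_000_000)

# Approximate populations (assumed for illustration).
print(round(stores_per_million(13_608, 332_000_000)))  # United States
print(round(stores_per_million(1_237, 126_500_000)))   # Japan
print(round(stores_per_million(901, 67_000_000)))      # United Kingdom
```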

ANALYSIS

Countries with Starbucks locations tend to have higher GDP per capita, larger populations, a median age around 32, and good ease-of-doing-business scores. These are the variables we are going to use to predict the number of stores per capita for each country.

Distribution

Since some observations have extreme values, we applied a log transformation to these variables. Most of the resulting distributions are nearly normal.
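The effect of the log transformation can be seen on a toy right-skewed sample (values here are illustrative, e.g. country populations spanning several orders of magnitude):

```python
import numpy as np

# Heavily right-skewed values spanning ~5 orders of magnitude.
pop = np.array([9.6e4, 5.4e6, 6.0e7, 3.3e8, 1.4e9])

# The natural log compresses the extreme values into a narrow range,
# pulling in the long right tail.
log_pop = np.log(pop)
print(log_pop.round(2))
```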

Correlation

We will see if there is a linear relationship between our target variable and predictor variables.

Population and ease-of-doing-business scores are highly correlated with the number of stores.

GDP per capita and median age have low correlation with the number of stores.
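The correlations were computed in R for the original post; the sketch below shows the same computation with NumPy. The numbers are toy data (assumed), so they will not reproduce the exact pattern reported above — only the method is illustrated.

```python
import numpy as np

# Toy log-transformed data for six hypothetical countries.
log_stores = np.log([13608, 2734, 1468, 1237, 901, 100])
log_pop    = np.log([3.3e8, 1.4e9, 3.7e7, 1.27e8, 6.7e7, 5e6])
log_gdp    = np.log([62794, 9771, 46233, 39290, 42491, 15000])

# Pearson correlation of each predictor with the target.
r_pop = np.corrcoef(log_stores, log_pop)[0, 1]
r_gdp = np.corrcoef(log_stores, log_gdp)[0, 1]
print(round(r_pop, 2), round(r_gdp, 2))
```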

Collinearity

We will use a pairwise graph to check the correlation among predictor variables. Ideally, we would like to have low-to-no multicollinearity. Population and median age have high correlation. We will monitor these variables as we select our model.
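A pairwise (correlation-matrix) check can be sketched as follows; the predictor values are assumed toy numbers, and the diagonal of the matrix is 1 by definition:

```python
import numpy as np

# Toy predictor matrix: rows = countries; columns = log(pop), log(gdp), median age.
X = np.array([
    [19.6, 11.0, 38.0],
    [21.1,  9.2, 37.0],
    [17.4, 10.7, 41.0],
    [18.7, 10.6, 47.0],
    [18.0, 10.7, 40.0],
])

# Pairwise Pearson correlations among the predictor columns.
# Large off-diagonal entries would signal multicollinearity.
corr = np.corrcoef(X, rowvar=False)
print(corr.round(2))
```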

MODEL SELECTION

We selected the model using backward elimination and forward selection, with adjusted R-squared as the criterion. Removing median age and the ease-of-doing-business score gives the highest adjusted R-squared, 67%.

Model:   log(num_store) = -24.7 + 0.89*log(pop) + 1.26*log(gdp)
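Subset selection by adjusted R-squared can be sketched as below. With only three predictors an exhaustive search covers the same space that backward elimination and forward selection explore greedily. The data are synthetic, built to mimic the final model's coefficients, so the chosen subset and score are illustrative only:

```python
import numpy as np
from itertools import combinations

def adj_r2(y, X):
    """Adjusted R-squared of an OLS fit with intercept."""
    n, p = X.shape
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

def best_subset(y, X, names):
    """Exhaustive search over predictor subsets, scored by adjusted R-squared."""
    best_score, best_names = -np.inf, []
    for k in range(1, X.shape[1] + 1):
        for cols in combinations(range(X.shape[1]), k):
            score = adj_r2(y, X[:, cols])
            if score > best_score:
                best_score, best_names = score, [names[c] for c in cols]
    return best_score, best_names

# Synthetic data mimicking the post's final model (assumed, for demonstration).
rng = np.random.default_rng(0)
n = 40
log_pop = rng.uniform(13, 21, n)
log_gdp = rng.uniform(7, 11, n)
median_age = rng.uniform(20, 45, n)          # unrelated to y by construction
y = -24.7 + 0.89 * log_pop + 1.26 * log_gdp + rng.normal(0, 0.5, n)

score, chosen = best_subset(y, np.column_stack([log_pop, log_gdp, median_age]),
                            ["log_pop", "log_gdp", "median_age"])
print(chosen, round(score, 3))
```

Because median_age carries no signal here, the adjusted R-squared penalty for an extra parameter tends to keep it out of the winning subset, mirroring how the real selection dropped median age and the business score.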


DIAGNOSTICS PLOTS

Using diagnostic plots, we check whether the model violates the assumptions of linearity, constant variance, independence, and normality. The plots show no major abnormalities in the residuals, so none of these assumptions appears to be violated.
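Two of those checks have simple numeric stand-ins: with an intercept in the model, OLS residuals should average to zero and show no correlation with the fitted values. A sketch on toy data (assumed, for illustration):

```python
import numpy as np

# Toy data from a genuinely linear model with constant-variance noise.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, 200)

# Fit OLS with an intercept and compute residuals.
A = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
fitted = A @ beta
resid = y - fitted

print(round(resid.mean(), 6))                      # ~0 when an intercept is fit
print(round(np.corrcoef(fitted, resid)[0, 1], 6))  # ~0: no pattern vs. fitted
```

In practice the plots remain essential — a residuals-vs-fitted plot can reveal curvature or funneling that these two summary numbers cannot.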

PREDICTION

We will use data from countries that do not have Starbucks stores. An exponential function is applied to invert the log transformation used on the variables.

Italy
- GDP per capita: $42,412.66
- Population: 60,479,424
- Prediction: 6 stores per 1 million people
Israel
- GDP per capita: $40,161.92
- Population: 8,627,444
- Prediction: 4 stores per 1 million people
Myanmar
- GDP per capita: $5,142.15
- Population: 54,335,948
- Prediction: 2 stores per 1 million people
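The back-transformation step can be sketched as below, plugging Italy's figures into the fitted equation. Note that the coefficients quoted in the model are rounded, so this sketch will not reproduce the per-million predictions above exactly; it only illustrates the exp() inversion of the log model.

```python
import math

def predict_stores(population, gdp_per_capita):
    """Invert the log model: exponentiate the fitted linear predictor.

    Uses the rounded coefficients quoted in the post, so the result is
    only a rough approximation of the dashboard's predictions.
    """
    log_n = -24.7 + 0.89 * math.log(population) + 1.26 * math.log(gdp_per_capita)
    return math.exp(log_n)

italy_total = predict_stores(60_479_424, 42_412.66)
print(round(italy_total))                     # predicted total stores
print(round(italy_total / 60.479424, 1))      # predicted stores per 1M people
```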

For simplicity, I used adjusted R-squared to select the model's features. In the future, I would like to use the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC), which better balance goodness of fit against model complexity. I would also like to explore other transformations, such as the Box-Cox transformation.

About Author

Randy Pantinople


Randy was a high school math and physics teacher for 16 years. He got his master's degree in Physics Education at the University of Southeastern Philippines. His passion for trends, predictions, and data-driven decisions led him to...
