Real-time data search and analysis with APIs in Shiny

Wansang Lim
Posted on Feb 29, 2016

Contributed by Wansang Lim. He is currently enrolled in the NYC Data Science Academy 12-week full-time Data Science Bootcamp program taking place from January 11th to April 1st, 2016. This post is based on his R Shiny project.

Trends of APIs
Recently, APIs have become a hot topic in Internet-related technology. ProgrammableWeb, a site that tracks more than 13,000 APIs, lists the New York Times, Google Maps, Twitter, YouTube, Flickr, and Amazon Product Advertising as some of the most popular. More and more applications are adopting an API-first approach. APIs are powerful because they create a common language that everyone can use and understand. With an API, we do not need a pre-built platform to analyze big data (https://blog.cloud-elements.com/using-apis-data-science).

APIs for Data Scientists
APIs can save data scientists a lot of time.
When we think of data scientists, some of us picture statistician nerds. But if you look at what a data scientist does on a daily basis, it is 20% statistics and 80% data wrangling (https://apigee.com/about/blog/developer/developers-and-data-scientists-enterprise-force-multipliers). According to interviews and expert estimates, data scientists spend from 50 to 80 percent of their time mired in the mundane labor of collecting and preparing unruly digital data before it can be explored for useful nuggets (https://mobile.nytimes.com/2014/08/18/technology/for-big-data-scientists-hurdle-to-insights-is-janitor-work.html?referrer=).

Abstract of this project
For this project, I use the NY Times API. It actually provides many APIs, such as the Article Search API, the Books API, and the Congress API. Among them, I chose the Article Search API to check "hits," the number of articles containing a given search word. It also returns the abstract, date, and URL of each article. The response is just raw JavaScript objects (JSON); it needs to be converted into regular columnar data.
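As a concrete sketch of that conversion (in Python rather than R; the URL format follows the NYT Article Search API v2, the key is a placeholder, and the actual request is stubbed out with a sample parsed response):

```python
def search_url(term, begin, end, api_key="YOUR_KEY"):
    # assemble the query string by hand, the same way the server.R code below does
    base = "https://api.nytimes.com/svc/search/v2/articlesearch.json"
    return f"{base}?q={term}&begin_date={begin}&end_date={end}&sort=newest&api-key={api_key}"

# shape of a parsed Article Search response, trimmed to the fields used here
sample = {"response": {
    "meta": {"hits": 1234},
    "docs": [{"web_url": "https://nytimes.com/a1",
              "snippet": "example snippet",
              "pub_date": "2016-02-17"}],
}}

# flatten the nested JSON into regular columns: hit count plus one row per article
hits = sample["response"]["meta"]["hits"]
rows = [(d["pub_date"], d["snippet"], d["web_url"]) for d in sample["response"]["docs"]]
```

In the real app, `fromJSON()` in R produces the same nested structure, and the server code walks the identical `response$meta$hits` and `response$docs` paths.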

For web-page scraping, I scraped the Billboard singles chart, which has two visible columns and one hidden column. The data is reorganized to compare current and previous rankings.

I also used APIs as a kind of data-conversion tool. For example, the Leaflet widget for Shiny needs coordinates for each location, but most data we can get is just an address like "500 8th Ave #905, New York, NY 10018". I used the Google Geocoding API (https://developers.google.com/maps/documentation/geocoding/intro) for the conversion. I tried the government job search API (https://search.digitalgov.gov/developer/jobs.html), which provides job locations only as addresses, so I chained it to the geocoding API to obtain coordinate data.

Billboard Web Scraping
Fig 1. Billboard chart converted to a Shiny table.

The Billboard website looks like the left side of Fig 1. It is scraped with Python. It appears to have just two columns, singer and song name; however, last week's ranking is hidden in the web table. JavaScript apparently combines it with the current ranking and renders it as an up or down arrow. The ranking-difference column is made by subtracting the last ranking from the current ranking, so the table can be sorted in Shiny.
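In Python terms, the reorganization looks like this (a minimal sketch with made-up rows; the real values come from the scraped CSVs):

```python
# each row: (current_ranking, title, singer, last_week_ranking) -- illustrative data
rows = [
    (1, "Song A", "Singer A", 3),
    (2, "Song B", "Singer B", 1),
    (3, "Song C", "Singer C", 3),
]

# ranking difference = current ranking minus last week's ranking,
# so a negative number means the song moved up the chart
table = [
    {"Ranking": cur, "Title": title, "Singer": singer,
     "Last Ranking": last, "Ranking Difference": cur - last}
    for cur, title, singer, last in rows
]
```

Because the difference is a plain numeric column, the Shiny data table can sort on it directly instead of parsing arrow glyphs.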

Normalization
Fig 2. Normalization

The app is designed to issue 10 API calls, covering the ending year back to the ending year minus 10, which lets us compare hits across 10 years (Fig 2 a, b, c). When I search for "china", it shows a dramatic increase in 2015 and 2016 (Fig 2 a). But when I search for "about", one of the most commonly used words, I get the same pattern (Fig 2 b), which suggests the raw counts mostly reflect overall article volume. So I decided to normalize. After normalizing, the hit frequency for "china" looks much more reasonable (Fig 2 c).
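Numerically, the normalization just divides each year's hit count for the search term by that year's hit count for the baseline word "about" (a minimal sketch with illustrative numbers, not the actual NYT counts):

```python
search_hits = {"2014": 800, "2015": 2000, "2016": 2600}       # hits for the search term
about_hits = {"2014": 40000, "2015": 80000, "2016": 100000}   # hits for "about"

# a year with many articles overall inflates every raw count;
# dividing by that year's "about" count removes the overall-volume effect
normalized = {year: search_hits[year] / about_hits[year] for year in search_hits}
```

The R code below does the same thing vectorized: `hitVec / hitNormalVec`.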

Actual Search with Justin Bieber
Fig 3. Actual search for Justin Bieber: (a) not normalized, (b) normalized.

When I searched for "Justin Bieber" (who appears in Fig 1), his hits keep increasing in the unnormalized graph (Fig 3 a), but in the normalized graph (Fig 3 b), his popularity declines in recent years.

Detailed Information of Justin Bieber
Fig 4. (a) Hits by month (b) article abstracts by year and month.

This API app searches 10 years at a time. It shows almost no hits before 2009, so I guess he came into the spotlight around 2010 (Fig 3). In March and April of 2010, he began to get some hits (Fig 4 a). I searched for the actual articles and found one from April 2010 (Fig 4 b). Its content reads: "The 16-year-old singer may not seem like a dangerous figure, but make no mistake: his public appearances can be battlegrounds." It seems that at that time he was not yet known to everyone.

Data Conversion with Two APIs
Fig 5. Data conversion from addresses to coordinates.

I searched for data-related jobs through the government jobs API, which returns address data. I connected to the Google Geocoder API to convert the addresses in real time; the converted data is circled in red. Each address is converted inside a while loop.
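The chaining of the two APIs can be sketched like this (a minimal sketch; the network call is stubbed out with a sample parsed response whose shape mirrors the `results$geometry$location` path used in server.R):

```python
# assumed shape of a parsed Google Geocoding response with a single result
sample_geocode = {
    "results": [
        {"geometry": {"location": {"lat": 40.7536, "lng": -73.9932}}}
    ]
}

def extract_latlng(geocode_json):
    """Pull (lat, lng) out of a parsed geocoding response."""
    loc = geocode_json["results"][0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

# in the real app, each address from the jobs API is sent to the geocoder
# in a loop, with a short sleep between calls to respect rate limits
addresses = ["500 8th Ave #905, New York, NY 10018"]
coords = [extract_latlng(sample_geocode) for _ in addresses]
```

The resulting latitude/longitude columns are exactly what Leaflet needs to place markers.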

ui.R file

library(shiny)


shinyUI(pageWithSidebar(
  headerPanel("NY Times API"),

  sidebarPanel(
    conditionalPanel(condition = "input.conditionedPanels==1",
      radioButtons("radio", label = h3("Select"),
        choices = list("Original Hit" = 1, "Hit normalized" = 2, "Hit About" = 3),
        selected = 1),
      textInput("text", label = h4("Search input"), value = ""),
      numericInput("year", label = h4("Ending Year:"), 2016, min = 1945, max = 2016),
      actionButton("goButton", "Enter")
    ),
    conditionalPanel(condition = "input.conditionedPanels==2",
      numericInput("yearMonth", label = h4("Year for month:"), 2016, min = 1945, max = 2016),
      actionButton("yearMonthButton", "Enter")
    ),
    conditionalPanel(condition = "input.conditionedPanels==3",
      numericInput("yearInterest", label = h4("The Year of interest:"), 2016, min = 1945, max = 2016),
      radioButtons("radioMonth", label = h4("Select"),
        choices = list("By Year" = 1, "By Month" = 2),
        selected = 1),
      conditionalPanel(condition = "input.radioMonth==2",
        numericInput("monthInterest", label = h4("The month of interest"), 01, min = 1, max = 12)
      ),
      numericInput("pageInput", label = h4("Type the page"), 01, min = 1, max = 1000),
      actionButton("goArticle", "Enter")
    ),
    conditionalPanel(condition = "input.conditionedPanels==4",
      helpText("This is the incoming JavaScript object")
    ),
    conditionalPanel(condition = "input.conditionedPanels==5",
      helpText("2016/02/27")
    ),
    conditionalPanel(condition = "input.conditionedPanels==7",
      helpText("Content Panel 5")
    ),
    conditionalPanel(condition = "input.conditionedPanels==8",
      tags$textarea(id = "typeAdd", rows = 1, cols = 15, ""),
      actionButton("goLonLat", "Enter")
    )
  ),

  mainPanel(
    tabsetPanel(
      tabPanel("NY Times Article API", value = 1, plotOutput("graphHit")),
      tabPanel("NY Times Article API by Month", value = 2, plotOutput("graphMonth")),
      tabPanel("NY Times Article Example", value = 3, dataTableOutput("articleExam")),
      tabPanel("NY Times raw JavaScript Objects", value = 4, textOutput("rawAPI")),
      tabPanel("Bill Board Singles 100 2016/02/27", value = 5, dataTableOutput("BillBoard")),
      tabPanel("Job Search", value = 8, dataTableOutput("textGeo")),
      id = "conditionedPanels"
    ), # end of tabsetPanel
    tags$a(href="https://github.com/nycdatasci/bootcamp004_project/blob/master/Project3-WebScraping/WansagLim/prac05/ui.R", "ui.R Link"),
    tags$br(),
    tags$a(href="https://github.com/nycdatasci/bootcamp004_project/blob/master/Project3-WebScraping/WansagLim/prac05/server.R", "server.R Link")
  )
))

server.R file

library(shiny)
library(DT)
library(jsonlite)
library(ggplot2)

shinyServer(function(input, output, session) {

articleSearch <- reactive(input$text)

eventReactive(input$goButton,{ input$text})

year10 <- reactive(input$year)

eventReactive(input$yearButton,{input$year} )

yearInterest <- reactive(input$yearInterest)
yearMonth <- reactive(input$yearMonth)
monthInterest <- reactive(input$monthInterest)
pageInput <- reactive(input$pageInput)
addressInput <- reactive(input$typeAdd)

#renderPrint, renderPlot
output$graphHit <-renderPlot({
input$goButton
isolate({
articleSearch <- articleSearch()
year10 <- year10()
year09 <- year10() - 1
year08 <- year10() - 2
year07 <- year10() - 3
year06 <- year10() - 4
year05 <- year10() - 5
year04 <- year10() - 6
year03 <- year10() - 7
year02 <- year10() - 8
year01 <- year10() - 9
year00 <- year10() - 10

year10 <- paste(year10, "0217", sep = "")
year09 <- paste(year09, "0217", sep = "")
year08 <- paste(year08, "0217", sep = "")
year07 <- paste(year07, "0217", sep = "")
year06 <- paste(year06, "0217", sep = "")
year05 <- paste(year05, "0217", sep = "")
year04 <- paste(year04, "0217", sep = "")
year03 <- paste(year03, "0217", sep = "")
year02 <- paste(year02, "0217", sep = "")
year01 <- paste(year01, "0217", sep = "")
year00 <- paste(year00, "0217", sep = "")

address <- "https://api.nytimes.com/svc/search/v2/articlesearch.json?q=korea&begin_date=20150217&end_date=20160216&sort=newest&api-key=48c66fa2c448eda40826487d4f19a018%3A0%3A71658152"
articleAddress <- "https://api.nytimes.com/svc/search/v2/articlesearch.json?q="

articleKey <- "&api-key=48c66fa2c448eda40826487d4f19a018:0:71658152"
articleAPI10 <- paste(articleAddress, articleSearch,"&begin_date=",year09,"&end_date=",year10,"&sort=newest",articleKey, sep = "")
articleAPI09 <- paste(articleAddress, articleSearch,"&begin_date=",year08,"&end_date=",year09,"&sort=newest",articleKey, sep = "")
articleAPI08 <- paste(articleAddress, articleSearch,"&begin_date=",year07,"&end_date=",year08,"&sort=newest",articleKey, sep = "")
articleAPI07 <- paste(articleAddress, articleSearch,"&begin_date=",year06,"&end_date=",year07,"&sort=newest",articleKey, sep = "")
articleAPI06 <- paste(articleAddress, articleSearch,"&begin_date=",year05,"&end_date=",year06,"&sort=newest",articleKey, sep = "")
articleAPI05 <- paste(articleAddress, articleSearch,"&begin_date=",year04,"&end_date=",year05,"&sort=newest",articleKey, sep = "")
articleAPI04 <- paste(articleAddress, articleSearch,"&begin_date=",year03,"&end_date=",year04,"&sort=newest",articleKey, sep = "")
articleAPI03 <- paste(articleAddress, articleSearch,"&begin_date=",year02,"&end_date=",year03,"&sort=newest",articleKey, sep = "")
articleAPI02 <- paste(articleAddress, articleSearch,"&begin_date=",year01,"&end_date=",year02,"&sort=newest",articleKey, sep = "")
articleAPI01 <- paste(articleAddress, articleSearch,"&begin_date=",year00,"&end_date=",year01,"&sort=newest",articleKey, sep = "")

#normalize with "about"
aboutAPI10 <- paste(articleAddress, "about","&begin_date=",year09,"&end_date=",year10,"&sort=newest",articleKey, sep = "")
aboutAPI09 <- paste(articleAddress, "about","&begin_date=",year08,"&end_date=",year09,"&sort=newest",articleKey, sep = "")
aboutAPI08 <- paste(articleAddress, "about","&begin_date=",year07,"&end_date=",year08,"&sort=newest",articleKey, sep = "")
aboutAPI07 <- paste(articleAddress, "about","&begin_date=",year06,"&end_date=",year07,"&sort=newest",articleKey, sep = "")
aboutAPI06 <- paste(articleAddress, "about","&begin_date=",year05,"&end_date=",year06,"&sort=newest",articleKey, sep = "")
aboutAPI05 <- paste(articleAddress, "about","&begin_date=",year04,"&end_date=",year05,"&sort=newest",articleKey, sep = "")
aboutAPI04 <- paste(articleAddress, "about","&begin_date=",year03,"&end_date=",year04,"&sort=newest",articleKey, sep = "")
aboutAPI03 <- paste(articleAddress, "about","&begin_date=",year02,"&end_date=",year03,"&sort=newest",articleKey, sep = "")
aboutAPI02 <- paste(articleAddress, "about","&begin_date=",year01,"&end_date=",year02,"&sort=newest",articleKey, sep = "")
aboutAPI01 <- paste(articleAddress, "about","&begin_date=",year00,"&end_date=",year01,"&sort=newest",articleKey, sep = "")

#artiConD
if (articleSearch == 'Enter search term' || articleSearch == "") {
message01 <- "Please, Type your search term"
message01
} else {
articleContent10 <- fromJSON(articleAPI10)
articleContent09 <- fromJSON(articleAPI09)
articleContent08 <- fromJSON(articleAPI08)
articleContent07 <- fromJSON(articleAPI07)
articleContent06 <- fromJSON(articleAPI06)
Sys.sleep(1)
articleContent05 <- fromJSON(articleAPI05)
articleContent04 <- fromJSON(articleAPI04)
articleContent03 <- fromJSON(articleAPI03)
articleContent02 <- fromJSON(articleAPI02)
articleContent01 <- fromJSON(articleAPI01)

Sys.sleep(1)
#normalize "About"
normalContent10 <- fromJSON(aboutAPI10)
normalContent09 <- fromJSON(aboutAPI09)
normalContent08 <- fromJSON(aboutAPI08)
normalContent07 <- fromJSON(aboutAPI07)
normalContent06 <- fromJSON(aboutAPI06)
Sys.sleep(1)
normalContent05 <- fromJSON(aboutAPI05)
normalContent04 <- fromJSON(aboutAPI04)
normalContent03 <- fromJSON(aboutAPI03)
normalContent02 <- fromJSON(aboutAPI02)
normalContent01 <- fromJSON(aboutAPI01)

artiConHit10 <- articleContent10$response$meta$hits
artiConHit09 <- articleContent09$response$meta$hits
artiConHit08 <- articleContent08$response$meta$hits
artiConHit07 <- articleContent07$response$meta$hits
artiConHit06 <- articleContent06$response$meta$hits
artiConHit05 <- articleContent05$response$meta$hits
artiConHit04 <- articleContent04$response$meta$hits
artiConHit03 <- articleContent03$response$meta$hits
artiConHit02 <- articleContent02$response$meta$hits
artiConHit01 <- articleContent01$response$meta$hits

#normalize hit
normalConHit10 <- normalContent10$response$meta$hits
normalConHit09 <- normalContent09$response$meta$hits
normalConHit08 <- normalContent08$response$meta$hits
normalConHit07 <- normalContent07$response$meta$hits
normalConHit06 <- normalContent06$response$meta$hits
normalConHit05 <- normalContent05$response$meta$hits
normalConHit04 <- normalContent04$response$meta$hits
normalConHit03 <- normalContent03$response$meta$hits
normalConHit02 <- normalContent02$response$meta$hits
normalConHit01 <- normalContent01$response$meta$hits

hitVec <- c(artiConHit10, artiConHit09, artiConHit08, artiConHit07, artiConHit06, artiConHit05, artiConHit04, artiConHit03, artiConHit02, artiConHit01)
hitNormalVec <- c(normalConHit10,normalConHit09, normalConHit08, normalConHit07, normalConHit06, normalConHit05, normalConHit04, normalConHit03, normalConHit02, normalConHit01)
hitNormlizedVec <- hitVec / hitNormalVec
years <- c(year10, year09, year08, year07, year06, year05, year04, year03, year02, year01)
#Graph.Table
graphTable <- data.frame(years, hitVec, hitNormalVec, hitNormlizedVec)

#Graph

barHitGraph <- ggplot(graphTable, aes(x=years, y=hitVec)) + geom_bar(stat="identity")
barNormLizedGraph <- ggplot(graphTable, aes(x=years, y=hitNormlizedVec)) + geom_bar(stat="identity")
barJustAbout <- ggplot(graphTable, aes(x=years, y=hitNormalVec)) + geom_bar(stat="identity")

if (input$radio == 1) {
barHitGraph + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 12),
axis.text.y = element_text(size = 12),
panel.background = element_blank())
} else if (input$radio == 2) {
barNormLizedGraph + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 12),
axis.text.y = element_text(size = 12),
panel.background = element_blank())
} else if (input$radio == 3) {
barJustAbout + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 12),
axis.text.y = element_text(size = 12),
panel.background = element_blank())
}

}
}) # End of Isolate

}) #Graph Hit

output$graphMonth <- renderPlot({
input$yearMonthButton
isolate({
articleSearch <- articleSearch()
if (articleSearch != "") {
yearMonth <- yearMonth()

month12End <- paste(yearMonth, "1231", sep = "")
month12 <- paste(yearMonth, "1201", sep = "")
month11 <- paste(yearMonth, "1101", sep = "")
month10 <- paste(yearMonth, "1001", sep = "")
month09 <- paste(yearMonth, "0901", sep = "")
month08 <- paste(yearMonth, "0801", sep = "")
month07 <- paste(yearMonth, "0701", sep = "")
month06 <- paste(yearMonth, "0601", sep = "")
month05 <- paste(yearMonth, "0501", sep = "")
month04 <- paste(yearMonth, "0401", sep = "")
month03 <- paste(yearMonth, "0301", sep = "")
month02 <- paste(yearMonth, "0201", sep = "")
month01 <- paste(yearMonth, "0101", sep = "")

articleAddress <- "https://api.nytimes.com/svc/search/v2/articlesearch.json?q="

articleKey <- "&api-key=48c66fa2c448eda40826487d4f19a018:0:71658152"
hitMon12 <- paste(articleAddress, articleSearch,"&begin_date=",month12,"&end_date=",month12End,articleKey, sep = "")
hitMon11 <- paste(articleAddress, articleSearch,"&begin_date=",month11,"&end_date=",month12,articleKey, sep = "")
hitMon10 <- paste(articleAddress, articleSearch,"&begin_date=",month10,"&end_date=",month11,articleKey, sep = "")
hitMon09 <- paste(articleAddress, articleSearch,"&begin_date=",month09,"&end_date=",month10,articleKey, sep = "")
hitMon08 <- paste(articleAddress, articleSearch,"&begin_date=",month08,"&end_date=",month09,articleKey, sep = "")
hitMon07 <- paste(articleAddress, articleSearch,"&begin_date=",month07,"&end_date=",month08,articleKey, sep = "")
hitMon06 <- paste(articleAddress, articleSearch,"&begin_date=",month06,"&end_date=",month07,articleKey, sep = "")
hitMon05 <- paste(articleAddress, articleSearch,"&begin_date=",month05,"&end_date=",month06,articleKey, sep = "")
hitMon04 <- paste(articleAddress, articleSearch,"&begin_date=",month04,"&end_date=",month05,articleKey, sep = "")
hitMon03 <- paste(articleAddress, articleSearch,"&begin_date=",month03,"&end_date=",month04,articleKey, sep = "")
hitMon02 <- paste(articleAddress, articleSearch,"&begin_date=",month02,"&end_date=",month03,articleKey, sep = "")
hitMon01 <- paste(articleAddress, articleSearch,"&begin_date=",month01,"&end_date=",month02,articleKey, sep = "")

articleMonth12 <- fromJSON(hitMon12)
articleMonth11 <- fromJSON(hitMon11)
articleMonth10 <- fromJSON(hitMon10)
Sys.sleep(0.5)
articleMonth09 <- fromJSON(hitMon09)
articleMonth08 <- fromJSON(hitMon08)
articleMonth07 <- fromJSON(hitMon07)
Sys.sleep(1.1)
articleMonth06 <- fromJSON(hitMon06)
articleMonth05 <- fromJSON(hitMon05)
articleMonth04 <- fromJSON(hitMon04)
Sys.sleep(0.5)
articleMonth03 <- fromJSON(hitMon03)
articleMonth02 <- fromJSON(hitMon02)
articleMonth01 <- fromJSON(hitMon01)

hitMonVec <- c( articleMonth12$response$meta$hits, articleMonth11$response$meta$hits, articleMonth10$response$meta$hits,
articleMonth09$response$meta$hits, articleMonth08$response$meta$hits, articleMonth07$response$meta$hits, articleMonth06$response$meta$hits,
articleMonth05$response$meta$hits, articleMonth04$response$meta$hits, articleMonth03$response$meta$hits, articleMonth02$response$meta$hits,
articleMonth01$response$meta$hits)

monthVec <- c(month12, month11, month10, month09, month08, month07, month06, month05, month04, month03,
month02, month01)
graphMonthTable <- data.frame(monthVec, hitMonVec)

barMonthGraph <- ggplot(graphMonthTable, aes(x=monthVec, y=hitMonVec)) + geom_bar(stat="identity")
barMonthGraph + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 12),
axis.text.y = element_text(size = 12),
panel.background = element_blank())
} # End of articleSearch != 0
})#end of isolate
})#End of graphMonth

output$articleExam <- DT::renderDataTable({
input$goArticle
isolate({
articleSearch <- articleSearch()
yearInterest <- yearInterest()
yearInterest01 <- yearInterest - 1
monthInterest <- monthInterest()
pageInput <- pageInput()
pageInput <- paste("&page=", pageInput, sep = "")

yearForMonth <- yearInterest
yearInterest01 <- paste(yearInterest01, "0217", sep = "")
yearInterest <- paste(yearInterest, "0217", sep = "")

articleAddress <- "https://api.nytimes.com/svc/search/v2/articlesearch.json?q="

articleKey <- "&api-key=48c66fa2c448eda40826487d4f19a018:0:71658152"
articleAPI10 <- paste(articleAddress, articleSearch,"&begin_date=",yearInterest01,"&end_date=",yearInterest,pageInput,articleKey, sep = "")

if (articleSearch != "") {

articleContent10 <- fromJSON(articleAPI10)

artiConDoc10 <- articleContent10$response$docs

artiURL10 <- artiConDoc10$web_url
artiConSnippet10 <- artiConDoc10$snippet

table10 <- data.frame(artiConSnippet10, artiURL10)

#Article Table
if (input$radioMonth == 1) {
table10
} else if (input$radioMonth == 2) {
# pad single-digit months to two digits for the YYYYMMDD date strings
if (monthInterest <= 9) {
monthInterest <- paste("0", monthInterest, sep = "")
} else {
monthInterest <- as.character(monthInterest)
}

beginMon <- paste(yearForMonth, monthInterest, "01", sep = "")
endMon <- paste(yearForMonth, monthInterest, "30", sep = "")
articleMonthAPI <- paste(articleAddress, articleSearch,"&begin_date=",beginMon,"&end_date=",endMon,articleKey, sep = "")
monthContent <- fromJSON(articleMonthAPI)
monConDoc <- monthContent$response$docs

monSnippet <- monConDoc$snippet
# truncate long snippets for display in the data table
tableMonth <- datatable(data.frame(monSnippet, monConDoc$web_url),
options = list(columnDefs = list(list(
targets = 1,
render = JS(
"function(data, type, row, meta) {",
"return type === 'display' && data.length > 6 ?",
"'' + data.substr(0, 20) + '...' : data;",
"}")
))))
tableMonth
}
} # end of articleSearch != ""
}) # end of isolate
}) # end of output$articleExam

output$rawAPI <- renderPrint({
articleSearch <- articleSearch()
if (articleSearch != "") {
yearInterest <- yearInterest()
yearInterest01 <- yearInterest - 1

yearInterest <- paste(yearInterest, "0217", sep = "")
yearInterest01 <- paste(yearInterest01, "0217", sep = "")

articleAddress <- "https://api.nytimes.com/svc/search/v2/articlesearch.json?q="

articleKey <- "&api-key=48c66fa2c448eda40826487d4f19a018:0:71658152"
articleAPI10 <- paste(articleAddress, articleSearch,"&begin_date=",yearInterest01,"&end_date=",yearInterest,"&sort=newest",articleKey, sep = "")

articleContent10 <- fromJSON(articleAPI10)
infoRawAPI <- c(yearInterest, articleSearch, articleContent10)
infoRawAPI
} #End of articleSearch
})

output$BillBoard <- renderDataTable({
billFrame <- read.csv("bill100Frame.csv", stringsAsFactors=FALSE)

billFrame$diff <- as.numeric(billFrame$diff)
billFrame <- data.frame(billFrame$x,billFrame$Title,
billFrame$Singer, billFrame$Ranking, billFrame$diff)
colnames(billFrame) <- c("Ranking", "Title", "Singer", "Last Ranking", "Ranking Difference")
billFrame
})

output$test03 <- renderPrint({
apiData <- fromJSON('/Volumes/64GB/NYC/project03/api2012.txt')
apiResu <- apiData$results
apiResu
})

output$textGeo <- renderDataTable({
input$goLonLat
isolate({
addressInput <- addressInput()
#https://maps.googleapis.com/maps/api/geocode/json?address=4620%20parsons%20blvd%20flushing%20ny%20%20key=AIzaSyANifkybPlJWYynG_FSwzwSn-CunJTE4N0
if (addressInput != "") {
jobsAdd <- paste("https://api.usa.gov/jobs/search.json?query=", addressInput, sep = "")
jobsAPI <- fromJSON(jobsAdd)

Sys.sleep(1)

addressVec <- jobsAPI$locations
newAddress <- c()
numberPosition <- c()
for (i in 1:length(jobsAPI$position_title)) {
unlisAdd <- unlist(addressVec[i])
if (length(unlisAdd) > 1) {
# a posting can list several locations; keep the first and record the count
newAddress[i] <- as.character(unlisAdd[1])
numberPosition[i] <- length(unlisAdd)
} else {
numberPosition[i] <- 1
newAddress[i] <- as.character(unlisAdd)
}
}

addressVec <- newAddress

lonlatVec <-c()
latt <- c()
lont <- c()

i = 1
while (i <= length(addressVec)) {
address = addressVec[i]
address = gsub(" ", "", address)
geoWeb <- paste("https://maps.googleapis.com/maps/api/geocode/json?address=", address, "&key=AIzaSyANifkybPlJWYynG_FSwzwSn-CunJTE4N0", sep = "")
geoAPI <- fromJSON(geoWeb)
latt[i] <- geoAPI$results$geometry$location$lat
lont[i] <- geoAPI$results$geometry$location$lng
Sys.sleep(0.2)
i = i + 1
}

Title <- jobsAPI$position_title
MinWage <- jobsAPI$minimum
MaxWage <- jobsAPI$maximum
Organization <- jobsAPI$organization_name
Locations <- newAddress

jobTable <- data.frame(Title, Organization,numberPosition ,MinWage, MaxWage, Locations,latt, lont)
jobTable

}# end of if
})# end of isolate

})# end of output$textGeo
})

Python for Web Scraping
from bs4 import BeautifulSoup
import requests
import csv

url = 'https://www.billboard.com/charts/hot-100'
r = requests.get(url)
soup = BeautifulSoup(r.text, "html.parser")

# artist names
my_list = []
for tag in soup.findAll("a", {"data-tracklabel": "Artist Name"}):
    my_list.append(tag.text)

c = csv.writer(open("MYFILE.csv", "wb"))
c.writerow(my_list)

# song names
song_list = []
for tag in soup.findAll("h2", {"class": "chart-row__song"}):
    song_list.append(tag.text)
    song_list.append("\n")

c = csv.writer(open("songName.csv", "wb"))
c.writerow(song_list)

# last week's rankings (the hidden column)
lastWeek = []
for tag in soup.findAll("span", {"class": "chart-row__last-week"}):
    lastWeek.append(tag.text)
    lastWeek.append("\n")

c = csv.writer(open("lastWeek1.csv", "wb"))
c.writerow(lastWeek)

About Author

Wansang Lim

I recently completed an MS in Computer Science at New York University (Manhattan, NY), concentrating in machine learning and big data. Before that, I studied software development and Android development. I also hold a Ph.D. in Agriculture with a lot...