Exploring The Met Museum Collection

Belinda Kanpetch
Posted on Jun 2, 2016

Contributed by Belinda Kanpetch. She is currently in the NYC Data Science Academy 12-week full-time Data Science Bootcamp program taking place from April 11th to July 1st, 2016. This post is based on her third class project, Web Scraping (due in the 6th week of the program).


The Met Museum is the largest art museum in the United States in both the size of its art collection and its physical area. With over 2 million pieces distributed among its three main locations, The Met Fifth Avenue, The Met Cloisters, and The Met Breuer, the Met collection can be thought of as a time capsule of world history and culture.

As I frequently traverse the various collections at The Met on Fifth Avenue that are organized by geographic regions, I sometimes see similarities in the artifacts I find.  Using data scraped from The Met collection online, I wondered if visualizing features of each artifact could make cultural patterns more visible.

Some questions:

  • Which culture or department has the largest number of artworks in the collection?
  • Which artists are most represented in the collection?
  • What types of artwork are most prominent in the overall collection (ceramics, textiles, paintings, sculpture, etc.)?
  • What time period or era is most represented?
  • What types of artwork were being produced around the world in a particular era?

The website and scraping:

The collection of artifacts can be found at www.metmuseum.org/art/collection. Although not all of the pieces are online, The Met is adding pieces daily. On the day the data was scraped there were over 410,000 items available online.

The main collection page is organized as a set of notecards, where each artifact card shows a photo, piece name, artist, location, date, medium, accession number, and whether it is on view. To see more photos or get more details on an artifact, the user must click on its notecard.

Upon initial inspection of a few artifact detail pages, I noticed a few consistent features (namely those mentioned above), but there were also additional features characterizing each artifact that seemed important to include.

Scraping the data:

The overall website is well organized and structured. Navigating to find tags and XPaths was fairly easy and understandable.

The main collection page displays 20 cards with a 'Show More' button at the bottom. Clicking the 'Show More' button retrieves 20 more cards and adds them to the page. Because of this design, I had to use Selenium to click the 'Show More' button as many times as needed until all the artifact cards were loaded.
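The click-until-exhausted loop can be sketched as a small helper. Here the Selenium click is passed in as a callable so the loop itself stays testable; the function name and structure are my own, not taken from the original scraper:

```python
# Hypothetical sketch of the 'Show More' loop. In real use, click_once would
# wrap a Selenium call such as:
#   driver.find_element(By.CLASS_NAME, "show-more").click()
# catching NoSuchElementException and returning False once the button is gone.
def load_all_cards(click_once, max_clicks=100_000):
    """Click 'Show More' until it reports failure; return the click count."""
    clicks = 0
    while clicks < max_clicks and click_once():
        clicks += 1
    return clicks
```

With a real driver, `click_once` would also need a short wait between clicks so the next batch of cards has time to load.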


After all of the cards were loaded, Scrapy was deployed to scrape the main collection page for each artifact's object number. This returned a list of numbers that can be appended to the main collection page URL to reach each artifact's detail page. Using this list, Selenium virtually clicked through to each detail page and Scrapy scraped each unique feature.
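Turning the scraped object numbers into detail-page URLs amounts to simple string concatenation. A minimal sketch, where the exact URL pattern is my assumption rather than the one from the original spider:

```python
# Hypothetical URL pattern; the real collection site may differ.
BASE_URL = "https://www.metmuseum.org/art/collection/search/"

def detail_urls(object_numbers):
    """Build one detail-page URL per scraped object number."""
    return [BASE_URL + str(n) for n in object_numbers]
```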

On each artifact detail page, the title and on-view information are contained in their own div tags and were therefore easy to extract as features. The remaining features were contained in a single div with the class 'tombstone'. Because the features in this tag vary from artifact to artifact, they were extracted as a dictionary of key-value pairs.
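The key-value extraction might look roughly like this; the sample markup and the label/value class names are invented for illustration (the real 'tombstone' HTML will differ, and the original project used Scrapy selectors rather than regular expressions):

```python
import re

# Made-up stand-in for a 'tombstone' div; the real page structure differs.
SAMPLE = """
<div class="tombstone">
  <span class="label">Medium:</span><span class="value">Bronze</span>
  <span class="label">Classification:</span><span class="value">Sculpture</span>
</div>
"""

def tombstone_to_dict(html):
    """Collect label/value pairs into a dict, one entry per feature."""
    pairs = re.findall(
        r'class="label">\s*([^<]+?):\s*</span>\s*'
        r'<span class="value">\s*([^<]+?)\s*<',
        html,
    )
    return {key.strip(): value.strip() for key, value in pairs}
```

Because different artifacts expose different labels, a dict is a natural container: missing features simply never appear as keys.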

Once all pages were scraped, the resulting records were stored in MongoDB.
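One way the records might be assembled and inserted (the field names are my guesses, and the pymongo calls, while standard, need a running mongod, so they are shown commented out):

```python
def build_document(object_number, title, on_view, tombstone):
    """Merge the fixed fields and the variable tombstone dict into one record."""
    return {
        "_id": object_number,      # object number doubles as a natural key
        "title": title,
        "on_view": on_view,
        "details": dict(tombstone),  # variable key-value features
    }

# Inserting into MongoDB with pymongo (requires a running server):
# from pymongo import MongoClient
# coll = MongoClient()["met"]["artifacts"]
# coll.insert_one(build_document(12345, "Bronze Statuette", True,
#                                {"Medium": "Bronze"}))
```

Keeping the variable features nested under one key means documents with different tombstone labels can share the same top-level shape.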


Cleaning data and exploration:

After loading the database into Python, the artifact detail feature, which contains the dictionary of artifact keys, was unpacked into distinct columns.
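Unpacking a column of dicts into separate columns amounts to taking the union of all keys and padding missing values with None. A minimal stdlib sketch of the idea (in pandas, something like `df['details'].apply(pd.Series)` accomplishes the same thing):

```python
def unpack_details(records):
    """Turn a list of per-artifact dicts into column lists, padding with None."""
    columns = sorted({key for rec in records for key in rec})
    return {col: [rec.get(col) for rec in records] for col in columns}
```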

prj3 presentation_Page_08 prj3 presentation_Page_09

After unpacking these features and doing some simple visualization, it was evident that the values within each column were not easily grouped into distinct factors. For instance, there were 98 distinct values for the 'Classification' feature alone. This was consistent across all the features, which made further analysis difficult without extensive knowledge of art history.
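Counting the distinct values in a column is a quick way to surface this problem; for example, with `collections.Counter` (the sample values below are made up):

```python
from collections import Counter

# Hypothetical slice of the 'Classification' column.
classifications = ["Ceramics", "Textiles", "Ceramics", "Paintings", "Ceramics"]

counts = Counter(classifications)
n_distinct = len(counts)                 # how many distinct labels appear
most_common = counts.most_common(1)[0]   # the dominant label and its count
```

On the real data, `n_distinct` for 'Classification' would come out at 98, which is what made factor-level analysis hard.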

The easiest feature to decipher quickly was the date. Filtering out the dates containing 'BC' made it possible to visualize the number of artifacts dated BC versus AD, as well as those not on view at all.
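Since the site's date fields are free text, the BC/AD split can be done with a simple substring heuristic; this helper and its rules are mine, not from the original code:

```python
def era(date_str):
    """Crudely bucket a free-text date string into 'BC' or 'AD' (heuristic)."""
    if not date_str:
        return "unknown"
    # Normalize "B.C." / "BCE" / "bc" to a single comparable form.
    normalized = date_str.upper().replace(".", "")
    return "BC" if "BC" in normalized else "AD"
```

A heuristic like this misclassifies edge cases (date ranges straddling year 1, unusual phrasings), but it is good enough for a first bar chart.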


Conclusion and next steps:

Although I didn't get to the analysis I had initially set out to do, I would inform future analysis by researching the Met's departments a little more. Intuitively, I think understanding their structure might reveal patterns in how the feature values are determined.

Code can be found on GitHub.

About Author

Belinda Kanpetch

Belinda hails from the Bayou City, where she earned her Bachelors of Architecture from the University of Houston. Drawn to the bright lights and energy of the city, she relocated to New York to attend Columbia University at...