ARTificial intelligence: do artists leave a clear visual signature?

Michael Griffin
Posted on Feb 10, 2020

Thirty-second summary:

  • As a fan of art and machine learning, I trained a set of convolutional neural networks to classify artwork and explore which styles and artists are most similar.
  • Identifying the subject of a picture is relatively easy - the model achieves 81% accuracy with <20 epochs. Artistic style can be identified with an accuracy of c.76% - symbolic pieces proved hardest to distinguish. Detecting the artist proved challenging with an accuracy of c.74% on ~100 artists - I expect that this still exceeds non-expert human performance.
  • This has been a really fun project to work on, allowing me to discover new styles and artists relating to my own work. I'm enthusiastic to progress this work further.
  • Tools used: PyTorch and supporting libraries, Google Colab; the code is available on GitHub

Preparing the dataset

I obtained a large Kaggle dataset of around 100,000 artworks, sourced chiefly from WikiArt and covering a wide variety of artists, styles and genres. I wanted to focus on popular artworks and preserve reasonable training times, so I applied a few filters:

  • Filtered for 19th, 20th and 21st century art and selected the top artists, ranked by number of images. This yields around 100 artists with an average of 210 images per artist.
  • Within this group, I filtered for the top 15 subjects and top 15 styles. I removed genres which relate to the medium rather than the subject, like "illustration" or "sketch and study". I also collapsed similar categories which are visually very hard to differentiate, like "impressionism" and "post-impressionism", into "impressionistic".
  • I used some image processing and limited augmentation - the images are resized to a consistent smallest dimension of 256 pixels. I also use small rotations of <5 degrees and magnification factors of 1.0-1.2 to add some variety between training epochs.

This creates a dataset of around 20,000 images with a good mix of modern genres, styles and artists. I then split this 80:20 for training and validation - the split was not stratified by label, but most labels are likely to appear in both sets. Note that there are significant class imbalances, but these did not cause training issues.
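An unstratified 80:20 split like the one described can be done in a couple of lines with PyTorch's `random_split`. The function below is a sketch of that idea - the fixed seed is my addition for reproducibility, not something the post mentions.

```python
import torch
from torch.utils.data import random_split

def split_dataset(dataset, val_frac=0.2, seed=42):
    """Random (unstratified) train/validation split with a fixed seed."""
    n_val = int(val_frac * len(dataset))
    return random_split(
        dataset, [len(dataset) - n_val, n_val],
        generator=torch.Generator().manual_seed(seed),
    )
```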

Interestingly, the problem set does not reduce to identifying the artists - many of the artists in the collection produced work spanning genres and styles, often through different periods of their working lives.

Building the model

I applied a transfer learning approach using libraries which sit on top of PyTorch. As a starting point I use a pre-trained ResNet-50 architecture (He et al., 2015), which uses skip connections to achieve strong performance on varied image recognition tasks.

I trained each problem (genre, style, artist) independently, although I am experimenting with transferring weights between tasks for efficiency. I employed a few tricks to obtain good results:

  1. Freezing all except the final layer for the first few epochs.
  2. Using differential learning rates across layers to focus learning in the later layers.
  3. Varying the learning rate within each epoch.

Detecting subjects

Given that the ResNet model is already trained to recognise objects, detecting the subject of a painting might be expected to be the easiest task. Within 15 epochs I achieved a validation error rate of 19%, meaning around 1 in 5 images is misclassified. The confusion matrix below shows the pattern of errors between predictions and actual labels - "cityscapes", "landscapes" and "people and portraits" are frequently confused, and symbolic paintings are hard to differentiate.
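For readers unfamiliar with confusion matrices: each cell counts how often an image with a given true label received a given prediction, so confusable category pairs show up as large off-diagonal entries. A minimal NumPy version (a sketch for illustration, not the plotting code behind the figure):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = actual label, columns = predicted label."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy example with 3 classes: entry [0, 1] counts class-0 images
# that were predicted as class 1.
cm = confusion_matrix([0, 0, 1, 2], [0, 1, 1, 2], n_classes=3)
```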

Inspecting the largest losses - that is, the cases where the model misclassified images with the most certainty - is revealing. In several of these cases I would select the prediction ahead of the actual label; clearly there is judgement involved, and multiple labels may apply to a single image (like a landscape view featuring buildings and people). Other errors are unsurprising, like "religious paintings" being misclassified as "people and portraits". So the accuracy is probably underestimated, and the model could plausibly be used to help enrich or correct labels.
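Finding those high-confidence mistakes amounts to computing the per-sample cross-entropy loss over the validation set and sorting it in descending order. A sketch of that idea (assuming you have the model's raw logits and the true labels as tensors):

```python
import torch
import torch.nn.functional as F

def top_losses(logits, labels, k=5):
    """Indices of the validation images the model got wrong most confidently."""
    losses = F.cross_entropy(logits, labels, reduction="none")
    return losses.topk(k).indices
```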

Detecting style

Artistic style would be expected to be harder to detect - this captures subtle use of colour or brushstrokes and there are no clear boundaries between styles. Nevertheless, after 15 tailored training epochs I achieved an accuracy of 76% with a few interesting observations:

  • Realism can be difficult to distinguish from impressionistic art or romanticism but magic realism does appear distinguishable. Inspection of some of the errors suggests that the ground truth labelling may be incorrect in some cases.
  • Like the symbolic art genre, the style of symbolism is especially hard to identify correctly - images labelled symbolic were frequently misclassified as impressionistic, art nouveau, realism, romanticism or surrealism. This is unsurprising given the cultural understanding required to identify symbolism.

Detecting artists

With around 15 epochs the model yields an error rate of 24% which I consider to be fairly impressive since there are c.100 artists in the dataset. However, there is clearly room for further improvement as the Kaggle competition winner achieves an accuracy of above 90% on a broader dataset.

This suggests that artists do leave a clear signature in their pictures - although I'm investigating whether in some cases the model might be picking up the actual signatures!

Testing on specific paintings

I couldn't resist the temptation to try out the model on my own (mediocre) artwork - the piece below shows similarities to Martiros Saryan, an artist I had not previously encountered but who does use vivid colours in a similar manner.

Further work

This work is ongoing and I'm exploring a few avenues:

  • Grouping the label categories to align better with the clusters used in current art marketplaces.
  • Testing other architectures and training approaches.
  • Looking at activations for different artists and pieces to identify similar work.
  • Based on visual inspection of a handful of test cases, it does appear that the style and artist detection networks (over)emphasise the use of colour. So I'm exploring options to use black-and-white images to reduce reliance on colour.

About Author

Michael Griffin

Mike Griffin is training at the NYC data academy and has several years of experience in strategy/analytics roles in finance. He studied Natural Sciences (Physics) at the University of Cambridge and Management at the Judge Business School. Mike...

