Who's Afraid of the Hypebeast?
Now you may be asking yourself: what's a hypebeast? And why should I be afraid of one? If you've made it this far without knowing what a hypebeast is, congratulations, I'm here to ruin your day. A hypebeast is a person obsessed with streetwear; you can typically find them lined up around the block on any given weekend in SoHo, outside stores like Off-White, Givenchy, and Supreme, awaiting the newest 'drop', or release of merchandise. They wait the same way people wait for the newest iPhone or video game console: pitching tents and sleeping out overnight. The reason is twofold: some people just enjoy looking good and getting hype (read: excited), while others have noticed that the resale value of these clothes is strongly affected by the perceived value of the brand. This brings us to the website that will be the focus of our project: Grailed.
What's a Grailed?
Grailed is a resale market much like eBay, but with a specific focus on streetwear and 'high fashion' brands. Users can create accounts and list items on the website as long as they are vaguely related to streetwear. The site has sections for all different types of clothing, but for this project we'll focus on the humble t-shirt.
The reasoning behind this selection is that the t-shirt is a fairly uniform piece of clothing. Regardless of brand, t-shirts are usually made from cotton, are crew neck, have short sleeves, and generally feel the same to wear. We can therefore attempt to compare brands and assign a "price multiplier" that encapsulates how much more a brand is valued than the least expensive one. Or, in other words, a simple metric to show you which brand will cost you the most.
So which brands cost the most?
After nearly forty hours of scraping, we collected 210,000+ entries from 846 designers. We'll go into the technical detail at the end of the blog post, but for now we'll stick to the analysis of the data. We'll trim down our dataset a little more by requiring each brand to have at least 1000 entries, which left us with 31 brands in total. We selected 10 of these brands for now in order to keep our visualizations a little clearer.
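The trimming step is straightforward in pandas. Here's a minimal sketch with a toy stand-in for the scraped listings; the column names `designer` and `price` are assumptions, not the actual schema from the scraper:

```python
import pandas as pd

# Toy stand-in for the scraped listings; the real dataset has 210,000+ rows.
df = pd.DataFrame({
    "designer": ["Gildan"] * 1200 + ["Hanes"] * 1100 + ["Rare Brand"] * 40,
    "price": [20.0] * 1200 + [22.0] * 1100 + [150.0] * 40,
})

# Keep only designers with at least 1000 listings.
counts = df["designer"].value_counts()
keep = counts[counts >= 1000].index
trimmed = df[df["designer"].isin(keep)]

print(sorted(trimmed["designer"].unique()))  # Rare Brand is dropped
```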
We see from the graph above a first look at the distribution of the prices. Notice how brands that we would associate with affordability (Gildan, Fruit of the Loom, Hanes) all are clustered around the $0-$50 price range. Additionally, mid-tier brands appear such as Bape, Anti Social Social Club, and Adidas taking up the middle section. This allows us to start guessing at which brand might be our cheapest brand, but where are our luxury brands? Off-White, Givenchy, and Supreme are in there, but a little difficult to see. Let's clean it up:
Notice how Off-White and Givenchy are nearly the same? From the density plots of listing prices we can see the natural groupings of brands: certain brands are much more 'highly valued,' while others have only a few items that command a price over $200. For Givenchy and Off-White, it seems the average price would be a bit more than $200. Let's investigate a little further so we head into our statistical tests with a good idea of what to look for:
From the graph we can group these brands further. For our analysis we'll focus specifically on Gildan, Fruit of the Loom, and Hanes, which all seem to occupy the same bottom spot; but are they grouped closely enough to justify using any of them as our baseline? ANOVA and MANOVA testing reveals that Fruit of the Loom and Hanes have similar average prices (p = 0.645), while Gildan is priced below both Fruit of the Loom (p < 0.05) and Hanes (p < 0.05). One other brand, NFL (visualized in the next graph), also has an average price equivalent to Gildan's (p = 0.381). We omit it because the majority of its "t-shirts" are actually jerseys, a very different type of clothing that does not fit our initial definition of a t-shirt.
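A one-way ANOVA like the one described can be run with `scipy.stats.f_oneway`. The sketch below uses synthetic prices (means and spreads are invented for illustration, not drawn from the Grailed data):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Synthetic listing prices for three budget brands (illustrative only).
gildan = rng.normal(31, 8, 500)
fruit  = rng.normal(36, 8, 500)
hanes  = rng.normal(36, 8, 500)

# One-way ANOVA across all three groups: do the means differ anywhere?
stat_all, p_all = f_oneway(gildan, fruit, hanes)

# Pairwise comparison of just Fruit of the Loom vs. Hanes.
stat_pair, p_pair = f_oneway(fruit, hanes)

print(f"all three: p = {p_all:.4f}")
print(f"fruit vs hanes: p = {p_pair:.4f}")
```

With a $5 mean difference and 500 samples per group, the three-way test comes out clearly significant, while the two identically distributed groups generally do not.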
Therefore, we take Gildan to be our lowest-priced brand, with an average listing price of $30.99 per t-shirt. We'll take this to be our "one" in the brand multiplier index. To understand the scaled difference between each brand and the cheapest one, we take each brand's median listing price and divide it by Gildan's, which gives us the following chart of the brand price index, denoted "multiplier." We use the median rather than the mean so that a handful of extreme listings cannot bias the averages, and so the values still fall in a generally linear pattern.
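The multiplier itself is just each brand's median divided by Gildan's. A sketch with invented prices (these are placeholder numbers, not the scraped listings):

```python
import pandas as pd

# Invented listing prices per brand (illustrative, not scraped values).
prices = pd.DataFrame({
    "designer": ["Gildan", "Gildan", "Supreme", "Supreme", "Off-White", "Off-White"],
    "price": [28.0, 34.0, 95.0, 105.0, 230.0, 250.0],
})

# Median per brand, then normalize so Gildan = 1.
medians = prices.groupby("designer")["price"].median()
multiplier = (medians / medians["Gildan"]).round(2)

print(multiplier.to_dict())
# {'Gildan': 1.0, 'Off-White': 7.74, 'Supreme': 3.23}
```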
How to Stay Fresh in a Competitive Market, or, How to Make Money from Grailed Without Ever Really Trying
So, where do we go from here? What's the point of all of this? We've got all our fancy numbers, but why should you care?
The answer is very simple: making money.
There is one more thing we need to introduce, which is really a feature of the Grailed website. Grailed allows you to "watch" an item, and notifies you via email when that item drops in price. So sellers do something interesting: they drop the price, the email goes out, they wait about an hour, and then they raise the price again. This drives views to their listing, and most people are not fast enough to punish the seller for this attention-manipulating behavior. It stands to reason that more expensive brands would drive more people to a listing when the price drops significantly... even if only for a couple of minutes.
With our price index, we now have a picture of which brands would be most effective to target with a scraper that, perhaps, buys every item reduced in price by a given percentage or more. While not used in our analysis, it is possible to scrape the last listed price without even clicking through to the listing. Therefore, theoretically, you could punish these sellers by scraping specific brands on an hourly (or even more frequent) basis: purchase the item, then resell it at the price the seller was trying to list it at in the first place. Remember, from our index, good brands to target are Off-White, Givenchy, VLONE, and Comme des Garçons. Easy money!
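The sniping idea boils down to comparing the current price against the last listed price. Here's a hedged sketch; the listing dictionary shape and the 20% threshold are assumptions, not Grailed's actual data format:

```python
# Brands worth targeting, per the multiplier index above.
TARGET_BRANDS = {"Off-White", "Givenchy", "VLONE", "Comme des Garçons"}

def is_snipe_candidate(listing, threshold=0.20):
    """Flag a listing whose price dropped by `threshold` (20%) or more.

    `listing` is assumed to be a dict with 'designer', 'price', and
    'last_price' keys (a hypothetical schema for scraped data).
    """
    if listing["designer"] not in TARGET_BRANDS:
        return False
    last, current = listing["last_price"], listing["price"]
    if last is None or last <= 0:
        return False
    drop = (last - current) / last
    return drop >= threshold

# A 40% drop on a target brand qualifies:
print(is_snipe_candidate({"designer": "Off-White", "price": 150.0, "last_price": 250.0}))  # True
```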
A Few Notes on the Scraper
This process was like building two Selenium scrapers nested inside one another: one to loop through the filter process, and a second to parse the results on the page. When it finds a designer that meets the criterion of having 30 or more listings, it creates a file "hypebeast_DESIGNER_NAME.csv" and dumps the information from the page. My scraper should have just plowed through the rest of the items, right?
Instead, we had some issues with scrolling, and with selecting designer names that ran longer than one line. Because this error was so inconsistent, I focused my effort on fixing another issue: the downtime between file creation. Before implementing a function that searches the current page for designer names and checks them against existing CSVs in my local directory, the downtime between new CSV files was nearly 20 minutes. With this function I got the average downtime between file creations down to about 5 minutes. The general idea is this:
- Observe all loaded designer names.
- Check those names against the list of CSV files that have already been created.
- If there are matches, grab the last match in the list, scroll it into view, then find the index of that match in our list of loaded designer names.
- Trim our list of all loaded designers that are before the index of the designer we grabbed.
- Repeat the process until there are no more matches.
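The steps above can be written as a small helper. A minimal sketch: the CSV naming matches the `hypebeast_DESIGNER_NAME.csv` convention from the scraper, but the Selenium scroll-into-view call is left out:

```python
import glob
import os

def designers_already_scraped(directory="."):
    """Recover designer names from existing hypebeast_*.csv files."""
    done = set()
    for path in glob.glob(os.path.join(directory, "hypebeast_*.csv")):
        name = os.path.basename(path)[len("hypebeast_"):-len(".csv")]
        done.add(name)
    return done

def trim_to_unscraped(loaded, done):
    """Drop every loaded designer up to and including the last
    already-scraped one; what remains still needs scraping."""
    matches = [i for i, d in enumerate(loaded) if d in done]
    if not matches:
        return loaded
    # In the real scraper, this is where you'd scroll the last match
    # into view before trimming the list.
    return loaded[matches[-1] + 1:]

loaded = ["Adidas", "Bape", "Gildan", "Hanes"]
print(trim_to_unscraped(loaded, {"Adidas", "Gildan"}))  # ['Hanes']
```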
This was fairly easy to implement, and fixed one of my biggest issues with restarting the program after it crashes.
And the Program keeps running running, and running running and ....
As they say, in this contest there is no disrespect; however you get the data works. With the time constraint on the project, I focused my effort on practical solutions for dealing with crashes. After developing the above methodology for "going back to where I was," I decided to make something that would automate the restart process. This was as simple as writing a Windows batch file (you can make a shell script on UNIX machines). It was very simple, but it allowed me to continue scraping throughout the night without worrying about errors or the program crashing:
A simple Windows batch file to restart a Python script using the Anaconda Prompt:
call C:\Users\Olympus\Anaconda3\Scripts\activate.bat C:\Users\Olympus\Anaconda3
SET /A "index = 5"
SET /A "count = 0"
:loop
if %index% geq %count% (
    SET /A "index = index + 1"
    REM scraper.py is a placeholder for the actual scraper's filename
    python scraper.py
    goto loop
)
Setting the condition in the "if" statement to always be true ensures that the only way to "exit out" of the program is with CTRL + C. This means we'll keep running it, and running it, and running! That let me get some much-needed rest before getting down to the data analysis above.
If you'd like to look at the program more, or try it out yourself, you can find a link to the GitHub here. All the necessary files for the scraper are Python files. Feel free to contact me either on my GitHub or via my LinkedIn.