Social Capital Metrics: A Data Scraping and Analysis Project
The skills we demoed here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
Social capital takes diverse forms. It is a society's ability to build cooperative institutions that promote stability and prosperity. It is also the ease with which individuals carry out transactions of social value, such as smiles between friends or a neighborly willingness to watch over children at play. Such interactions are valued according to culture, social class, and other determinants of spatiotemporal norms. Social capital is therefore incredibly varied and hard to define. Yet with the rise of modern tech giants, from social networks to news aggregators, it is indisputably a source of power and profit.
This project looks at one form of social capital: information flow. Specifically, the generation of quality information in communities with niche, expert interests. By scraping, cleaning, and analysing data from one such online community, we discuss how preliminary metrics for informational social capital might be formulated and extended into business-ready formats.
The code for this project can be found here.
lobste.rs purports to be a more focused version of Hacker News. Since 2007, HN has served as the online newsfeed for Silicon Valley. By 2012, members of the original community were discontented; feeling HN had grown too quickly and the quality of its content diluted, they broke off to create a stripped-down discussion space for purely technical news.
lobste.rs remains a small community with a reputation for high quality content. The community is gated — anyone can read, but becoming a member requires a referral.
We acquired data by scraping the entire website with Scrapy. At the time of scraping, lobste.rs had 2,153 pages of content. We obtained a dataset of 53,821 posts:
And of 188,355 comments:
We chose to focus on numbers of posts, upvotes, posters, and commenters as simple measures of capital. Post-cleaning, we aggregated the data from daily into weekly counts.
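The daily-to-weekly aggregation step can be sketched in pandas. The column names (`created_at`, `upvotes`) and the toy data are illustrative assumptions, not our actual schema:

```python
# A sketch of aggregating per-post data into weekly counts with pandas.
# Column names and the toy rows are illustrative assumptions.
import pandas as pd

posts = pd.DataFrame({
    "created_at": pd.to_datetime(
        ["2015-10-01", "2015-10-02", "2015-10-09", "2015-10-10"]),
    "upvotes": [4, 10, 2, 7],
})

# Bin rows into calendar weeks and count posts / sum upvotes per bin.
weekly = posts.groupby(pd.Grouper(key="created_at", freq="W")).agg(
    posts=("upvotes", "size"),
    upvotes=("upvotes", "sum"),
)
```

The same pattern, applied to the comments table, yields the weekly comment and commenter counts.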
We found tags to be a poor categorisation scheme and so did not incorporate them into the analysis. There are, for instance, only 32 pages of AI posts and 15 pages of machine learning posts, even though those tags have existed for two and four years, respectively. The majority of earlier AI/ML posts are buried under other tags.
We see that although posting grows relatively slowly, counting in the low hundreds per week, it is the bedrock on which the upward trends in comments, upvotes per post, and upvotes per discussion are built.
October of 2015 is an inflection point where upvotes per discussion overtake upvotes per post. Overall, readership (measured via upvotes) grows much faster than content generation (posts and comments), which grows at a steadier pace.
The ratio of uncommented to commented posts is fairly even, but commented posts make up nearly 90% of total upvotes. Relevant questions to ask: are uncommented posts a sign of low quality, and if so, are they a necessary business cost of achieving high-quality content? One way of measuring this might be to compare spikes in Google views for referred domains whenever a post is made.
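The commented-versus-uncommented split is a straightforward groupby. A sketch, assuming per-post columns for comment counts and upvotes (the rows here are toy data):

```python
# A sketch of the upvote share going to commented vs. uncommented posts.
# Column names and rows are illustrative assumptions.
import pandas as pd

posts = pd.DataFrame({
    "comment_count": [0, 3, 0, 12, 1],
    "upvotes": [1, 20, 2, 45, 9],
})

# Group posts by whether they received any comments, then normalise
# the summed upvotes into fractions of the total.
share = (
    posts.groupby(posts["comment_count"] > 0)["upvotes"]
         .sum()
         .pipe(lambda s: s / s.sum())
)
# share[True] is the fraction of all upvotes earned by commented posts.
```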
The vast majority of post discussions end within 2-3 days of their start. Further analysis might examine whether the number and quality of comments vary with discussion duration.
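Discussion duration can be computed as the span between a post's first and last comment. A sketch, assuming each comment row records its post id and timestamp (toy data):

```python
# A sketch of measuring discussion lifespan per post.
# Column names and rows are illustrative assumptions.
import pandas as pd

comments = pd.DataFrame({
    "post_id": [1, 1, 1, 2, 2],
    "created_at": pd.to_datetime([
        "2015-10-01 09:00", "2015-10-01 18:00", "2015-10-03 11:00",
        "2015-11-05 12:00", "2015-11-05 13:30",
    ]),
})

# Lifespan of each discussion: last comment minus first comment.
lifespan = (
    comments.groupby("post_id")["created_at"]
            .agg(lambda s: s.max() - s.min())
)
```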
The upper percentiles of upvotes per post spread widely, reaching 56 votes, while the median is a relatively low 4.5. This is a recurrent theme in niche communities: data is sparse, so the majority of counts are low, yet there are strong, significant outliers.
We see this, for instance, in how relatively few members of the community win an outsized share of the total upvotes awarded, by orders of magnitude.
Put differently, the top 15 posters take 33 percent of all upvotes given to posts, whilst the top 15 commenters take 20 percent of all upvotes given to comments.
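The top-k concentration figures above come from a simple per-user aggregation. A sketch, using toy data and assumed column names (in our data, k = 15):

```python
# A sketch of computing the share of upvotes won by the top-k users.
# Column names and rows are illustrative assumptions.
import pandas as pd

posts = pd.DataFrame({
    "submitter": ["alice", "bob", "alice", "carol", "dave", "alice"],
    "upvotes": [30, 5, 25, 3, 2, 10],
})

# Total upvotes per user, ranked from highest to lowest.
by_user = (
    posts.groupby("submitter")["upvotes"]
         .sum()
         .sort_values(ascending=False)
)
# Fraction of all upvotes captured by the top k users (k = 2 for this toy data).
top_share = by_user.head(2).sum() / by_user.sum()
```

Repeating the calculation on the comments table gives the commenter concentration.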
Such users are not uncommon in network analysis; in fact, they are called superusers. They are the primary drivers of content and the core that any business built on community outreach must learn to leverage.
These are simple building blocks towards network analysis and a measure of information flow. We might follow up by scraping additional data on our superusers (the start dates of their accounts, for instance) and by tracking the meta-structure of nested comments to distinguish high-impact, response-generating comments.
Due to time constraints, we were unable to analyse most of our textual data: the bodies of comments and posts. Had we done so, we might have been able to derive a measure of information density.
Ultimately, what any social business should care about is robust metrics for both information flow and information density. Ideally, a business should learn to maximise both. However, there tends to be a tradeoff: greater volumes of social capital are traded in larger communities, but larger communities tend to produce lower-quality content.
And if fundamental features of the community change with excessive growth, then it risks losing the superusers upon whom it depends.
All these factors, with further statistical sophistication, can feed into a fundamental network analysis that drives business decisions. Even retail businesses have leveraged the social capital of subcultures into highly successful brands. Ralph Lauren is perhaps the classic case study, through its mutually beneficial relationships with, for instance, Black streetwear, western, and preppy subcultures.
If we were putting together a pitch deck right now to secure venture funding on a social startup, we would equip ourselves with these more developed metrics as proof of traction. To wrap up our pitch, it would be only natural to run an ARIMA forecast that projects growth metrics of social capital, provided our network fundamentals have real staying power.