Automate your Cover Letters with Python and mailmerge

Eugenia Dickson
Posted on Apr 27, 2021

When you look for a new position, attaching a Cover Letter to your application is very important: you want to stand out and showcase your personality and enthusiasm about the role and the company you’re applying to.

However, applying for dozens of positions every day turns writing cover letters into a very time-consuming exercise, especially when you need all that valuable time to prepare for the actual interviews.

The best solution is to automate the routine of replacing text in the letter template while keeping the personalization intact.

I used this tutorial as a starting point and created a workflow that collects all the variable text from a csv table and creates personalized, ready-to-go cover letters in seconds.

And here’s how you can do that too:

Creating a docx Template

First, you need to create a cover letter template and replace all the "moving parts" with Merge Fields.

You can use an existing cover letter and simply replace the variable text with the fields.

To do so, select the text, go to the Insert tab, click the Quick Parts button, and select Field…

Then select MergeField in the menu, fill in the Field name, and click OK:

After that, the text you selected should be replaced with <<field_name>>.

Each of the fields you create in the document will be a variable name that you’ll need to reference later in the code, so be mindful of these variable names.

It is also useful to insert the date using the Time field in the same menu; this way the date on the cover letter will always be current.

When you’re done, save your docx document and go to the next step.

Creating a csv table

Next, we'll create a table in which you will fill in all the variable text: company name, position name, personalized details, etc.

It's better to use the first line for variable names and make them match the field names you created in the docx template to avoid confusion.
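To make the table layout concrete, here is a minimal sketch that writes such a csv with Python's standard library. The column names come from the script later in this post; the row values ("Acme Corp" and so on) are hypothetical examples, not real data:

```python
import csv

# Header row: names match the Merge Fields referenced by the script below.
fieldnames = [
    "company_name",
    "position_name",
    "job_posting_source",
    "personalized_introduction",
    "personalized_industry_comment",
]

# One hypothetical application, purely for illustration.
rows = [{
    "company_name": "Acme Corp",
    "position_name": "Data Analyst",
    "job_posting_source": "LinkedIn",
    "personalized_introduction": "I have long admired Acme Corp's data team.",
    "personalized_industry_comment": "Manufacturing is rapidly adopting analytics.",
}]

with open("cover_letter_text.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```

Each new application then only takes one extra row in the table.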

Creating a Script

Note: the full code is also available on my GitHub.

First off, you need to install a few things:

# for populating the docx template
conda install lxml
pip install docx-mailmerge
# for converting created docx files to PDF
pip install docx2pdf

And load necessary libraries:

from __future__ import print_function
from mailmerge import MailMerge
from datetime import date
import pandas as pd
from docx2pdf import convert

Create path for your docx template and load your csv:

template = "Cover_letter_template.docx"
cl_text = pd.read_csv("cover_letter_text.csv")
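One caveat worth knowing: by default, pandas turns empty csv cells into NaN, which would show up literally as "nan" in a generated letter. Passing keep_default_na=False keeps them as empty strings instead. A small self-contained sketch (the inline csv data here is hypothetical):

```python
import pandas as pd
from io import StringIO

# Hypothetical csv with an empty "comment" cell.
csv_data = "company_name,comment\nAcme Corp,\n"

# Default behaviour: the empty cell becomes NaN (a float).
default = pd.read_csv(StringIO(csv_data))

# keep_default_na=False preserves empty cells as empty strings,
# which is safer for text that ends up in a letter.
safe = pd.read_csv(StringIO(csv_data), keep_default_na=False)
print(repr(safe.loc[0, "comment"]))
```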

This chunk of code creates a list of dictionaries, each of which collects all the text for one cover letter:

job_lst = []
for line_num in range(cl_text.shape[0]):
    job_dict = {}
    for col in cl_text.columns:
        job_dict[col] = cl_text.loc[line_num, col]
    job_lst.append(job_dict)
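As a side note, pandas can build the same list of dictionaries in a single call with DataFrame.to_dict("records"). A minimal self-contained sketch (the sample values are hypothetical stand-ins for the csv):

```python
import pandas as pd

# Tiny stand-in for the csv loaded above.
cl_text = pd.DataFrame({
    "company_name": ["Acme Corp", "Globex"],
    "position_name": ["Data Analyst", "Data Scientist"],
})

# One dictionary per row, keyed by column name: equivalent
# to the nested-loop version above.
job_lst = cl_text.to_dict("records")
print(job_lst[0]["company_name"])
```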

This chunk:

for job in job_lst:
    document = MailMerge(template)
    document.merge(
        company_name=job['company_name'],
        position_name=job['position_name'],
        job_posting_source=job['job_posting_source'],
        introduction=job['personalized_introduction'],
        comment=job['personalized_industry_comment']
    )
    document.write(f"./Name_Surname_Cover_Letter_{job['company_name']}.docx")
    document.close()

1. Goes through every dictionary in the list and fills in all the fields with the text from the csv file.

2. Creates a docx file with company name in it and saves it in the specified folder.

The keyword argument names passed to merge() must be the same as the Merge Field names you created in your docx template.

You can print the Merge Field names from your template in case you forgot them:

print(document.get_merge_fields())
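Since get_merge_fields() returns a set of field names, you can also run a quick sanity check that every field in the template is covered before merging. In this sketch the merge_fields set is hard-coded to stand in for the real call to document.get_merge_fields():

```python
# Stand-in for document.get_merge_fields(); in the real script,
# call that method on the opened MailMerge document instead.
merge_fields = {"company_name", "position_name", "job_posting_source",
                "introduction", "comment"}

# The keyword names used in document.merge(...) above.
merged_keywords = {"company_name", "position_name", "job_posting_source",
                   "introduction", "comment"}

# Any field in the template that merge() does not fill stays as
# <<field_name>> in the output, so catch it early.
missing = merge_fields - merged_keywords
assert not missing, f"Unfilled merge fields: {missing}"
print("all merge fields covered")
```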

Lastly, we need to convert the docx files to PDF format. You can convert all files at once by specifying the path to the folder containing the docx files (note that docx2pdf relies on Microsoft Word, so it only works on Windows or macOS with Word installed):

convert("my_docx_folder/")

That’s all!

Don’t forget to check your results before you send them out, and good luck with your job search!

About Author

Eugenia Dickson

My background lies in the Building Industry: Structural Engineering and Building Information Modeling. As I’ve always been interested in technology and innovations, I worked on improving various processes an engineer encounters on a daily basis, first as an...
