
Caterpillar Tube Assembly Pricing

Adam Cone and Ismael Jaime Cruz
Posted on Jun 30, 2016

Contributed by Adam Cone and Ismael Jaime Cruz. They are currently in the NYC Data Science Academy 12-week full-time Data Science Bootcamp program that began in April 2016. This post is based on their capstone project, completed in the final week of the program.

Introduction

Our challenge came from a Caterpillar-sponsored Kaggle competition that closed in August 2015; 1,323 teams competed.

Caterpillar Inc. is an American corporation which designs, manufactures, markets, and sells machinery and engines. Caterpillar is the world's leading manufacturer of construction and mining equipment, diesel and natural gas engines, industrial gas turbines, and diesel-electric locomotives. Caterpillar's line of machines ranges from tracked tractors to hydraulic excavators, backhoe loaders, motor graders, off-highway trucks, wheel loaders, agricultural tractors, and locomotives. Caterpillar machinery is used in the construction, road-building, mining, forestry, energy, transportation, and material-handling industries. (from Wikipedia)

[Image: Caterpillar heavy equipment]

Caterpillar relies on a variety of suppliers to manufacture tube assemblies for their heavy equipment. Our task was to predict the price a supplier will quote for a given tube assembly.

Objective: predict the quote price of a tube assembly, given past tube assembly price quotes

We programmed in Python using Jupyter Notebook. All our code is available on GitHub.

Data

A tube assembly consists of a tube and components. There are 11 types of components and a tube assembly can have any of these types in various quantities, or no components at all. In addition, the tube can be bent multiple times with a variable bend radius, and the tube ends can be prepared in various ways.

Our original data consisted of 21 different CSV files. Each file contained information about a different aspect of the tube assemblies. For example, tube.csv consisted of tube-specific information, such as the tube length, how many bends the tube had, the tube diameter, etc. We imported each of the CSV files into our notebook as independent pandas data frames.
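As a rough sketch of this import step (the file names tube.csv and train_set.csv follow the competition's CSV layout; the data/ directory is illustrative):

```python
import glob
import os

import pandas as pd

# Read every CSV into its own pandas data frame, keyed by file name.
frames = {os.path.splitext(os.path.basename(path))[0]: pd.read_csv(path)
          for path in glob.glob("data/*.csv")}

tube = frames["tube"]        # tube-specific attributes: length, number of bends, diameter, ...
train = frames["train_set"]  # historical quotes, including the target variable: cost
```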

Our first thought was to simply join the different data frames into a single data frame for all analysis. However, this would have resulted in a data frame with many missing values, since most tube assemblies lacked most components. For example, tube assembly TA-00001 had 4 components: 2 nuts and 2 sleeves. TA-00001 had no adaptor. The adaptor data frame had 20 adaptor-specific variables (columns), for example adaptor_angle. A left join of our tube data frame to our adaptor data frame would mean that TA-00001 would have a column for adaptor angle, even though TA-00001 has no adaptor component. This would result in a missing value in that column. Furthermore, such missing values would be problematic to impute, since their origin is structural rather than random. Therefore, instead of joining all the data frames, we engineered new features to capture as much information as possible from the component data frames while introducing no missing values in the train and test data frames.

Although we couldn't join most of the 21 CSV files, we did join the tube data frame to the training set data frame without introducing new missing values. To get information from the remaining CSV files, we engineered 3 new features that summarized data about the components. For example, with one new engineered feature, we could see that TA-00001 has two types of components, although we couldn't see what those components are, let alone the component details available in the related CSV files. Later, we added 11 additional features, each a count of a particular component type for each tube assembly. For example, with the 11 new engineered features, we see that TA-00001 has 2 nuts and 2 sleeves. Because we weren't sure whether these additional 11 columns would be useful in predicting tube assembly prices, we built two data frames for model fitting: a basic data frame (without the 11 new component-count columns) and a components data frame (with the 11 new component-count columns).
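The sketch below shows one way such component-type counts could be built. It assumes a bill-of-materials-style table (columns component_id_1..8 and quantity_1..8) and a component table mapping each component_id to a component_type_id; the exact file and column names are assumptions rather than a transcript of our notebook:

```python
import pandas as pd

bom = pd.read_csv("data/bill_of_materials.csv")  # tube_assembly_id, component_id_1..8, quantity_1..8
comp = pd.read_csv("data/components.csv")        # component_id, component_type_id, ...

# Reshape the wide bill of materials into (tube_assembly_id, component_id, quantity) rows.
pairs = []
for i in range(1, 9):
    part = bom[["tube_assembly_id", f"component_id_{i}", f"quantity_{i}"]].dropna()
    part.columns = ["tube_assembly_id", "component_id", "quantity"]
    pairs.append(part)
long_bom = pd.concat(pairs, ignore_index=True)

# Map each component to its type, then count the components of each type per assembly.
long_bom = long_bom.merge(comp[["component_id", "component_type_id"]], on="component_id")
type_counts = long_bom.pivot_table(index="tube_assembly_id",
                                   columns="component_type_id",
                                   values="quantity",
                                   aggfunc="sum",
                                   fill_value=0)
# type_counts has one column per component type; e.g. the row for TA-00001
# would show 2 nuts, 2 sleeves, and 0 of every other type.
```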

For missing values in our categorical data (e.g. tube specifications), we added a new NaN level, obviating the need to impute or throw away observations. The only numerical missing values were in bend_radius: 30 total in all 60,448 test and training observations (~0.05%). We imputed these bend_radius values with the mean bend_radius, since we found no pattern to the missingness and there were relatively few missing values. Next, we converted our categorical variables to dummy variables, where each factor level became a column with values of either 0 or 1. Lastly, we converted the quote dates into days after the earliest date.
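A minimal sketch of these cleaning steps, applied to one joined data frame (the column names bend_radius and quote_date match the competition data; everything else is illustrative):

```python
import pandas as pd

def clean(df):
    """Return a copy of df with no missing values and only numeric columns."""
    df = df.copy()

    # Convert quote dates into days after the earliest quote date.
    dates = pd.to_datetime(df["quote_date"])
    df["days_since_first_quote"] = (dates - dates.min()).dt.days
    df = df.drop(columns="quote_date")

    # Impute the ~30 missing bend_radius values with the mean bend_radius.
    df["bend_radius"] = df["bend_radius"].fillna(df["bend_radius"].mean())

    # Missing categorical values become their own "NaN" level rather than being imputed.
    cat_cols = df.select_dtypes(include="object").columns
    df[cat_cols] = df[cat_cols].fillna("NaN")

    # One-hot encode the categorical variables: one 0/1 column per factor level.
    return pd.get_dummies(df, columns=list(cat_cols))
```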

After the above changes, we had two complete (i.e. no missing values), entirely numerical data frames to use for model fitting and prediction: basic (158 columns) and components (169 columns).

Modeling

We used three tree-based models:

  1. decision trees,
  2. random forests, and
  3. gradient boosting.

We fit each model to each of the two training sets (basic and components). We compared each model's performance on the basic and components data frames, and against the other two tree-based models. We expected better predictions from the models fitted to the components data frame than models fitted to the basic data frame. In addition, we expected gradient boosting to outperform random forests, and that both would outperform decision trees.
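The tuning parameter names used below (min_samples_leaf, n_estimators, max_leaf_nodes) are scikit-learn's, so a minimal sketch of the fitting step might look like this (variable and function names are illustrative):

```python
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

def fit_tree_models(X, y):
    """Fit the three tree-based regressors on one training frame and return them."""
    models = {
        "decision_tree": DecisionTreeRegressor(random_state=0),
        "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
        "gradient_boosting": GradientBoostingRegressor(n_estimators=100, random_state=0),
    }
    for model in models.values():
        model.fit(X, y)
    return models

# Fit once per training frame:
# basic_models = fit_tree_models(X_basic, y)
# components_models = fit_tree_models(X_components, y)
```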

For our decision trees, we made four attempts, each time changing the tuning parameter grid. These were labeled as A1, A2, A3, and A4 on the x-axis of the graph below. The performance of each decision tree was measured using the percentage of Kaggle teams outperformed. In the figure below, we see that from A1 to A3, the models fitted with the components data frame outperformed those fitted with the basic data frame. The tuning parameters for A1-A3 were min_samples_leaf (the minimum number of observations in a node) and min_samples_split (the minimum number of observations necessary to split a node).

For A4, we increased the computation time but changed the tuning parameter to max_leaf_nodes (the maximum number of leaf nodes in the final tree), leaving min_samples_split and min_samples_leaf at their defaults. Despite the increased computation time, A4 performed worse than any of the previous attempts. Furthermore, the performance of the model fitted to the components data frame dropped below that of the model fitted to the basic data frame.
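A hedged sketch of the two tuning setups, using scikit-learn's GridSearchCV (the grid values shown are illustrative, not our exact grids, and the cross-validation setup is an assumption):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor

# A1-A3: tune the minimum observations per leaf and per split.
grid_a1_a3 = {"min_samples_leaf": [1, 5, 20],
              "min_samples_split": [2, 10, 50]}

# A4: tune only the maximum number of leaf nodes; other parameters stay at their defaults.
grid_a4 = {"max_leaf_nodes": [10, 50, 200, 1000]}

search = GridSearchCV(DecisionTreeRegressor(random_state=0), grid_a1_a3, cv=5)
# search.fit(X_components, y)   # repeat with X_basic, and with grid_a4 for attempt A4
```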

[Figure: decision tree performance (percentage of Kaggle teams outperformed) for attempts A1-A4, basic vs. components data frames]

For our random forests, we made two attempts, each time changing the tuning parameter grid. These were labeled as A1 and A2 on the x-axis of the graph below. In the figure below, we see that for A1, the model fitted with the components data frame outperformed the one fitted with the basic data frame. The tuning parameters for A1 were min_samples_leaf and min_samples_split, and we fixed n_estimators (the number of trees) at 100.

For A2, we increased the computation time, increased n_estimators to 1,000, changed the tuning parameter to max_leaf_nodes, and left min_samples_split and min_samples_leaf as defaults. Despite the increased computation time, A2 performed worse than A1.

[Figure: random forest performance for attempts A1 and A2, basic vs. components data frames]

For our gradient boosting, we made two attempts, each time with a different tuning parameter grid. These were labeled as A1 and A2 on the x-axis of the graph below. In the figure below, we see that for A1, the model fitted with the components data frame performed identically to that fitted with the basic data frame. The only tuning parameter for A1 was learning_rate: n_estimators was fixed at 100 and max_leaf_nodes was fixed at 2 (other parameters were defaults). max_leaf_nodes fixed at two corresponds to 'stumps': trees with exactly one split.

For A2, we increased the computation time, increased n_estimators to 1,000, changed the tuning parameter grid to three values of learning_rate between 0.001 and 0.01, set max_leaf_nodes to 2, and left the other parameters at their defaults. Despite the increased computation time, A2 generated nonsensical results. Specifically, A2 on both the basic and components data frames predicted negative costs for multiple tube assembly quotes.
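A sketch of the A2 setup (the grid values follow the range described above; the cross-validation and random_state settings are illustrative):

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

gb = GradientBoostingRegressor(n_estimators=1000, max_leaf_nodes=2, random_state=0)
grid = {"learning_rate": [0.001, 0.003, 0.01]}   # three values between 0.001 and 0.01

search = GridSearchCV(gb, grid, cv=5)
# search.fit(X_components, y)   # repeat with X_basic
```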

[Figure: gradient boosting performance for attempts A1 and A2, basic vs. components data frames]

Finally, the highest-performing models for each tree-based method (all fitted to the component data frame) are compared in the bar plot below. The random forest and the decision tree performed similarly, and both outperformed gradient boosting.

[Figure: best decision tree, random forest, and gradient boosting models compared (percentage of Kaggle teams outperformed)]

Lessons

We anticipated that the models fitted to the components data frame would outperform the models fitted to the basic data frame. We did not consistently observe this. For the decision tree, A4 showed better results for the basic data frame than for the components data frame. For gradient boosting, A1 showed identical performance for the models fit on the two data frames, and neither of the A2 gradient boosting models produced intelligible results at all. Therefore, we conclude that our additional feature engineering did not consistently improve prediction accuracy. Although models fit to the components data frame performed better in 5 of our 8 documented modeling attempts, this improvement seemed to depend on the type of model used and the tuning parameters chosen.

Furthermore, we anticipated that gradient boosting would outperform random forests and decision trees. We did not observe this. In fact, our best gradient boosting model outperformed ~8% of Kaggle teams, whereas our worst decision tree outperformed ~12% and our worst random forest outperformed ~14%. Gradient boosting was also the only method to generate nonsensical results: the A2 gradient boosting models predicted negative costs on both data frames, which would be interpreted as the supplier paying Caterpillar for each tube assembly ordered. The decision trees and random forests never did this.

In our final decision tree and random forest attempts, performance decreased sharply when we switched to a different tuning parameter, despite investing more computation time in searching the parameter grid. This suggests that the choice of tuning parameter can matter more than raw computational power in the tuning process.

Similarly, the performance of A1 gradient boosting was identical for the basic and components data frames. We suspect that by using 100 'stumps' (trees with exactly one split), we could use at most 100 features of our feature space (158 for basic; 169 for components), since each stump splits on a single feature. So it's possible that the additional 11 features in the components data frame were never used by the gradient boosting algorithm, making the trees, and therefore the predictions, identical in both cases. By using fewer stumps than there are features, we limited the algorithm's access to the data.

Finally, we logged several of our parameter tuning runs. For example, our log for the decision tree model using the components data frame is illustrated in the table below. We found this logging process helpful in diagnosing the changes in performance of each of the models.

[Table: parameter tuning log for the decision tree models fitted to the components data frame]

Improvements

With more time to work on this project, we would be even more diligent and detailed in our logging of parameter tuning attempts. Specifically, we would also log the date, time of day, computation duration, R^2 of test and train, and parameter grid ranges for each parameter.

Also, we would like to tune our parameters based on the Kaggle metric: the Root Mean Squared Logarithmic Error (RMSLE):

RMSLE = sqrt( (1/n) * Σ_{i=1}^{n} ( log(p_i + 1) − log(a_i + 1) )^2 ), where p_i is the predicted quote price, a_i is the actual quote price, and n is the number of quotes,

rather than on the coefficient of determination (R^2), which is the default tuning metric for regressors in the Python scikit-learn library. We hope that by using the RMSLE we would achieve better results, since model tuning would then be directly sensitive to the metric the Kaggle competition is scored on.
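A minimal sketch of how this could be done with a custom scorer (scikit-learn maximizes scores, so the metric is negated; the estimator and grid are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV

def rmsle(y_true, y_pred):
    y_pred = np.maximum(y_pred, 0)  # guard against negative predictions before taking logs
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

rmsle_scorer = make_scorer(rmsle, greater_is_better=False)

search = GridSearchCV(RandomForestRegressor(random_state=0),
                      {"min_samples_leaf": [1, 5, 20]},
                      scoring=rmsle_scorer, cv=5)
# search.fit(X_components, y)
```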

Finally, we would like to try using Spark to process the data in parallel and improve our computation time.

About Authors

Adam Cone

Adam Cone received his BA with honors in Math from NYU, where he conducted original research in computational neural science and experimental fluid mechanics. He received his MA in Applied Math from UCLA, concentrating in differential equations and...
View all posts by Adam Cone >

Ismael Jaime Cruz

Ismael's roots are in finance and statistics. He has six years of experience in such areas as financial analysis, trading and portfolio management. He was part of the team that launched the very first exchange-traded fund in the Philippines....
View all posts by Ismael Jaime Cruz >
