Talks Tech #28: Blending Art and Science to Identify Actionable Drivers

Written by Priyanka Khare

Podcast

Women Who Code Talks Tech 28 | Spotify | iTunes | Google | YouTube | Text

Priyanka Khare, Data Science Manager at The Home Depot, shares her talk, “Blending Art and Science to Identify Actionable Drivers of Shrink in Stores.” She defines shrink and discusses examples that are controllable versus uncontrollable at the store level. She shares the process of designing a system to improve shrink and the importance of speaking the language of your target audience when creating and implementing new processes.

I support The Home Depot’s operations teams with the data analysis needed to reduce inventory shrinkage in our stores. Inventory shrinkage, or shrink as it’s commonly called, occurs when your actual inventory on hand is less than what is recorded in your books. A recent project our team worked on blended art and science to identify actionable drivers of shrink in our stores. We are the world’s largest home improvement retailer, and we’ve grown significantly over the past few years, adding $40 billion to our top line in just the last couple of years. We have a footprint in North America with over 2,300 stores across the United States, Canada, and Mexico. We employ over half a million associates, and the United States is the company’s largest market, with over 2,000 stores.

An average Home Depot store is about 105,000 square feet, with hundreds of thousands of SKUs per store, so the scale at which we operate is massive. Shrink in stores is a growing problem for the retail industry. The National Retail Federation pegged it at 1.62% of sales in 2020, a significant $62 billion. Shrink occurs for a variety of reasons. Some are malicious, such as theft or fraud. You may have seen recent news stories about stores being targeted. This is a huge problem for the industry: not only does it result in losses in terms of shrink, it also threatens the lives and safety of retail workers and the customers who shop there. Unfortunately, malicious activity is a function of the area surrounding a store, and it is not always controllable by stores.

Other drivers are operational, which can cause shrink if processes are not followed as intended. Consider this example: a customer goes on HomeDepot.com and orders some ceramic pots to pick up in the store. A store associate retrieves the pots from the shelf and places them in the staging area for store pickup orders. The customer never shows up to pick up the order, nor do they cancel it. The pots sit in the staging area for a few days and eventually get lost or damaged, which results in shrink. Perhaps the worst consequence is the lost sale: while the pots were sitting in the staging area, a customer who actually wanted to purchase them could not. Unlike malicious drivers, operational drivers are store controllable.

If processes are followed the way they should be, we can prevent shrink from occurring. For The Home Depot, just like the rest of the industry, shrink is a growing problem, and to solve it, our leaders needed visibility into what was driving shrink and which of those drivers were store controllable. With over 2,000 stores and hundreds of thousands of SKUs per store, counting each and every SKU is a daunting task, and we cannot do it very often. More often than not, a store undergoes a physical count once a year, which gives us one shrink data point per store per year. And shrink can have operational drivers, which are process driven, or malicious drivers, such as theft.

Having a technology stack that can handle our data volumes, scale up and down as needed, and process our analyses in a timely and cost-effective manner is critical to our success. The first step was defining and scoping the problem. We set out to identify the most influential store-controllable operational drivers and malicious drivers that explain shrink. Next was figuring out what features we wanted to put through the model. We tapped into our feature repository, which has hundreds of potential shrink drivers. Some are operational, like vendor credits. Some are malicious, like theft cases. Some are control variables, like store location or the count of competitor stores in the area. The repository is built in Google BigQuery and pulls data from various sources: our store systems, ordering systems, and third-party vendor data. The metrics in the repository are updated weekly, and most of the team’s shrink-related projects tap into it.
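
To make the repository concrete, here is a minimal sketch of pulling candidate drivers from a feature table staged in BigQuery, using the google-cloud-bigquery Python client. The project, dataset, table, and column names are illustrative placeholders, not the actual repository schema.

```python
# Minimal sketch: pull candidate shrink drivers from a BigQuery
# feature repository into a pandas DataFrame. All identifiers below
# (project, dataset, table, columns) are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # authenticates with default GCP credentials

sql = """
    SELECT
        store_id,
        fiscal_week,
        vendor_credit_rate,       -- operational driver (illustrative)
        theft_case_count,         -- malicious driver (illustrative)
        competitor_store_count    -- control variable (illustrative)
    FROM `my-project.shrink.feature_repository`
    WHERE fiscal_week >= '2022-01'
"""

# Run the query and materialize the result as a DataFrame for modeling.
features = client.query(sql).to_dataframe()
print(features.head())
```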

For modeling, we used Google’s Vertex AI, which essentially gives us Jupyter Notebooks running on a virtual server in the cloud. We used Python for the modeling components, and BigQuery to stage our input data as well as the model outputs. The goal was to understand what the drivers of shrink were, so explainability was the primary concern and we were less focused on accuracy. We used a regression model, as it provided the best insight into which drivers correlated with shrink. We went through multiple model iterations. Initially we built a baseline model, which we then used to benchmark and compare subsequent models. We built models with all variables, operational, malicious, and control, to understand everything that contributed to shrink. We also ran operational-only and malicious-only models: an operational-only model, for example, could pick up on operational drivers that would otherwise be hidden by malicious variables more strongly related to shrink, and vice versa. We brought our business partners and field leaders along with us on the modeling journey, keeping them updated on the various model outputs and the variables we were seeing as significant. They helped us think through why certain things were popping and why some of the relationships with shrink looked the way they did.
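
As an illustration of that approach, the sketch below fits the same explainability-focused linear regression on different variable subsets with statsmodels, so the significant drivers in each subset can be compared. The feature names and synthetic data are stand-ins I have invented; the talk does not disclose the actual variables.

```python
# Simplified sketch: fit the same OLS regression on different variable
# subsets and compare coefficients. Feature names and data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500  # stand-in for store-level observations staged in BigQuery
features = pd.DataFrame({
    "vendor_credit_rate": rng.uniform(0, 0.1, n),      # operational
    "unclaimed_pickup_rate": rng.uniform(0, 0.1, n),   # operational
    "theft_case_count": rng.poisson(5, n),             # malicious
    "competitor_store_count": rng.integers(0, 10, n),  # control
})
# Synthetic target: shrink rate loosely driven by two of the features.
features["shrink_rate"] = (
    0.005
    + 0.05 * features["vendor_credit_rate"]
    + 0.002 * features["theft_case_count"]
    + rng.normal(0, 0.003, n)
)

operational = ["vendor_credit_rate", "unclaimed_pickup_rate"]
malicious = ["theft_case_count"]
controls = ["competitor_store_count"]

def fit_shrink_model(df, predictors):
    """OLS of shrink rate on a chosen subset of candidate drivers."""
    X = sm.add_constant(df[predictors])
    return sm.OLS(df["shrink_rate"], X).fit()

# Run the all-variable model plus the subset models, so operational
# drivers aren't masked by stronger malicious ones (and vice versa).
subsets = {
    "all_variables": operational + malicious + controls,
    "operational_only": operational + controls,
    "malicious_only": malicious + controls,
}
for name, cols in subsets.items():
    model = fit_shrink_model(features, cols)
    print(f"{name}: R^2 = {model.rsquared:.3f}")
    print(model.params.round(4), "\n")
```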

We finally landed on a set of operational and malicious drivers that we felt good about from a modeling standpoint. We took that list and worked with our business partners and field leaders to apply an actionability lens to it. The goal was to identify operational drivers that were truly store controllable and where we could define a clear action for the stores to take. Viewed through that lens, some variables were weeded out further. For example, items lost in transfers from one facility to another are not something a store can control; that is controllable by the company, but not by the store, so we dropped the variable from the final list. In discussions with the business partners, there were also a couple of variables that the model had not picked up as significant.

The business partners felt very strongly about them and were clear on what action needed to be taken to mitigate shrink, so we added those two variables to the final list as well. We were cognizant of how many drivers would make the final list: we didn’t want too many, because the stores would have too many things to do, but we didn’t want too few either. We had similar discussions on the malicious drivers. While malicious drivers are not typically store controllable, we made sure our teams felt good about the drivers we finally picked. Next was providing the stores visibility into the drivers and communicating what action they needed to take. We made the drivers available in our store reporting applications. We also came up with a store execution score that would allow us to track how the stores were performing on each driver; if a store was doing well, we would expect its execution score to go up over time. All the driver data, as well as the execution scores, were staged in Google BigQuery, and we used Tableau to analyze store distributions as we developed the execution score.
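
The talk doesn’t spell out how the execution score is computed. One plausible construction, sketched below under that assumption, credits a store on each driver when it beats an illustrative target and averages the results into a 0–100% score per store; every metric name and threshold here is hypothetical.

```python
# Hedged sketch of one possible execution score: the share of drivers
# on which a store meets its target. All values are illustrative.
import pandas as pd

# Hypothetical weekly driver metrics per store; lower is better here.
drivers = pd.DataFrame(
    {
        "store_id": [101, 102, 103],
        "vendor_credit_rate": [0.01, 0.05, 0.02],
        "unclaimed_pickup_rate": [0.02, 0.08, 0.01],
    }
).set_index("store_id")

# Illustrative per-driver targets a store is expected to stay under.
targets = {"vendor_credit_rate": 0.03, "unclaimed_pickup_rate": 0.04}

# A store earns credit on a driver when it meets the target; the
# execution score is the fraction of drivers it is executing well on.
meets_target = pd.DataFrame(
    {col: drivers[col] <= limit for col, limit in targets.items()}
)
execution_score = meets_target.mean(axis=1)  # 0.0 to 1.0 per store
print(execution_score)  # e.g. store 102 misses both targets, scoring 0.0
```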

When we looked at how the execution score was distributed across our stores, we saw the expected bell curve, with a slightly longer tail on the lower-performing stores. We also looked at how the execution score correlated with shrink rate: as the execution score increased from 10% to 100%, the shrink rate decreased. This gave us the validation we needed that the drivers we had picked and the execution score we had designed would take us in the right direction. This process is a really good example of blending art and science to solve a problem. We built a model and identified the drivers that were significantly correlated with shrink. Then we applied an operations lens to pick the drivers that stores could control. We gave the stores visibility into the action needed on these drivers and designed an execution score that would let us track their performance over time. And we did it all by bringing our business partners and field leaders along with us on the journey.
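
One simple way to run that kind of validation is to bucket stores by execution score and confirm that the mean shrink rate falls as the score rises. The sketch below does this on synthetic data wired to mimic the reported relationship; the team’s actual analysis used Tableau over data staged in BigQuery.

```python
# Hedged sketch of the validation step on synthetic data: group stores
# into execution-score buckets and check mean shrink rate by bucket.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
stores = pd.DataFrame({"execution_score": rng.uniform(0.1, 1.0, 2000)})

# Simulate the relationship the talk reports: higher score, lower shrink.
stores["shrink_rate"] = (
    0.03
    - 0.02 * stores["execution_score"]
    + rng.normal(0, 0.002, len(stores))
)

# Bucket scores in 10% increments and compare average shrink per bucket.
buckets = pd.cut(stores["execution_score"], bins=np.arange(0.1, 1.1, 0.1))
print(stores.groupby(buckets, observed=True)["shrink_rate"].mean())
```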

The end result was better and more actionable. Including them in the process early on ensured that they were aligned and more invested in the success of the project. The project rolled out successfully and was well received in the field. There were some things that worked well for us and others that could have gone better. One major factor in the project’s success was that it had alignment at the executive level: this was important and the right thing to do for the company, which ensured that all the teams involved knew the priority. We were all marching toward a common goal, and since the priority was clear, we could easily say no to anything that competed with this project for resources.

Collaboration between the teams really worked well. We all had well-defined roles, and the fact that we engaged our partners early on and involved them in the driver selection process had them invested in the success of the project. Once we had determined the scope of the project, we tried our best not to stray from it. We were very focused on getting a version one out, and we parked any future enhancements for a version two. We had regular stand-ups with our teams as well as business partners, making sure things were moving along and that we addressed any blockers in a timely manner. Another thing that helped us hit the ground running was our data and analytics infrastructure. Our shrink repository of potential drivers was already available and easy to modify; we use it for most of our shrink projects, and we were able to leverage it for this driver analysis as well. We also have multiple environments already set up that allow us to run multiple projects in parallel.

This makes sure the team doesn’t step on each other’s toes. We also have well-defined enterprise tools and technologies, supported by IT teams, for data engineering, model development, scheduling, and visualization. These work at the scale we need, so no time was lost choosing the right technology or building up features from scratch. We also discovered that we could draw from methodologies we had used on previous projects: we had done driver analyses in the past, and we had built scoring models in the past. Though they may not have solved the exact same problem, they were similar enough that we could reuse components from them. You don’t always have to reinvent the wheel; you can draw from something someone has already worked on, and that helps you move a little faster.

Most of us have been in the situation where we’ve had to explain something technical to a non-technical audience. In this case, we were partnering with field leaders. They’re world-class operators who know from experience how operational processes cause shrink, but they don’t really understand regression models, nor do they care how many iterations we ran or what the R-squared was. When we met with them to review drivers, keeping it simple really helped. We would talk about the drivers that were showing up and how they related to shrink, then walk through the business process and discuss why a driver was or was not showing up in the model. We were speaking their language, so they were able to partake in the conversation and we kept moving forward. There are some things that could have gone better. Initially, cross-team collaboration was off to a slow start; once everyone was aligned that this was the number one priority, we moved full steam ahead. We also rolled the drivers out in a very short period of time, and the team wished they had had more time to validate the data and the metrics prior to launch.