Back in January, I had the privilege of participating in the Kellogg Biotech Case competition (link). It was the first time I had participated in a case competition, and although this is a touch overdue, I wanted to share my reflections on the experience, particularly my thoughts on the methodology, now that I have the benefit of hindsight.
Case Competition Rules
Case competitions are common in business school. Basically, teams are given a description of a business challenge, and their goal is to prepare a presentation that analyzes the problem and recommends a solution. Teams then present to a panel of judges, who evaluate them on their analytical rigor, thoroughness, clarity, and presentation quality.
The rules of specific competitions can vary, but for this one:
- Teams were 4-5 people each
- The case (a ~13 page description of the business challenge) was sent to the teams on Friday, January 17th, and all presentation materials had to be submitted by the following Friday, January 24th.
- The presentations were expected to be ~20 minutes long, with an extra 5 minutes to answer questions (25 minutes total)
- Although we were permitted to present in any format, PowerPoint is generally considered the best practice
The topic of this year’s case was peanut allergy treatments, as seen from the perspective of Oakdale Pharmaceuticals, a fictional company. Oakdale’s business development team was considering whether it should acquire Aimmune Therapeutics, a (real) company developing a product called Palforzia.
Palforzia is an “oral immunotherapy”, a tablet which, taken daily, reduces an allergic patient’s sensitivity to peanuts. Palforzia had completed its Phase III clinical trials, which demonstrated that it was safe and effective, and as of January 2020 it was on the cusp of receiving FDA approval. In the case, the market value of the company was $2.31 billion. The question is whether this value was accurate.
(As an aside, this reflected the value of the company in January; if Oakdale were buying Aimmune today, they would only need $0.9 billion. Call it the COVID-19 discount).
I wanted to talk about how we approached this problem, because I think the lessons from this exercise are generally applicable to any problem where you need to build a model. Note, the approach I describe below is not what we did, but a description of what I would have wanted to do with the benefit of hindsight.
Solving the Modeling Problem
The valuation of Aimmune is a modeling problem. The company’s value is directly tied to how profitable we expect manufacturing and selling Palforzia to be. Once we model the year-to-year profits, it’s just a matter of comparing the value we derive from the model to the company’s $2.31 billion sticker price.
Because the model is so important, the focus of the presentation should be almost entirely on justifying the model’s design and our choices for the input parameters.
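To make that final comparison concrete, here is a minimal sketch of the last step. The annual profit figures and the 10% discount rate are placeholders I made up for illustration; they are not the numbers from the case or from our model.

```python
# Hypothetical annual profits from selling Palforzia, in $ billions
# (placeholder numbers, not the case's or our model's figures).
annual_profits = [0.1, 0.3, 0.5, 0.6, 0.6, 0.5, 0.4]
discount_rate = 0.10  # assumed discount rate

# Discount each year's profit back to the present and add them up.
npv = sum(profit / (1 + discount_rate) ** (year + 1)
          for year, profit in enumerate(annual_profits))

sticker_price = 2.31  # Aimmune's market value in the case, $ billions
print(f"Modeled value: ${npv:.2f}B vs. sticker price: ${sticker_price:.2f}B")
```

Everything interesting happens upstream of this snippet, in the forecast of those year-to-year profits.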
Choosing parameters for your model
How hard it is to choose your parameters varies a lot from problem to problem. Fortunately, I was able to help our team shortcut this process somewhat because I spent four years working for an expert in forecasting revenues for pharmaceutical products.
Here’s an overview of the industry standard parameters:
I should note that these aren’t hard and fast rules, and you can tweak them to fit the specific disease and the data that is available to estimate the value of each parameter.
For example, my team decided that several of the factors above could be rolled into a single statistic: the percentage of patients with a peanut allergy who are prescribed an EpiPen. Because EpiPens are prescribed to essentially every patient who shows up at a doctor’s office and is considered at risk of a severe allergic reaction, this one number captures both treatment-seeking behavior and disease severity.
I won’t go into much depth about the actual construction of the model in Excel. It takes a little bit of work to set everything up, and it helps to have seen a finished model before, but at the end of the day, it’s mostly just multiplication. The simplicity of the model is one of its strengths: it means that most of the people who use the model can understand the parameters that were used to construct it.
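To give a flavor of what “mostly just multiplication” means, here is a rough sketch of a single forecast year. Every value below is a placeholder invented for illustration, and the parameter names are only examples of the kind of inputs a patient-based forecast typically uses, not the exact ones from our model.

```python
# One forecast year of a patient-based revenue model (placeholder values).
us_population        = 330_000_000
peanut_allergy_rate  = 0.01     # prevalence of peanut allergy
epipen_rx_rate       = 0.45     # our rolled-up "seeks treatment / severe" proxy
eligible_share       = 0.50     # e.g. age restrictions on the drug's label
market_share         = 0.20     # share of eligible patients on Palforzia this year
annual_price         = 4_000    # net price per patient per year, in $
gross_margin         = 0.80     # share of revenue left after cost of goods

patients_on_drug = (us_population * peanut_allergy_rate * epipen_rx_rate
                    * eligible_share * market_share)
annual_profit = patients_on_drug * annual_price * gross_margin
print(f"{patients_on_drug:,.0f} patients -> ${annual_profit / 1e9:.2f}B profit")
```

Repeat this for each forecast year, with the market share ramping up along an uptake curve, and you have the year-to-year profits the valuation needs.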
The challenging part is estimating the values of your parameters. This is where you have to make judgment calls that a skeptical audience might press you to justify, and for good reason. You’re trying to predict the future, which is not an easy thing to do. The goal is to seem self-assured and confident in your model’s outputs while also acknowledging the inherent uncertainty of the entire exercise.
I think a good rule of thumb, when deciding whether to include a piece of information in the presentation, is to ask yourself whether the information will help you justify one of the decisions you made when estimating one of the values.
Accounting for uncertainty
It’s a truth universally understood, but rarely acknowledged, that most of the numbers in these models are barely-educated guesses. This is doubly true in competitions like these, where the situations are partially fictional and the contestants have one week to do their analysis and no research budget.
(In the real world, pharma companies will often commission primary market research, costing tens of thousands of dollars per study, to get a marginally better estimate of their product’s future market share.)
The problem is that there are a lot of parameters, and beneath each parameter is a very deep rabbit hole of research. If you have done one of these exercises before, you know what I’m talking about. If not, I’m not sure it’s possible to exaggerate just how deep these rabbit holes can go. It’s very easy to spend a lot of time and energy trying to understand one of your parameters, only to realize that the answers are inconclusive, don’t help you support your estimates, and are too convoluted to communicate in the context of a presentation.
To avoid going down rabbit holes, it’s important to know what is good enough. When can you stop doing research? What numbers are you looking for?
Here, I want to outline my current thinking about how to answer these questions, though I confess this is where I depart from experience and begin to speculate.
The first step is to set a Best Guess for each parameter based on an initial round of research. This is what you will plug into your first draft of the model. Sometimes your best guess will be well supported by your research; other times it will be raw speculation. But you need to put something into the model.
The second is what I will call the Range of Plausibility: as the name implies, the range of values you think a parameter could plausibly take. This range is a little subjective. Maybe you’re just ballparking the estimate. Maybe the methodology other sources use to estimate the parameter has some sort of error. Maybe multiple sources provide conflicting estimates. At the end of the day, it’s helpful to have such a range in your head.
For example, my research on the rate of EpiPen use among peanut allergy patients found some conflicting estimates. One source said ~50% of patients had an EpiPen Rx; another said 35%. As such…
- Our Best Guess, which we used in our model, was ~45% (we thought that the 50% study was a little more credible, but hedged down a little bit)
- Our Range of Plausibility was 30% to 55%.
If the Range of Plausibility is large, it may be productive to also think about a Range of Sanity. The Range of Sanity covers values that fall outside your Range of Plausibility but are still not impossible, assuming your analysis is at least loosely connected to reality. Put another way, if the true value of a parameter is outside your Range of Sanity, it means you’ve missed something really, really fundamental.
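If it helps to see these three concepts side by side, here is one way to carry them around, using the EpiPen example from above. The Range of Sanity bounds below are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Parameter:
    """A model input together with its Best Guess and the two ranges."""
    name: str
    best_guess: float
    plausible: tuple[float, float]  # Range of Plausibility (low, high)
    sane: tuple[float, float]       # Range of Sanity (low, high)

# The EpiPen Rx rate from the example above; the sanity bounds are made up.
epipen_rx_rate = Parameter(
    name="EpiPen Rx rate among peanut-allergy patients",
    best_guess=0.45,
    plausible=(0.30, 0.55),
    sane=(0.20, 0.70),
)
```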
So you have a best guess for all of your parameters plugged into your model. You also have, in the back of your mind, a range of plausibility and a range of sanity for each of your parameters. Now what?
The answer: you run a sensitivity analysis.
A sensitivity analysis tells you what happens to your model’s output if you change each of your inputs. In more colloquial language, a sensitivity analysis answers the question: “How wrong can I be about my parameters without having to change my story?”
Here, your range of plausibility and range of sanity allow you to evaluate that answer.
If a parameter can take any value within its Range of Plausibility without substantially changing your story, you’re golden. If it can’t, that’s a sign you should do more research, develop a more informed opinion, and hopefully narrow your Range of Plausibility. For example, if two studies give conflicting estimates for a parameter, dig a little deeper into each study’s methodology to see if one is more credible than the other.
The sensitivity analysis tells you where it’s important to do more research and go a little deeper down the rabbit hole.
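In code, a simple one-at-a-time sensitivity analysis looks something like the sketch below. It assumes the Parameter structure from the earlier snippet and a hypothetical run_model function standing in for the spreadsheet calculation.

```python
def sensitivity(parameters, run_model):
    """Swing each parameter across its Range of Plausibility, one at a time,
    holding everything else at its Best Guess, and report the output swing."""
    best_guesses = {name: p.best_guess for name, p in parameters.items()}
    base = run_model(best_guesses)

    swings = {}
    for name, p in parameters.items():
        outputs = []
        for value in p.plausible:              # low end, then high end
            inputs = dict(best_guesses)
            inputs[name] = value
            outputs.append(run_model(inputs))
        swings[name] = (min(outputs) - base, max(outputs) - base)
    return swings  # parameters with the biggest swings deserve more research
```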
A note on correlation between parameters
The standard sensitivity analyses that I know how to run don’t consider the interaction between multiple variables. However, there will be times when this isn’t enough because variables are correlated with each other. There are several such parameters in this case. For example, if doctors consider the clinical profile of Palforzia outstanding, they are likely both to prescribe it to more patients (the peak market share is higher) and to adopt it more quickly (the uptake curve is steeper).
Here, I think the best way to handle it is to build multiple versions of the model: an “upside” scenario, where the correlated parameters are all better than the Best Guess, and a “downside” scenario, where they are all worse.
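A sketch of what that might look like, building on the earlier snippets (it assumes the same hypothetical parameters dict and run_model function; the scenario values are placeholders chosen only to show correlated inputs moving together):

```python
# Correlated parameters move together within a scenario, not one at a time.
scenarios = {
    "base":     {"market_share": 0.20, "years_to_peak": 6},
    "upside":   {"market_share": 0.30, "years_to_peak": 4},  # strong clinical profile
    "downside": {"market_share": 0.12, "years_to_peak": 8},  # weak clinical profile
}

for name, overrides in scenarios.items():
    inputs = {n: p.best_guess for n, p in parameters.items()}
    inputs.update(overrides)              # apply the scenario's correlated shifts
    print(name, run_model(inputs))
```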
There are two main challenges to actually carrying out this sort of analysis in the context of a case competition.
The first has to do with the timing. Turns out, a week is a really short amount of time to wrap your head around a novel problem and put a presentation together. This is compounded by the second challenge, which is that the work is spread across 4-5 people, all of whose understanding of the problem is in flux.
This is, as far as I can tell, the nature of the beast. But I have two speculations about best practices.
First, start building the model as early as possible, and make sure that everyone on the team is kept up to date with changes. This way, it’s much easier to divide the work by giving discrete, concrete tasks to individual people. You can assign each person a group of parameters, and they can come back with something to put into the model. In fact, the point of this framework is to help align people on what sort of research is useful: research that supports the credibility of the parameters in the model.
Second, start practicing your “voice over” with rough PPT slides as soon as possible. This will help you and your team fill in missing ideas as you build the presentation. Also, hearing yourself and your teammates think aloud will help with team alignment.
I think it’s helpful, when you start working on a modeling problem, to have a sense of what your final presentation is going to look like. For me, at least, it helps to think in levels:
On the first level, you want to show the output of the model (the “so what”), and connect it to a specific, actionable recommendation. These form the titles of your slides and the main bullets in your executive summary.
On the second level, you want to demonstrate that you have thought about the right factors when designing your model. What parameters did you use? What, at a high level, did you use to estimate them? These are the supporting bullets of the slides in your presentation.
On the third and final level, you want your reasoning about each of your parameters to withstand scrutiny. This is what you put in the appendix and prepare to respond to questions about.
Most of what I’ve talked about here is aimed at helping you sort the mess of information your research turns up into the appropriate levels. In any case, this is what I took from the experience. I look forward to testing these recommendations the next time I have a modeling project.