Conjoint Analysis

The Lobster Roll Problem

Part 0: What Is Conjoint Analysis

Before we get to lobster rolls, let’s take a step back and situate conjoint analysis in the broader landscape of market research methods. Conjoint Analysis is actually a family of techniques, not a single method, and the version we’re going to work through today is a deliberately simplified entry point into that family. But we’re going to dip our toes in the conjoint waters - even though it isn’t featured in our book - because I think it is really cool that it is one of the only TRULY marketing-specific models out there. Marketing as a field didn’t pioneer regression, or ANOVA, or Bayesian methods…but conjoint is ours.

The Core Idea

All conjoint methods share the same fundamental logic: rather than asking people to evaluate product features one at a time (“how important is price to you?”), you show them complete product profiles that vary across multiple attributes and ask them to respond to those profiles as a whole. The insight is that people reveal their true preferences (including the trade-offs they’re willing to make) through their responses to realistic product descriptions, not through abstract importance ratings.

This is where the name comes from: conjoint analysis is about understanding how people evaluate attributes conjointly, as a bundle, rather than in isolation.

A Brief Tour of the Major Types

Conjoint methods differ primarily in how they ask respondents to respond to profiles. The main variants you’ll encounter in practice are:

Full-profile ratings-based conjoint is the original and conceptually simplest form. Respondents see complete product descriptions (all attributes specified) and rate each one, usually on a numeric scale. This is what we’ll be doing today. It maps directly onto regression with a continuous outcome, which is why we’re ready to make this leap.

Ranking-based conjoint works similarly but asks respondents to rank a set of profiles from most to least preferred rather than rating each one independently. Rankings can be more cognitively demanding but eliminate the “rate everything a 7” problem that sometimes plagues rating scales.

Choice-based conjoint (CBC) is the current industry standard and the type you’re most likely to encounter in professional practice. Rather than rating or ranking, respondents are shown sets of two or three profiles and simply choose which one they’d buy (or, in some designs, choose “none of the above”). This mimics real purchase decisions more closely if you think about it: when you’re at a restaurant, you choose from a menu, you don’t rate each item on a 10-point scale. CBC requires logistic regression or more specialized models that we haven’t covered yet (but you can build up to it - it’s at the end of your book!), which is why we’re not using it today.

Adaptive conjoint analysis (ACA) takes things further by adjusting which profiles a respondent sees based on their previous answers. You very well might have taken a standardized test like this in the past, or you may be preparing to (GMAT, anybody?). This change to the procedure allows the survey instrument to home in on a particular respondent’s hang-ups more efficiently. If you’ve already indicated you don’t care about bun type, the algorithm stops asking you about buns and focuses on the attributes that actually drive your preferences. This efficiency does, however, make the analysis process quite a bit more complex.

MaxDiff (Maximum Difference Scaling) is sometimes grouped with conjoint methods, though it’s technically distinct. Rather than rating full product profiles, respondents repeatedly choose the best and worst items from small sets of individual features or concepts. It’s particularly useful for prioritizing a long list of features.

A Note on Design Complexity

Real conjoint studies involve a lot of careful upfront design work that we’re glossing over today. The profiles respondents see are constructed using experimental design principles (fractional factorial designs, D-optimal designs, etc.) to ensure that attribute effects can be estimated independently and efficiently. Commercial conjoint platforms like Sawtooth Software, Qualtrics, and Conjointly handle much of this automatically, but understanding what they’re doing matters if you decide to pursue more conjoint knowledge on your own.

Real studies also typically involve many more attributes and levels than the lobster roll example we’re about to work through as well as larger and more carefully recruited samples (this “sample” was generated by yours truly in R).

What We’re Doing Today

OK, so all that said, we’re going to work through a ratings-based full-profile conjoint study, which is the simplest variant both to administer and to analyze. Our goal is to understand the core logic: how do you get from “respondents rated some product profiles” to “here’s how much each attribute matters and what the optimal product looks like”?

The answer, as you’re about to see, is regression. The same Model C vs. Model A framework we’ve been using all semester.

Let’s get into it.


Part 1: The Lobster Roll Problem

You’ve just been hired as a consultant by Claw & Order, a new food truck operation planning to serve lobster rolls along the Maine coast. The owners—a former finance professional who “escaped” to Maine and a recent marketing graduate from the Maine Business School—have a problem.

They’ve been arguing for weeks about how to make their signature lobster roll. One partner insists on warm butter, a split-top bun, and absolutely no lettuce. The other thinks a light mayo preparation with some crisp lettuce would appeal to summer customers. They’ve gone back and forth, each convinced their preferences represent what customers actually want. So they’ve started polling friends, family, and people they randomly meet around the state to ask what the “right” lobster roll is and everyone gives a different answer!

“Just ask customers what they want!” suggests one partner.

But here’s the problem: when you ask people what they want, they say “everything.” They want the freshest lobster, the best bun, the perfect sauce, AND the lowest price. Customers aren’t good at articulating trade-offs. If you ask “Is price important?” everyone says yes. If you ask “Is quality important?” everyone also says yes. These answers don’t help you design an actual product.

What Claw & Order needs is a way to understand how customers make trade-offs—how much they’re willing to sacrifice on one attribute to get more of another. That’s exactly what conjoint analysis does.

What is Conjoint Analysis?

Conjoint analysis is a survey-based method that uncovers how people value different attributes of a product or service. Instead of asking people to rate individual features in isolation, you present them with complete product profiles that vary across multiple attributes and ask them to rate (or rank, or choose among) those profiles.

The key insight is this: by observing how overall preferences change as attributes change, we can decompose overall preferences into the contribution of each attribute. We can figure out how much of someone’s overall liking of a lobster roll comes from the butter, how much from the bun, how much from the lettuce (or lack thereof), and so on.

This decomposition gives us what we call part-worth utilities—the utility (or value, or contribution to preference) associated with each level of each attribute. If we know the part-worths, we can:

  1. Identify which attributes matter most to customers
  2. Design the “optimal” product (the combination with highest total utility)
  3. Predict how customers would respond to new product configurations
  4. Understand how different customer segments value attributes differently

The Attributes

After some market research, Claw & Order has identified five key attributes that vary across lobster rolls in the market:

| Attribute    | Levels             | Description                                               |
|--------------|--------------------|-----------------------------------------------------------|
| Binder       | Butter, Mayo       | Warm butter vs. mayo-based                                |
| Sauce Amount | Light, Heavy       | How generously dressed the lobster meat is                |
| Lettuce      | Yes, No            | Whether a bed of lettuce sits under the lobster           |
| Bun          | Split-top, Regular | New England split-top hot dog bun vs. regular hinged bun  |
| Price        | $18, $24, $30      | Price point for the roll                                  |

With 2 × 2 × 2 × 2 × 3 = 48 possible combinations, there are 48 distinct lobster rolls Claw & Order could theoretically offer. Obviously, customers can’t rate all 48. We’ll deal with that shortly.
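Those 48 combinations are easy to enumerate in R with expand.grid(), which builds every combination of the levels you hand it. Here’s a quick sketch (I’m using the level codes as they appear in the dataset, e.g. “Split” rather than “Split-top”):

```r
# Enumerate the full factorial: every possible lobster roll profile
profiles <- expand.grid(
  binder  = c("Butter", "Mayo"),
  sauce   = c("Light", "Heavy"),
  lettuce = c("Yes", "No"),
  bun     = c("Split", "Regular"),
  price   = c(18, 24, 30)
)
nrow(profiles)  # 48
```

This full enumeration is exactly what a fractional factorial design takes a careful subset of.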

The Survey

Claw & Order fielded a survey to 150 respondents, asking each to rate 12 different lobster roll profiles on a 1-10 scale (“How appealing is this lobster roll to you?”). The profiles were selected using a technique called fractional factorial design, which chooses a subset of profiles that allows us to estimate the effect of each attribute without showing every possible combination.

Importantly, they also recorded whether each respondent was:

  • A Maine native (born and raised in Maine)
  • A transplant (currently lives in Maine but born elsewhere)
  • A tourist (visiting Maine)

This segmentation variable will prove useful later.


Part 2: From Profiles to Data

Let’s look at what the data actually look like. First, load the dataset:

lobster <- read.csv("Datasets/lobster_roll_conjoint.csv")
head(lobster)
str(lobster)

You should see something like this:

respondent_id  segment  profile  binder  sauce  lettuce  bun      price  rating
            1  native         1  Butter  Light  No       Split       18       5
            1  native         2  Mayo    Heavy  Yes      Regular     24       7
            1  native         3  Butter  Heavy  No       Split       18       4

Each row represents one respondent rating one profile. Since we have 150 respondents each rating 12 profiles, we have 1,800 total observations.
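It never hurts to confirm the data actually have that shape before modeling. A quick sanity check, assuming the lobster data frame loaded above:

```r
# 150 respondents × 12 profiles each = 1,800 rows
nrow(lobster)                          # expect 1800
length(unique(lobster$respondent_id))  # expect 150
table(table(lobster$respondent_id))    # every respondent should appear exactly 12 times
```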

Notice that the attributes are categorical (except price, which we’ll treat as continuous for now - in doing so, we’re assuming that each additional dollar hurts preference by roughly the same amount over the tested range). To use these in a regression model, we need to convert them to numeric codes. This is where dummy coding comes in.

Dummy Coding Review

You’ve already seen dummy coding in previous weeks when dealing with categorical predictors. The idea is simple: for a categorical variable with k levels, we create k - 1 binary (0/1) indicator variables. The omitted level becomes the reference category (because it is coded as 0 - remember the magic number 0?).

For our lobster roll attributes:

| Attribute | Reference Level | Dummy Variable(s)                     |
|-----------|-----------------|---------------------------------------|
| Binder    | Butter          | binder_mayo (1 if Mayo, 0 if Butter)  |
| Sauce     | Light           | sauce_heavy (1 if Heavy, 0 if Light)  |
| Lettuce   | No              | lettuce_yes (1 if Yes, 0 if No)       |
| Bun       | Regular         | bun_split (1 if Split-top, 0 if Regular) |
| Price     | (continuous)    | Just use the numeric value            |

Let’s create these dummy variables:

lobster$binder_mayo <- ifelse(lobster$binder == "Mayo", 1, 0)
lobster$sauce_heavy <- ifelse(lobster$sauce == "Heavy", 1, 0)
lobster$lettuce_yes <- ifelse(lobster$lettuce == "Yes", 1, 0)
lobster$bun_split <- ifelse(lobster$bun == "Split", 1, 0)

Note: Why These Reference Levels?

The choice of reference level is arbitrary statistically—you’ll get equivalent overall model fit regardless of which level you omit. But it affects interpretation. I’ve chosen reference levels that represent a “baseline” lobster roll (it’s also the correct choice because it is my preference - fight me): butter-based, lightly dressed, no lettuce, regular bun. The coefficients will then tell us how much utility changes when we switch away from this baseline.
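As an aside, R can do this dummy coding for you: pass the raw character columns to lm() as factors, and relevel() controls which level is the reference. A sketch, assuming the lobster data frame from above:

```r
# Let lm() build the dummies; relevel() sets each reference category
lobster$binder  <- relevel(factor(lobster$binder),  ref = "Butter")
lobster$sauce   <- relevel(factor(lobster$sauce),   ref = "Light")
lobster$lettuce <- relevel(factor(lobster$lettuce), ref = "No")
lobster$bun     <- relevel(factor(lobster$bun),     ref = "Regular")

model_auto <- lm(rating ~ binder + sauce + lettuce + bun + price, data = lobster)
coef(model_auto)  # same estimates as the hand-coded dummy version
```

Hand-coding the dummies, as we do here, keeps the mechanics visible; the factor approach is what you’ll use once the mechanics are second nature.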


Part 3: Conjoint as Model Comparison

Here’s where things should start feeling familiar. We’re going to estimate part-worth utilities using… regression. Specifically, we’ll compare two models:

Model C (Compact): Predict the rating using only the overall mean.

\[\hat{Y}_i = \bar{Y}\]

This is our baseline—a model that says “everyone rates every lobster roll the same, regardless of its attributes.” Obviously this is wrong, but it establishes the error we’re starting with.

Model A (Augmented): Predict the rating using the attribute levels.

\[\hat{Y}_i = b_0 + b_1(\text{Mayo}) + b_2(\text{Heavy}) + b_3(\text{Lettuce}) + b_4(\text{Split}) + b_5(\text{Price})\]

This model says “preferences depend on what’s in the lobster roll.” The coefficients (\(b_1, b_2\), etc.) are our part-worth utilities—they tell us how much each attribute level contributes to overall preference.

Quick note: For simplicity, we’ll analyze these stacked ratings with ordinary regression. In a more advanced approach to conjoint, we would often account for the fact that each respondent rated multiple profiles.

Fitting the Models

Let’s fit both:

# Model C: Mean only
model_c <- lm(rating ~ 1, data = lobster)

# Model A: Attributes
model_a <- lm(rating ~ binder_mayo + sauce_heavy + lettuce_yes + bun_split + price, 
              data = lobster)

Now let’s compare them using our familiar tools:

# Get SSEs
SSE_C <- deviance(model_c)
SSE_A <- deviance(model_a)

# Calculate PRE
PRE <- (SSE_C - SSE_A) / SSE_C
PRE

Your PRE should be somewhere in the range of 0.13-0.18. This tells you that knowing the attributes of a lobster roll reduces our prediction error by about 13-18% compared to just guessing the mean rating for everyone.

Is that a lot? For survey data with lots of individual variation in preferences, that’s actually pretty solid. Remember, different people genuinely like different things—we wouldn’t expect attributes alone to explain everything.
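If you want the formal Model C vs. Model A comparison rather than just the PRE, anova() runs the nested-model F-test directly (assuming model_c and model_a fit above):

```r
# F-test comparing the compact (mean-only) and augmented (attributes) models
anova(model_c, model_a)
```

The F statistic and p-value here are testing the same question the PRE describes: do the attributes, as a set, reduce error more than chance would?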

Let’s look at the full model output:

summary(model_a)

Interpreting the Output

You should see something like:

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)    5.7008     0.1495  38.130  < 2e-16 ***
binder_mayo    0.5801     0.0606   9.574  < 2e-16 ***
sauce_heavy   -0.0386     0.0634  -0.608   0.5431    
lettuce_yes   -0.3630     0.0682  -5.323  1.1e-07 ***
bun_split      0.4202     0.0581   7.235  6.4e-13 ***
price         -0.0429     0.0074  -5.811  7.2e-09 ***

Let’s interpret each coefficient:

| Coefficient | Estimate | Interpretation |
|-------------|----------|----------------|
| (Intercept) | 5.70 | Predicted rating for the reference lobster roll (butter, light, no lettuce, regular bun) at $0. The $0 price is meaningless, so we’ll fix this with centering later. |
| binder_mayo | +0.58 | Switching from butter to mayo increases the expected rating by 0.58 points, holding all else constant. Mayo is preferred. |
| sauce_heavy | −0.04 | Heavy sauce decreases the expected rating by 0.04 points compared to light sauce. The effect is tiny and not statistically significant—sauce amount doesn’t much matter to the average respondent. |
| lettuce_yes | −0.36 | Adding lettuce decreases the expected rating by 0.36 points. No lettuce is preferred. |
| bun_split | +0.42 | A split-top bun increases the expected rating by 0.42 points compared to a regular bun. Split-top is preferred. |
| price | −0.04 | Each additional dollar decreases the expected rating by 0.04 points. Higher prices are less preferred (no surprise). |

These coefficients are the part-worth utilities. They tell us the “value” each attribute level contributes to overall preference, measured in rating-scale points.

A Quick Sanity Check

Do these results make sense? Let’s think:

  • Mayo > Butter: Traditional Maine-style lobster rolls are mayo-based. Makes sense.
  • Light > Heavy sauce: Lets the lobster shine. Makes sense.
  • No lettuce > Lettuce: Purists hate lettuce on a lobster roll. Makes sense.
  • Split-top > Regular bun: The traditional New England style. Makes sense.
  • Lower price > Higher price: Obviously.

The signs are all in the (or at least my) expected direction - that’s reassuring.


Part 4: Making the Intercept Meaningful

You may have noticed that the intercept (5.70) is the predicted rating at a price of $0. That’s not meaningful—nobody’s offering free lobster rolls.

Let’s center price at its mean so the intercept represents the predicted rating for the reference lobster roll at the average price:

lobster$price_centered <- lobster$price - mean(lobster$price)

model_a_centered <- lm(rating ~ binder_mayo + sauce_heavy + lettuce_yes + 
                                 bun_split + price_centered, 
                        data = lobster)
summary(model_a_centered)

Now the intercept represents the predicted rating for a butter-based, lightly sauced, no-lettuce, regular-bun lobster roll at $24 (the average price). This is much more interpretable.

Note that the coefficient on price doesn’t change—centering only affects the intercept, not the slopes. You learned this in Week 8!
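You can verify that claim directly by lining up the two sets of coefficients side by side (assuming both models fit above; the rows pair up by position, with price/price_centered last):

```r
# Slopes identical across the two fits; only the intercept moves
round(cbind(uncentered = coef(model_a),
            centered   = coef(model_a_centered)), 4)
```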


Part 5: Attribute Importance

We now have part-worth utilities, but Claw & Order wants to know: which attribute matters most?

This isn’t immediately obvious from the raw coefficients because attributes have different scales:

  • Binder, sauce, lettuce, and bun are all binary (0/1)
  • Price ranges from $18 to $30 (a range of 12)

To compare “importance” across attributes, we need to put them on a common scale. The standard approach is:

  1. For each attribute, calculate the range of part-worths (highest minus lowest)
  2. Sum the ranges across all attributes
  3. Express each attribute’s range as a percentage of the total

Calculating Importance

For binary attributes, the “range” is simply the absolute value of the coefficient (since switching from 0 to 1 captures the full range).

For price, the range in part-worth is: coefficient × (max − min) = −0.0429 × (30 − 18) ≈ −0.51. We take the absolute value: 0.51.

# Extract coefficients
coefs <- coef(model_a)

# Calculate ranges (absolute values for binary attributes)
range_binder <- abs(coefs["binder_mayo"])
range_sauce <- abs(coefs["sauce_heavy"])
range_lettuce <- abs(coefs["lettuce_yes"])
range_bun <- abs(coefs["bun_split"])
range_price <- abs(coefs["price"]) * (30 - 18)  # coefficient × price range

# Total range
total_range <- range_binder + range_sauce + range_lettuce + range_bun + range_price

# Importance (percentage)
importance <- c(
  Binder = range_binder / total_range * 100,
  Sauce = range_sauce / total_range * 100,
  Lettuce = range_lettuce / total_range * 100,
  Bun = range_bun / total_range * 100,
  Price = range_price / total_range * 100
)

importance

Your importance percentages should look something like:

| Attribute | Importance |
|-----------|------------|
| Binder    | ~30%       |
| Price     | ~27%       |
| Bun       | ~22%       |
| Lettuce   | ~19%       |
| Sauce     | ~2%        |

This tells Claw & Order: the binder (mayo vs. butter) is the most important driver of preference, followed by price and bun type. Sauce amount barely registers.

Visualizing Importance

barplot(sort(importance, decreasing = TRUE),
        main = "Attribute Importance",
        ylab = "Importance (%)",
        col = "steelblue",
        las = 2)

Part 6: The Optimal Lobster Roll

Given our part-worths, we can now answer the question: what’s the best lobster roll to satisfy the most people in this market?

To maximize the predicted rating, we want to choose the level of each attribute that has the highest part-worth:

| Attribute | Optimal Level | Reasoning                                  |
|-----------|---------------|--------------------------------------------|
| Binder    | Mayo          | Positive coefficient (+0.58)               |
| Sauce     | Light         | Negative coefficient for heavy (−0.04)     |
| Lettuce   | No            | Negative coefficient for yes (−0.36)       |
| Bun       | Split-top     | Positive coefficient (+0.42)               |
| Price     | $18           | Negative coefficient per dollar (−0.04)    |

The “optimal” lobster roll is: Mayo, light sauce, no lettuce, split-top bun, $18.

Let’s calculate its predicted rating:

# Using the centered model
# Reference profile at mean price: butter, light, no lettuce, regular bun = intercept
# Optimal: add binder_mayo, bun_split, subtract price effect for going from $24 to $18

optimal_rating <- coef(model_a_centered)["(Intercept)"] +
                  coef(model_a_centered)["binder_mayo"] +
                  coef(model_a_centered)["bun_split"] +
                  coef(model_a_centered)["price_centered"] * (18 - 24)

optimal_rating

Note: But Wait…

The “optimal” lobster roll at $18 is also the cheapest. What if Claw & Order can’t make money at $18?

This is where conjoint can start to get really fun. You can ask: “What’s the optimal roll at $24?” or “How much does preference drop if we raise the price to $30?”

The answer is in the coefficients. Each $6 increase costs about 0.26 rating points (0.0429 × 6 ≈ 0.26). Is that worth the extra margin? Conjoint gives you the numbers to make that an informed decision.


Part 7: Segment Differences

So far we’ve estimated part-worths for all respondents pooled together. But Claw & Order suspects that Maine natives, transplants, and tourists might have different preferences.

Let’s find out.

Approach 1: Separate Models by Segment

The simplest approach is to estimate separate models for each segment:

# Subset the data
natives <- subset(lobster, segment == "native")
transplants <- subset(lobster, segment == "transplant")
tourists <- subset(lobster, segment == "tourist")

# Fit separate models
model_native <- lm(rating ~ binder_mayo + sauce_heavy + lettuce_yes + 
                            bun_split + price_centered, data = natives)
model_transplant <- lm(rating ~ binder_mayo + sauce_heavy + lettuce_yes + 
                                bun_split + price_centered, data = transplants)
model_tourist <- lm(rating ~ binder_mayo + sauce_heavy + lettuce_yes + 
                             bun_split + price_centered, data = tourists)

# Compare coefficients
coef(model_native)
coef(model_transplant)
coef(model_tourist)

The segments should show notably different preferences:

| Attribute         | Natives  | Transplants | Tourists |
|-------------------|----------|-------------|----------|
| Mayo (vs. butter) | Strong + | Moderate +  | Slight − |
| Heavy sauce       | —        | —           | +        |
| Lettuce           | Strong − | Moderate −  | +        |
| Split-top bun     | Strong + | Moderate +  | Slight − |
| Price sensitivity | Moderate | Moderate    | Very low |

Natives are traditionalists: mayo, no lettuce, split-top bun, price-sensitive.

Tourists are… different: they actually prefer butter, like heavier sauce, don’t mind lettuce at all, are indifferent about the bun, and are much less price-sensitive (vacation mode).

Transplants are in between, having adopted many Maine preferences but not fully converted.

Approach 2: Interaction Model

Rather than fitting separate models, we can use interactions to test whether segment differences are statistically significant:

model_interaction <- lm(rating ~ (binder_mayo + sauce_heavy + lettuce_yes + 
                                   bun_split + price_centered) * segment, 
                        data = lobster)
summary(model_interaction)

This model includes main effects for each attribute and interaction terms between each attribute and segment. The interaction terms tell us whether the effect of an attribute (e.g., mayo vs. butter) differs significantly across segments.

Note: Connecting to Previous Weeks

This is exactly the moderation framework from Week 11! Segment is moderating the effect of product attributes on preference. The interaction terms tell us: “Does the mayo effect depend on who’s eating the lobster roll?”
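To test all the segment interactions at once, you can compare a model with segment as a main effect only against the full interaction model (assuming the lobster data with dummies and price_centered from above):

```r
# Model C: attributes + segment main effect (part-worths forced equal across segments)
model_main <- lm(rating ~ binder_mayo + sauce_heavy + lettuce_yes +
                           bun_split + price_centered + segment,
                 data = lobster)

# Model A: the interaction model from above
# Does letting part-worths vary by segment reduce error?
anova(model_main, model_interaction)
```

A significant F here says the segments’ part-worths differ somewhere, before you start inspecting individual interaction terms.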


Part 8: Putting It All Together — The Recommendation

Claw & Order needs a recommendation. Based on your analysis, here’s what you might tell them:

Key Findings:

  1. Most important attributes: Binder (mayo vs. butter) is the single biggest driver of preference at ~30%, with price, bun, and lettuce all playing meaningful roles. Sauce amount barely matters.

  2. Overall preferences favor traditional Maine style: Mayo, light sauce, no lettuce, split-top bun. This configuration maximizes appeal across all respondents.

  3. But segments differ meaningfully:

    • Maine natives strongly prefer the traditional style; transplants are pretty happy to accept it, too
    • Tourists have different preferences — they actually lean toward butter, like heavier sauce, and don’t mind lettuce
  4. Strategic implication: If Claw & Order operates in a tourist-heavy location (Bar Harbor in July), they might consider offering a “visitor-friendly” option alongside the traditional roll. If they’re serving mostly locals (Portland in October), stick with tradition.

  5. Price sensitivity: Tourists are much less price-sensitive than locals. There may be an opportunity for premium pricing in tourist locations.


Part 9: Your Turn

Now it’s your turn to practice. Complete the following tasks:

Task 1: Verify the PRE

Calculate the PRE comparing Model C to Model A and verify it matches what we discussed earlier. Also calculate Cohen’s f² and interpret the effect size.

SSE_C <- deviance(model_c)
SSE_A <- deviance(model_a)
PRE <- (SSE_C - SSE_A) / SSE_C
PRE

# Cohen's f²
f2 <- PRE / (1 - PRE)
f2

The PRE should be around 0.15 and the f² around 0.18, which is a “medium” effect by conventional benchmarks (small = 0.02, medium = 0.15, large = 0.35). In survey research, this indicates that product attributes explain a meaningful portion of preference variation.

Task 2: Alternative Reference Category

Re-run the model using Mayo as the reference category for binder (instead of Butter). Verify that:

  • The model fit (R², PRE) is identical
  • Only the interpretation of the coefficient changes

lobster$binder_butter <- ifelse(lobster$binder == "Butter", 1, 0)

model_a_alt <- lm(rating ~ binder_butter + sauce_heavy + lettuce_yes + 
                           bun_split + price_centered, data = lobster)
summary(model_a_alt)

The coefficient on binder_butter should be the negative of the original binder_mayo coefficient (e.g., −0.58 instead of +0.58). The R² and all other model statistics should be identical.
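A quick programmatic check that the two codings really are equivalent (assuming model_a_centered and model_a_alt from above):

```r
# Mirrored binder coefficient, identical fit
stopifnot(isTRUE(all.equal(unname(coef(model_a_alt)["binder_butter"]),
                           -unname(coef(model_a_centered)["binder_mayo"]))))
all.equal(summary(model_a_alt)$r.squared,
          summary(model_a_centered)$r.squared)
```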

Task 3: Calculate Segment-Specific Importance

Calculate attribute importance separately for natives and tourists. Which attribute shows the biggest difference in importance between these segments?

# Function to calculate importance
calc_importance <- function(model, price_range = 12) {
  coefs <- coef(model)
  ranges <- c(
    Binder = abs(coefs["binder_mayo"]),
    Sauce = abs(coefs["sauce_heavy"]),
    Lettuce = abs(coefs["lettuce_yes"]),
    Bun = abs(coefs["bun_split"]),
    Price = abs(coefs["price_centered"]) * price_range
  )
  100 * ranges / sum(ranges)
}

importance_native <- calc_importance(model_native)
importance_tourist <- calc_importance(model_tourist)

# Compare
rbind(Native = importance_native, Tourist = importance_tourist)

You should find that Lettuce shows one of the biggest differences—it’s relatively important (and strongly negative) for natives, but positive for tourists. Bun also differs notably: very important to natives, almost irrelevant to tourists.

Task 4: The “Tourist Special”

Based on the tourist segment model, design the optimal lobster roll for tourists. How does it differ from the overall optimal?

For tourists, the optimal configuration would be:

  • Binder: Butter (tourists have a slight preference for butter over mayo)
  • Sauce: Heavy (positive coefficient for tourists)
  • Lettuce: Yes (positive coefficient for tourists!)
  • Bun: Doesn’t matter much (near-neutral coefficient)
  • Price: Higher prices hurt less, but still prefer lower

The “Tourist Special” is essentially the opposite of the traditional Maine lobster roll. This is a real finding that would surprise many Maine purists—but it’s what the data show.

Whether Claw & Order should actually offer this depends on brand positioning. If they want to be seen as “authentically Maine,” putting lettuce on a lobster roll might damage their credibility with locals, even if tourists like it.


Summary

In this exercise, you learned:

  1. What conjoint analysis is: A method for decomposing overall preferences into part-worth utilities for individual attributes.

  2. In this stripped down version, conjoint is regression: We’re comparing Model C (mean-only) to Model A (mean + attribute effects), just like we’ve been doing all semester.

  3. With this simple version of conjoint, part-worth utilities are regression coefficients: They tell us how much each attribute level contributes to overall preference.

  4. Attribute importance is calculated from the range of part-worths within each attribute, expressed as a percentage of total range.

  5. Segments can be incorporated using separate models or interaction terms—connecting to our earlier work on moderation.

  6. The “optimal” product is the one that maximizes predicted utility, but strategic considerations (price, brand positioning, target market) also matter.

Conjoint analysis is one of the most widely-used quantitative techniques in marketing research. Companies use it to design products, set prices, forecast market share, and understand competitive positioning. Now you know how it works under the hood—and you can see it’s built on the same model comparison foundation we’ve been using all semester.