How to Pass the McKinsey Solve Game in 2026: A Complete Guide
A complete strategy guide for passing McKinsey Solve in 2026 — covering Red Rock Study, Sea Wolf Game, time management, and the scoring threshold.

What It Takes to Pass McKinsey Solve in 2026
If you're applying to McKinsey in 2026, the McKinsey Solve Game (formerly known as the Imbellus assessment) isn't just an early screening step — it's one of the most decisive filters in the entire recruitment process. A strong CV, excellent grades, and solid case interview preparation won't help you if you don't clear Solve first. You simply won't be invited to interviews.
Many candidates fail not because they lack intelligence, but because they misunderstand what Solve actually measures. They prepare for it like a video game or a logic puzzle when it's really a behavioral and cognitive assessment designed to evaluate consulting-style thinking.
The good news: McKinsey Solve is highly predictable once you understand the scoring mechanics and the specific skills McKinsey is testing. This guide walks through exactly how to pass under the current format — including what you need to know about McKinsey's newest addition, the Sustainable Future Lab.
2026 Format: Two Core Games — Plus a Potential Third
The McKinsey Solve assessment has been streamlined since its earlier three-game era. The standard format now consists of two scenarios:
The Sea Wolf Game tests route optimization, prediction, and decision-making under pressure in a marine environment. The Red Rock Study tests prioritization, data filtering, and structured resource allocation in a geological context.
The Ecosystem Simulation is no longer part of the standard assessment. If your invitation email indicates a 65-minute test window, you'll face only Sea Wolf and Red Rock.
However, there's a significant development: starting in early 2026, some candidates — particularly those applying to offices in Germany, the Middle East, and other select regions — are being assigned an 85-minute test window that includes a third module called the Sustainable Future Lab. This isn't a universal rollout yet, but the trend is clear. McKinsey is expanding how it evaluates candidates, and the Sustainable Future Lab appears to be in active testing across a growing number of offices.
If your test email says 65 minutes, prepare for two games. If it says 85 minutes, you need to be ready for all three — including the Sustainable Future Lab. Either way, a weak performance on any assigned game is very difficult to offset with the others.
What McKinsey Actually Measures
McKinsey Solve is not a test of gaming skill, memorization, trick solutions, or speed reading. It was designed with input from data scientists and cognitive psychologists to evaluate specific decision-making competencies.
The assessment measures prioritization under uncertainty, consistency of decision logic, quality of trade-off decisions, structured top-down thinking, performance under time pressure, risk calibration, and system-level reasoning. Every click, revision, and decision sequence is tracked and scored.
This distinction matters for preparation. If you approach Solve as a game to beat, you'll almost certainly underperform. If you approach it as a consulting-style decision test — where structured thinking and consistent logic matter more than raw speed or perfect answers — you dramatically increase your chances of passing.
How to Pass the Red Rock Study
The Red Rock Study is where most candidates lose structure and time. The scenario deliberately overwhelms you with text, pseudo-data, and contextual noise to test one core consulting skill: your ability to separate signal from noise.
Build a Top-Down Structure First
Before analyzing any details, spend the first 60 seconds defining the primary objective, the hard constraints, the available resources, and the implicit scoring logic. This prevents reactive, unstructured decision-making and gives you a filter for every data point you encounter afterward.
Candidates who skip this step typically spend their first 5–8 minutes reading everything with equal attention — and then run out of time in the later decision stages where points are actually scored.
Scan, Don't Read
Most Red Rock text is intentionally irrelevant. Focus exclusively on numbers, constraints, risks, and trade-offs. Background narrative, descriptive filler, and atmospheric detail that doesn't contain actionable data should be skipped entirely. This discipline alone saves 2–3 minutes — a significant margin on a ~35-minute test.
Apply a Simple, Repeatable Scoring Model
Use a consistent framework for every decision: Expected Value – Cost – Risk Adjustment = Decision Score. The model doesn't need to be precise. It needs to be consistent. Relative comparison is what matters — is Option A clearly better than Option B? If yes, commit and move on. Candidates who switch analytical approaches mid-scenario score lower on process efficiency, even when their final answers are decent.
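The scoring arithmetic above can be sketched in a few lines of Python. All numbers here are hypothetical, invented purely to illustrate the relative-comparison mindset; the real game gives you no explicit formula:

```python
# Illustrative sketch of the "Expected Value - Cost - Risk Adjustment" model.
# All values are hypothetical; the point is relative ranking, not precision.

def decision_score(expected_value, cost, risk_adjustment):
    """Simple, repeatable score: higher is better."""
    return expected_value - cost - risk_adjustment

# Two hypothetical Red Rock options:
option_a = decision_score(expected_value=100, cost=30, risk_adjustment=10)  # 60
option_b = decision_score(expected_value=120, cost=55, risk_adjustment=25)  # 40

# Option A clearly beats Option B -> commit and move on.
best = "A" if option_a > option_b else "B"
```

Notice that Option B has the higher raw upside but still loses once cost and risk are netted out; a consistent model like this keeps you from chasing headline numbers.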
Eliminate Dominated Options Immediately
If an option is riskier, more expensive, and offers no compensating upside compared to an alternative, remove it from consideration instantly. Don't analyze it further. Removing even two dominated options early simplifies your entire decision tree and frees time for the choices that actually matter.
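The dominance check can be made mechanical. This is a hypothetical sketch with invented option names and dimensions (risk, cost, upside), not a reconstruction of the actual game interface:

```python
# Hypothetical illustration of dominated-option elimination.
# An option is dominated if another option is at least as good on every
# dimension (lower risk, lower cost, equal-or-higher upside) and strictly
# better on at least one.

def dominates(a, b):
    """True if option a dominates option b."""
    no_worse = (a["risk"] <= b["risk"] and a["cost"] <= b["cost"]
                and a["upside"] >= b["upside"])
    strictly_better = (a["risk"] < b["risk"] or a["cost"] < b["cost"]
                       or a["upside"] > b["upside"])
    return no_worse and strictly_better

def prune_dominated(options):
    """Drop every option that some other option dominates."""
    return [b for b in options
            if not any(dominates(a, b) for a in options if a is not b)]

# Invented option set: "drill_south" is dominated by "drill_north"
# (riskier, more expensive, no extra upside) and can be discarded unread.
options = [
    {"name": "drill_north", "risk": 2, "cost": 40, "upside": 90},
    {"name": "drill_south", "risk": 5, "cost": 60, "upside": 90},
    {"name": "survey_east", "risk": 1, "cost": 70, "upside": 95},
]
remaining = prune_dominated(options)
```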
Maintain Logical Consistency Across Decisions
McKinsey's scoring algorithm penalizes inconsistency far more than it penalizes small mistakes. A solid, coherent strategy that you apply uniformly across all decision points beats a theoretically optimal approach that you apply erratically. If your first three decisions follow one logic and your fourth decision contradicts it, the algorithm flags the inconsistency — regardless of whether decision four was technically "better."
How to Pass the Sea Wolf Game
The Sea Wolf Game is the most time-pressured and technically demanding part of Solve. Route options grow exponentially, fuel and distance penalties compound, environmental variables shift, and mistakes are costly and difficult to recover from. Without a framework, failure is almost guaranteed.
Use a Stable Optimization Framework
Your mental algorithm needs to be simple, repeatable, and robust under time pressure. A reliable sequence: identify all feasible destinations, eliminate high-risk and high-cost routes, compare expected outcomes among the remaining options, optimize for fuel versus distance trade-offs, and commit decisively.
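The sequence above can be expressed as a short filter-then-commit routine. The thresholds, route names, and the fuel-plus-distance comparison are all invented for illustration; the real game presents its own variables:

```python
# Hypothetical sketch of the Sea Wolf sequence: keep feasible routes,
# eliminate high-risk ones, compare the survivors on a fuel-vs-distance
# trade-off, and commit to the winner. All data and thresholds are invented.

FUEL_BUDGET = 100
RISK_LIMIT = 0.5

def choose_route(routes):
    # Steps 1-2: keep only feasible, acceptable-risk routes.
    candidates = [r for r in routes
                  if r["fuel"] <= FUEL_BUDGET and r["risk"] <= RISK_LIMIT]
    if not candidates:
        return None
    # Steps 3-5: compare on combined fuel + distance cost, then commit.
    return min(candidates, key=lambda r: r["fuel"] + r["distance"])

routes = [
    {"name": "reef_pass",  "fuel": 60, "distance": 30, "risk": 0.2},
    {"name": "open_water", "fuel": 45, "distance": 55, "risk": 0.4},
    {"name": "storm_line", "fuel": 30, "distance": 20, "risk": 0.8},  # filtered out
]
best = choose_route(routes)
```

The point of a fixed routine like this is that it returns one answer and stops; there is no step in which you reopen a committed route.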
The key word is "commit." Candidates who revisit and revise route decisions repeatedly score lower on process efficiency, even when their final route matches that of a candidate who committed to it on the first pass.
Recognize Patterns Instead of Reacting
Sea Wolf is not random. Environmental behavior follows identifiable patterns — currents, temperature shifts, species movements. Candidates who learn to recognize these patterns and anticipate changes outperform those who react to each new data point as if it were unpredictable. This is a skill that improves dramatically with practice, which is why the Sea Wolf simulation is one of the most effective preparation tools available.
Avoid Perfectionism
You don't have time for exact calculations in Sea Wolf. The ~35-minute window demands fast, structured decisions — not optimal ones. Good-enough optimization, applied consistently across every decision point, reliably beats the approach of candidates who spend three minutes finding the mathematically perfect route for one decision and then rush through the remaining five.
Train With Realistic Models
High Sea Wolf scores come from pattern familiarity, not luck. Many top-performing candidates use predictive models or solvers — such as the Sea Wolf Solver — to internalize stable optimization logic before the real test. The solver doesn't replace your thinking on test day. It teaches you which variables matter most and which trade-offs consistently produce the best outcomes, so your instincts are calibrated before you sit down for the actual assessment.
How to Handle the Sustainable Future Lab
The Sustainable Future Lab is the newest addition to McKinsey Solve, and it represents a fundamentally different type of challenge compared to Sea Wolf and Red Rock. Where those games test analytical and quantitative reasoning, the Sustainable Future Lab evaluates behavioral judgment and interpersonal decision-making — skills that are harder to quantify but central to how McKinsey consultants actually operate.
What the Sustainable Future Lab Tests
Based on candidate reports from early 2026, the Sustainable Future Lab functions as a situational judgment module embedded within the Solve assessment. Rather than crunching numbers or optimizing routes, you navigate realistic consulting team scenarios where you must make decisions about how to handle ambiguous interpersonal situations — stakeholder disagreements, resource conflicts, team dynamics, and priority trade-offs that don't have a single "right" answer.
This aligns with a broader trend: McKinsey is increasingly interested in evaluating not just how well you think but how you behave when the problem involves people, not data.
Why McKinsey Added a Behavioral Module
The traditional two-game format (Sea Wolf + Red Rock) does an excellent job measuring analytical reasoning, structured thinking, and optimization under constraints. But consulting isn't purely analytical. Engagements involve navigating client relationships, resolving team disagreements, prioritizing competing stakeholder interests, and making judgment calls without complete information.
The Sustainable Future Lab fills that gap. Think of it as McKinsey's way of testing soft skills at scale — before the interview stage — through a format that's more resistant to gaming than standard personality questionnaires.
How to Approach Sustainable Future Lab Decisions
Since this module resembles a situational judgment test, the preparation approach differs from Sea Wolf and Red Rock:
Think like a McKinsey engagement manager, not a candidate. When facing a team conflict or stakeholder disagreement scenario, ask yourself: "What would a senior consultant do here?" McKinsey values collaborative problem-solving, structured communication, and evidence-based decision-making — even in interpersonal situations.
Avoid extreme responses. In situational judgment formats, the highest-scoring choices typically balance assertiveness with collaboration. Going fully passive ("let the team decide") or fully aggressive ("override the disagreement") both score poorly. The best responses usually acknowledge competing perspectives, identify the core issue, and propose a structured path forward.
Prioritize team effectiveness over individual heroics. McKinsey's culture rewards collective impact. When a scenario offers you a choice between showcasing your own analysis and enabling the team to reach a better answer together, the team-oriented option is almost always stronger.
Be consistent in your values. Just like the analytical games, the Sustainable Future Lab likely tracks consistency across your responses. If you prioritize data-driven decisions in one scenario but defer to seniority in another without a clear reason, the algorithm may flag the inconsistency. Pick a coherent leadership philosophy and apply it uniformly.
What We Don't Know Yet
The Sustainable Future Lab is still in limited rollout. Details about exact scoring mechanics, the number of scenarios, and how heavily it is weighted relative to Sea Wolf and Red Rock are still emerging. We're actively tracking candidate reports and will update our Sustainable Future Lab guide as more data becomes available.
What's clear is that candidates who encounter this module need to treat it seriously. It's not a throwaway section — it's an additional scoring dimension that McKinsey is deliberately adding to the assessment.
Time Management: The Hidden Passing Factor
Most candidates don't fail Solve because they can't think. They fail because they run out of time, over-analyze, second-guess decisions, or chase perfection at the expense of completion.
In Red Rock, the first 60 seconds should be spent structuring — defining the objective, constraints, and prioritization hierarchy. This upfront investment saves several minutes of unfocused analysis later. Candidates who structure first typically finish with 3–5 minutes of buffer time. Candidates who dive straight into data typically don't finish at all.
In Sea Wolf, commit early. Late changes destroy both timing and score stability. Every revision is tracked, and the time spent reconsidering a decision you've already made is time you can't spend on the next one. A good decision made quickly scores better than a great decision made after three revisions.
In the Sustainable Future Lab (if assigned), don't overthink individual scenarios. Situational judgment questions reward confident, values-consistent answers. Spending four minutes agonizing over the "perfect" response to a team dynamics question is counterproductive. Read the scenario, identify the core tension, choose the response that best reflects structured and collaborative thinking, and move on.
The combined effect is significant. Time management alone separates candidates in the top 25% from candidates in the 40th–60th percentile range — even when their analytical ability is comparable.
What Does a Passing Score Look Like?
McKinsey doesn't publish official passing thresholds, but candidate outcome data across multiple cycles reveals a consistent pattern:
| Percentile Range | Likely Outcome |
|---|---|
| Top 10–15% | Very high interview probability regardless of office competitiveness |
| Top 25–30% | Strong interview probability at most offices; may depend on CV strength at the most competitive locations |
| 40th–60th percentile | Interview depends heavily on CV, referrals, and office-specific demand |
| Below threshold | Automatic rejection regardless of CV quality |
Passing Solve means clearing McKinsey's internal benchmark — not being perfect. The assessment is percentile-based, so your score is relative to everyone else who has taken it. Aiming for the top 15% gives you a comfortable buffer at any office worldwide.
With the addition of the Sustainable Future Lab for some candidates, the composite scoring may shift. If you're assigned three games instead of two, each one contributes to your overall percentile — meaning there's less room to compensate for a weak module with a strong one.
The 8 Principles That Consistently Lead to a Pass
These aren't theoretical. They're the common patterns we see across candidates who pass Solve and advance to interviews.
1. Top-down structure. Define the problem before engaging with the data. In Red Rock, this means identifying the objective and constraints first. In Sea Wolf, it means establishing your optimization framework before evaluating routes.
2. Correct prioritization. Survival and feasibility always come before efficiency optimization. Optional objectives are last — always.
3. Efficient data filtering. Read selectively. If a piece of information doesn't contain a number, a constraint, or a trade-off, skip it.
4. Consistent logic. Apply the same decision framework across every choice. McKinsey's algorithm rewards coherent strategies over individually optimal but disconnected decisions.
5. Simple frameworks. The candidates who pass use straightforward mental models that work under pressure. Complex multi-variable analysis breaks down when the clock is running.
6. Time discipline. Structure first, commit decisively, don't revise unless new information genuinely changes the calculus.
7. Behavioral judgment. For candidates facing the Sustainable Future Lab: think collaboratively, respond consistently, and avoid extremes. McKinsey is testing whether your interpersonal instincts align with how their consultants actually work.
8. Pre-test practice. Candidates who complete 5–10 timed simulation runs before test day consistently outperform those who rely on reading strategy guides alone. The McKinsey Solve simulation provides the most realistic practice environment for building this readiness.
Frequently Asked Questions
How hard is it to pass McKinsey Solve?
The estimated pass rate is roughly 25–30% of all candidates who take the assessment, meaning approximately 70–75% are filtered out at this stage. That makes Solve one of the most selective steps in McKinsey's entire hiring process. However, the difficulty isn't about raw intelligence — it's about structured thinking under time pressure, which is a trainable skill.
Can I retake McKinsey Solve if I fail?
McKinsey typically enforces a waiting period of approximately two years before you can reapply and retake the Solve assessment. Some offices may have slightly different policies, but you should treat your attempt as a single-shot opportunity and prepare accordingly.
How long is the McKinsey Solve assessment?
The standard two-game format (Sea Wolf + Red Rock) runs approximately 65 minutes of active testing time, plus additional time for instructions and transitions. If your invitation indicates an 85-minute window, you'll also face the Sustainable Future Lab as a third module. Check your test email carefully to know which format you're getting.
Does McKinsey Solve replace the case interview?
No. Solve is a screening step that determines whether you advance to interviews. Candidates who pass Solve still face multiple rounds of case interviews. Solve and case interviews test complementary but different skills — Solve evaluates pattern recognition and decision-making under pressure, while cases test structured problem-solving, communication, and client interaction.
Is there a way to practice for McKinsey Solve?
Yes. While McKinsey doesn't release practice versions of the actual assessment, simulation tools that replicate the format and cognitive demands of both games are the most effective preparation method. The Sea Wolf simulation and full Solve simulation at SeaWolfSolver.com provide timed practice under realistic conditions, and the Sea Wolf Solver helps you internalize optimal decision-making patterns before test day.
What is the Sustainable Future Lab in McKinsey Solve?
The Sustainable Future Lab is a newer module that McKinsey began rolling out in select regions in early 2026. Unlike Sea Wolf and Red Rock, it functions as a behavioral and situational judgment assessment — testing how you navigate team dynamics, stakeholder conflicts, and interpersonal decisions in a consulting context. It's not yet part of every candidate's assessment, but its presence is expanding. If your test email indicates an 85-minute window, you should expect to encounter it.
What's more important — Sea Wolf, Red Rock, or Sustainable Future Lab?
No game is officially weighted more heavily than the others. All assigned games contribute to your composite percentile score, and a poor performance on any one of them is very difficult to offset. Your best strategy is to prepare equally for every scenario you might face rather than betting on one.
Ready to build the structured thinking and optimization instincts you need for McKinsey Solve? The Sea Wolf Solver helps you master the decision patterns behind the Sea Wolf Game, and the McKinsey Solve Simulation gives you timed practice across both core games. Start preparing with the tools that top-performing candidates use.
