Asking Good Questions

Good analysis starts long before a query is written or a dashboard is opened. It starts with the quality of the question. A weak question produces noise, wasted effort, and misleading outputs. A strong question creates alignment, narrows the scope, clarifies decisions, and makes useful analysis possible.

New analysts often assume their job begins with data. In practice, it begins with ambiguity. Stakeholders rarely arrive with a perfectly framed analytical problem. They bring symptoms, pressure, assumptions, opinions, and requests shaped by their own incentives. The analyst’s role is not merely to answer what was asked, but to uncover what should be answered.

Asking good questions is therefore not a soft skill adjacent to analytics. It is a core analytical capability.


Why good questions matter

A strong question does several things at once:

  • It connects analysis to a real decision.
  • It defines what success looks like.
  • It reduces unnecessary work.
  • It reveals assumptions that might otherwise go unchallenged.
  • It prevents analysts from producing technically correct but practically useless outputs.

Poorly framed requests often sound reasonable:

  • “Why are sales down?”
  • “Can you build a dashboard for this?”
  • “Which customers are best?”
  • “Can you analyze churn?”
  • “Did our campaign work?”

Each of these contains hidden ambiguity. What period? Which segment? What metric? Compared with what baseline? For what decision? Under what constraints? Without clarification, the analyst is left to guess. Guessing creates risk.

The goal is not to interrogate stakeholders for the sake of rigor. The goal is to convert vague demand into a decision-ready analytical problem.


Business questions vs data questions

One of the most useful distinctions in analytics is the difference between a business question and a data question.

Business questions

A business question is about a goal, choice, or outcome. It reflects what the organization wants to understand or decide.

Examples:

  • Why did revenue decline in the enterprise segment last quarter?
  • Which channels should we invest in next month?
  • Are customers adopting the new onboarding flow?
  • What is driving support ticket volume?
  • Should we expand this product feature to all users?

Business questions are usually stated in the language of operations, growth, cost, risk, users, or strategy.

Data questions

A data question translates the business question into something observable and measurable. It specifies metrics, dimensions, comparisons, and methods.

Examples:

  • How did enterprise revenue in Q1 compare with Q4 by region, account manager, and product line?
  • What are customer acquisition cost (CAC), conversion rate, and retention by acquisition channel over the last 90 days?
  • What percentage of new users completed each onboarding step before and after the redesign?
  • How has support ticket volume changed by issue category, customer tier, and release date?
  • What is the difference in activation, retention, and error rate between users with and without the feature?

Why the distinction matters

If you only answer the business question, you may stay too abstract. If you only answer the data question, you may optimize for a metric that does not matter. Strong analysis moves deliberately between the two.

A useful pattern is:

Business question → analytical framing → data question → method → decision support

For example:

  • Business question: Did the campaign work?
  • Analytical framing: Define “work” in terms of acquisition efficiency and downstream value.
  • Data question: How did conversion rate, CAC, and 30-day retention differ between exposed and non-exposed users during the campaign period?
  • Method: Cohort comparison, attribution rules, segmentation, baseline comparison.
  • Decision support: Increase spend, change targeting, or stop the campaign.
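
To make that translation concrete, here is a minimal sketch of the data question in code. The table and column names (`campaign_users.csv`, `user_id`, `exposed`, `converted`, `acquisition_cost`, `retained_30d`) are hypothetical, not taken from any real system.

```python
import pandas as pd

# Hypothetical table: one row per user in the campaign window.
users = pd.read_csv("campaign_users.csv")

summary = users.groupby("exposed").agg(
    n_users=("user_id", "count"),
    conversion_rate=("converted", "mean"),   # share of users who converted
    retention_30d=("retained_30d", "mean"),  # share retained at day 30
    total_spend=("acquisition_cost", "sum"),
    conversions=("converted", "sum"),
)
# CAC: acquisition spend divided by converted users, per group.
summary["cac"] = summary["total_spend"] / summary["conversions"]
print(summary[["n_users", "conversion_rate", "cac", "retention_30d"]])
```

Notice that every column and comparison had to be pinned down before a single line could be written; that is the work the framing step does.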

An analyst should be bilingual: fluent in business language and precise in analytical language.


Identifying the decision behind the request

Many requests are not really requests for information. They are requests for help with a decision.

This is one of the most important habits an analyst can develop: always ask, “What decision will this analysis support?”

Why decisions matter

A decision provides context for everything else:

  • which metric matters most
  • how fast the analysis must be delivered
  • how rigorous the method must be
  • what level of detail is useful
  • which tradeoffs are acceptable

A request without a decision is often too broad.

For example:

  • “Can you analyze retention?” is weak.
  • “We need to decide whether to redesign onboarding this quarter. Can you identify where new users drop off and whether the decline is concentrated in specific segments?” is actionable.

Questions to surface the decision

Useful questions include:

  • What decision are you trying to make?
  • What would you do differently depending on the answer?
  • Is this analysis for exploration, monitoring, or action?
  • Who will use the result, and when?
  • What is at risk if we are wrong?
  • Is the goal to explain, predict, prioritize, or choose?

These questions help distinguish between:

  • curiosity and urgency
  • reporting and diagnosis
  • exploration and commitment
  • strategic and operational needs

Example

A stakeholder says:

“Can you pull product usage metrics for the new feature?”

A stronger analytical response is:

  • What decision is this supporting?
  • Are we evaluating launch success, prioritizing follow-up improvements, or deciding whether to roll out to more users?
  • Which user group matters most?
  • What would count as success?

After clarification, the real need may become:

“We need to decide whether to release the feature to all customers next month, based on adoption, reliability, and effect on retention among early-access users.”

Now the analysis has a purpose.


Clarifying assumptions

Every request contains assumptions. Some are harmless. Some are dangerous. Analysts need to surface both.

Common types of assumptions

Metric assumptions

The requester may assume a metric is valid or sufficient.

  • “Engagement is down.” Which engagement metric? Sessions? Time spent? Active days? Feature usage?

Causality assumptions

The requester may assume a cause without evidence.

  • “Sales dropped because of pricing.”
  • “Users are churning because onboarding is confusing.”

These may be hypotheses, not facts.

Population assumptions

The requester may assume the issue is uniform across all users, regions, or products.

  • “Customers are unhappy.”
  • “The campaign underperformed.”

Which customers? Which markets? Which campaign slice?

Time assumptions

The requester may assume a time period is representative.

  • “Performance is declining.”

Compared with what period? The previous week? The same month last year? The pre-launch baseline?

Data assumptions

The requester may assume the data exists, is trustworthy, or maps cleanly to the question.

  • Is the event tracked?
  • Is the metric defined consistently?
  • Is there known latency or missingness?
  • Has instrumentation changed?

Clarifying assumptions in practice

The analyst should convert hidden assumptions into explicit statements.

For example:

“When you say churn is rising, do you mean logo churn or revenue churn? And are you comparing the last month to the previous month or to the same month last year?”

Or:

“You suspect the pricing change caused the decline. We can test whether the decline aligns with the rollout timing and whether affected segments differ from unaffected ones, but we should treat pricing as a hypothesis rather than a conclusion.”

This improves both rigor and stakeholder trust.
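
Once the definitions are explicit, the two churn measures are easy to compute side by side. A minimal sketch, assuming a hypothetical monthly snapshot (`account_months.csv`) with columns `account_id`, `month`, `mrr`, and `active`:

```python
import pandas as pd

# Hypothetical snapshot: one row per account per month.
snap = pd.read_csv("account_months.csv", parse_dates=["month"])

def churn_rates(snap: pd.DataFrame, start: str, end: str) -> dict:
    """Logo churn: share of accounts active at `start` that are inactive at `end`.
    Revenue churn: share of those accounts' starting MRR that was lost."""
    base = snap.loc[(snap["month"] == start) & snap["active"],
                    ["account_id", "mrr"]].copy()
    still_active = set(snap.loc[(snap["month"] == end) & snap["active"],
                                "account_id"])
    base["churned"] = ~base["account_id"].isin(still_active)
    return {
        "logo_churn": base["churned"].mean(),
        "revenue_churn": base.loc[base["churned"], "mrr"].sum() / base["mrr"].sum(),
    }

print(churn_rates(snap, "2024-01-01", "2024-02-01"))
```

Logo churn and revenue churn can move in opposite directions, which is exactly why the clarifying question matters.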

A useful discipline

When you receive a request, ask yourself:

  • What is being assumed?
  • Which assumptions can be tested?
  • Which assumptions need definition?
  • Which assumptions should be challenged before analysis begins?

Scoping the analysis

Scoping is the process of deciding what the analysis will and will not cover. It protects time, attention, and interpretability.

Weak scoping leads to bloated work: too many metrics, too many slices, too many questions, unclear endpoints. Strong scoping creates a manageable problem.

Dimensions of scope

Objective scope

What exact question will be answered?

Bad scope:

  • Analyze customer behavior.

Better scope:

  • Identify which stages of the trial-to-paid funnel changed after the onboarding redesign.

Population scope

Which users, customers, products, or units are included?

Examples:

  • new users only
  • enterprise customers only
  • users in North America
  • transactions from mobile app sessions
  • active subscriptions created after January 1

Time scope

What period matters?

Examples:

  • last 30 days
  • before and after launch
  • same quarter year-over-year
  • rolling 12 months

Metric scope

Which outcomes will be measured?

Examples:

  • conversion rate
  • retention
  • average order value
  • ticket resolution time
  • gross margin

Analytical scope

What type of analysis is in bounds?

Examples:

  • descriptive trends only
  • segmentation and root cause
  • causal inference not attempted
  • forecast included
  • no model building in this phase

In-scope vs out-of-scope framing

A simple and effective tactic is to write both:

In scope

  • New user onboarding funnel
  • Users acquired through paid channels
  • Comparison between pre-launch and post-launch 30-day windows
  • Activation and Day 7 retention

Out of scope

  • Long-term retention beyond 30 days
  • Existing users
  • Creative-level ad attribution
  • Causal estimation beyond descriptive comparisons

This avoids silent scope creep.

Time and effort realism

Scope should match decision value and deadline. Not every business question requires exhaustive analysis. Sometimes a fast 80% answer is more useful than a perfect answer delivered too late.

Scoping requires judgment:

  • What is the minimum analysis needed to support the decision?
  • What can be deferred?
  • Which slices are essential versus decorative?
  • Is this a one-time investigation or the first phase of a deeper study?

Prioritizing what matters

Analysts operate under constraints: time, data quality, stakeholder attention, and organizational urgency. Good questions are not just precise; they are prioritized.

Prioritization means focusing on leverage

Not every possible question deserves equal weight. Ask:

  • Which question is most tied to the decision?
  • Which metric most directly reflects success or failure?
  • Which segments matter commercially or operationally?
  • Which uncertainty is most costly?
  • Which answer would change action?

Common prioritization lenses

Business impact

Focus first on what affects revenue, cost, risk, customer experience, or strategy.

Decision relevance

Prefer analyses that change what someone will do, not just what they know.

Feasibility

A question with incomplete or unreliable data may need to be reframed.

Urgency

A directional answer today may be more valuable than a perfect answer next month.

Reversibility

If a decision is costly or difficult to reverse, more rigor may be justified.

Avoiding analysis sprawl

A common failure mode is to answer too many secondary questions before answering the primary one. This often happens when analysts try to be thorough without being selective.

For example, in a churn project, the primary question might be:

  • Which factors are most associated with churn among high-value customers in the last two quarters?

But the analysis becomes diluted by unrelated branches:

  • detailed geography cuts
  • every product line regardless of revenue importance
  • vanity engagement metrics
  • exploratory charts with no decision path

Prioritization means explicitly ranking questions:

  1. What do we need to know first?
  2. What do we need to know second?
  3. What is optional?

A useful question

“If I can answer only three things by the deadline, which three matter most?”

That question often reveals what the stakeholder actually values.


Turning requests into an analysis plan

Once the question is clarified, the analyst should convert it into a concrete plan. This is where good questioning becomes structured execution.

A solid analysis plan is not a full technical document. It is a compact translation of the problem into a working approach.

Core components of an analysis plan

1. Problem statement

A one- or two-sentence description of what is being investigated and why.

Example:

We need to understand why trial-to-paid conversion declined after the onboarding redesign so the product team can decide whether to iterate, revert, or continue the rollout.

2. Decision context

What action depends on the answer?

Example:

The product team will decide whether to expand the redesign to all new users next sprint.

3. Primary question

The main analytical question.

Example:

Which parts of the onboarding funnel changed after the redesign, and for which user segments?

4. Secondary questions

Supporting questions, ranked by importance.

Example:

  • Did activation decline overall?
  • Which step had the largest drop-off?
  • Was the change concentrated in mobile users or specific acquisition channels?
  • Did performance vary by geography or device type?

5. Success metrics

How the outcome will be measured.

Example:

  • onboarding completion rate
  • activation rate
  • Day 7 retention
  • error rate during onboarding

6. Population and timeframe

Who and when.

Example:

New users acquired between February 1 and March 31, comparing pre-redesign and post-redesign cohorts.

7. Data sources

Which systems or tables will be used.

Example:

  • user signup events
  • onboarding event logs
  • acquisition source data
  • retention tables

8. Method

The planned analytical approach.

Example:

Funnel analysis, cohort comparison, segmentation by device and channel, and validation of tracking completeness.

9. Constraints and caveats

Known limitations before work begins.

Example:

  • Recent tracking change may affect one onboarding step.
  • Long-term retention is not yet observable for the latest cohort.
  • Results are descriptive and not a full causal estimate.

10. Deliverable

How the result will be communicated.

Example:

A short memo with funnel charts, key segment comparisons, and a recommendation.


A lightweight template for analysts

A practical template is:

Request

What was asked?

Decision

What decision will this support?

Primary question

What is the main thing we need to answer?

Metrics

How will we measure it?

Scope

Who, what, when, and what is excluded?

Assumptions

What is currently being assumed that needs validation?

Method

What analytical approach will be used?

Risks

What data or interpretation limitations might affect confidence?

Output

What format will best support the stakeholder?

This template can be documented informally in notes, tickets, or project briefs.
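
Where informal notes feel too loose, the same template can be kept as a small structured object so that no field is silently skipped. This is purely illustrative; the field names simply mirror the headings above.

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisBrief:
    """One brief per request; fields mirror the template headings."""
    request: str
    decision: str
    primary_question: str
    metrics: list[str]
    scope: str
    assumptions: list[str] = field(default_factory=list)
    method: str = ""
    risks: list[str] = field(default_factory=list)
    output: str = ""

brief = AnalysisBrief(
    request="Analyze onboarding",
    decision="Continue, revise, or roll back the redesign",
    primary_question="How did activation change after the redesign, and for whom?",
    metrics=["onboarding completion", "activation rate", "Day 7 retention"],
    scope="New users only, 30 days pre/post redesign; existing users excluded",
    assumptions=["Tracking stable across periods"],
    method="Funnel comparison and segmentation by device and channel",
    risks=["Recent tracking change may affect one onboarding step"],
    output="Short memo with funnel charts and a recommendation",
)
print(brief.primary_question)
```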


From vague request to analysis plan: worked examples

Example 1: “Why are sales down?”

This is a common but underspecified request.

Step 1: Clarify the business context

Questions:

  • Which sales metric do you mean: orders, revenue, units, or margin?
  • Compared with what baseline?
  • Which market, product line, or customer segment is the concern?
  • What decision are you trying to make?

Step 2: Identify the decision

Possible decision:

  • Should we intervene on pricing, promotion, inventory, or sales execution?

Step 3: Reframe the question

What factors explain the quarter-over-quarter revenue decline in the North America SMB segment, and which drivers are large enough to require intervention?

Step 4: Build the plan

  • Metrics: revenue, order volume, average order value, discount rate
  • Dimensions: product line, region, channel, customer cohort
  • Timeframe: current quarter vs previous quarter and same quarter last year
  • Method: decomposition of revenue change, segmentation, trend comparison
  • Caveat: attribution to a single cause may not be possible from observational data alone
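
The “decomposition of revenue change” in that plan is plain arithmetic once the metrics are defined. A minimal sketch with invented quarter totals, splitting the change into a volume effect and an average-order-value (AOV) effect:

```python
# Invented totals for illustration: revenue = orders * AOV.
prev = {"orders": 12_000, "aov": 85.0}  # previous quarter
curr = {"orders": 10_800, "aov": 83.0}  # current quarter

rev_prev = prev["orders"] * prev["aov"]
rev_curr = curr["orders"] * curr["aov"]

# Midpoint attribution: value each change at the average of the other factor,
# so the two effects sum exactly to the total change.
volume_effect = (curr["orders"] - prev["orders"]) * (prev["aov"] + curr["aov"]) / 2
aov_effect = (curr["aov"] - prev["aov"]) * (prev["orders"] + curr["orders"]) / 2

print(f"Revenue change: {rev_curr - rev_prev:,.0f}")   # -123,600
print(f"  volume effect: {volume_effect:,.0f}")        # -100,800
print(f"  AOV effect:    {aov_effect:,.0f}")           # -22,800
```

The midpoint split is one convention among several; what matters is that the effects are exhaustive, so no part of the decline goes silently unexplained.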

Now the request is analytically tractable.


Example 2: “Can you build a dashboard for customer success?”

This request sounds operational but still needs questioning.

Step 1: Clarify purpose

Questions:

  • What decisions should the dashboard help make?
  • Who will use it: executives, managers, individual CSMs?
  • Is the goal monitoring, prioritization, or root-cause investigation?
  • What actions should users take after viewing it?

Step 2: Surface actual need

The real need may be:

Customer success managers need to identify at-risk accounts weekly and prioritize outreach.

Step 3: Reframe the question

Which account health indicators best identify near-term churn or renewal risk, and what should be shown in a weekly operational dashboard?

Step 4: Build the plan

  • Metrics: product usage decline, support volume, unresolved tickets, NPS signals, renewal date proximity
  • Population: accounts above a certain ARR threshold
  • Timeframe: weekly refresh, trailing 30-day activity
  • Deliverable: dashboard plus account-prioritization logic
  • Caveat: a dashboard alone does not solve prioritization unless thresholds and ownership are defined
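
The account-prioritization logic could start as a transparent rule-based score rather than a model. Everything below is illustrative: the file, column names, weights, and thresholds are placeholders to be agreed with the customer success team, not derived from data.

```python
import pandas as pd

# Hypothetical weekly snapshot with the signals listed above.
accounts = pd.read_csv("account_health.csv")
# Assumed columns: account_id, arr, usage_change_30d (fraction),
# open_tickets, days_to_renewal, nps.

# Explicit threshold rules; each weight reflects assumed signal strength.
accounts["risk_score"] = (
    (accounts["usage_change_30d"] < -0.20).astype(int) * 3
    + (accounts["open_tickets"] >= 5).astype(int) * 2
    + (accounts["days_to_renewal"] <= 60).astype(int) * 2
    + (accounts["nps"] <= 6).astype(int) * 1
)

watchlist = (
    accounts[accounts["arr"] >= 50_000]  # population scope from the plan
    .sort_values("risk_score", ascending=False)
    .head(20)
)
print(watchlist[["account_id", "arr", "risk_score"]])
```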

The analyst has moved from “build a dashboard” to “define decision-relevant monitoring.”


Example 3: “Did the campaign work?”

Step 1: Clarify success definition

Questions:

  • What does “work” mean: clicks, leads, purchases, revenue, or retention?
  • Compared with what baseline or control?
  • Over what attribution window?
  • Is the decision about scaling, pausing, or redesigning the campaign?

Step 2: Reframe

Did the March paid campaign improve qualified acquisitions at an acceptable cost relative to prior campaigns and baseline channel performance?

Step 3: Plan

  • Metrics: impressions, CTR, conversion rate, CAC, lead quality, Day 30 retention
  • Segments: audience, creative, channel, geography
  • Method: before/after comparison, channel benchmarks, cohort follow-up
  • Caveat: causality depends on attribution quality and possible overlap with other campaigns

Again, the key move is from a binary, vague question to a measurable, decision-oriented one.


Example question trees

Question trees are a practical way to break a broad question into smaller analytical branches. They help analysts organize thinking, expose assumptions, and avoid jumping directly to data pulls without structure.

A question tree starts with a top-level question and branches into progressively more specific subquestions.

Why use question trees

Question trees help with:

  • decomposing broad problems
  • sequencing analysis
  • identifying missing definitions
  • distinguishing primary from secondary questions
  • aligning stakeholders before execution

A good question tree is not a random brainstorm. It should be logically structured, decision-relevant, and scoped.
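
Because a question tree is just a hierarchy, it can be drafted and circulated as a plain nested structure before any data work begins. A minimal sketch, with node text abbreviated from the first example below:

```python
# Each key is a question; each value holds its subquestions (empty = leaf).
tree = {
    "Why is revenue down?": {
        "Is it actually down, and relative to what baseline?": {},
        "Is the decline broad or concentrated?": {
            "Which regions declined?": {},
            "Which product lines declined?": {},
        },
        "Which revenue component changed?": {
            "Fewer customers?": {},
            "Lower average order value?": {},
        },
    }
}

def print_tree(node: dict, depth: int = 0) -> None:
    """Depth-first print, indenting one level per layer of subquestions."""
    for question, children in node.items():
        print("  " * depth + "- " + question)
        print_tree(children, depth + 1)

print_tree(tree)
```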


Question tree example 1: Why is revenue down?

Top-level question

Why is revenue down?

Branch 1: Is revenue actually down, and relative to what?

  • Compared with last week, last quarter, or last year?
  • Is the decline nominal or inflation-adjusted?
  • Is it a temporary fluctuation or a sustained trend?

Branch 2: Is the decline broad or concentrated?

  • Which regions declined?
  • Which product lines declined?
  • Which customer segments declined?
  • Which channels declined?

Branch 3: What component of revenue changed?

  • Fewer customers?
  • Lower order frequency?
  • Lower average order value?
  • Higher discounting?
  • Increased churn?

Branch 4: What operational or market changes coincide with the decline?

  • Pricing changes?
  • Stockouts or fulfillment issues?
  • Competitor actions?
  • Marketing spend changes?
  • Product quality issues?

Branch 5: What action does the business need to consider?

  • Adjust pricing?
  • Change promotions?
  • Reallocate marketing budget?
  • Address supply constraints?
  • Investigate segment-specific churn?

This tree turns a generic executive question into a sequence of analytical tasks.


Question tree example 2: Why is churn increasing?

Top-level question

Why is churn increasing?

Branch 1: Definition and measurement

  • What churn definition are we using: logo churn, user churn, or revenue churn?
  • What period defines churn?
  • Is churn genuinely rising, or did the definition or tracking change?

Branch 2: Where is churn increasing?

  • New customers or mature customers?
  • Small accounts or enterprise accounts?
  • Specific industries or geographies?
  • Specific acquisition channels?

Branch 3: What patterns precede churn?

  • Declining product usage?
  • Increase in support tickets?
  • Failed onboarding?
  • Contract or pricing changes?
  • Reduced stakeholder engagement?

Branch 4: What changed recently?

  • Product releases?
  • Service reliability?
  • Pricing or packaging?
  • Team changes in account management?
  • Market conditions?

Branch 5: What decision must be made?

  • Improve onboarding?
  • Prioritize retention outreach?
  • Adjust pricing?
  • Fix product reliability?
  • Redefine target segments?

This tree ensures that churn is not treated as a single undifferentiated phenomenon.


Question tree example 3: Should we launch this feature to everyone?

Top-level question

Should we roll out the feature broadly?

Branch 1: Adoption

  • Are eligible users discovering the feature?
  • Are they using it repeatedly?
  • Which segments adopt it most?

Branch 2: User value

  • Does usage correlate with improved activation or retention?
  • Are users completing tasks faster or more successfully?
  • Is satisfaction improving?

Branch 3: Reliability and risk

  • Is the feature stable?
  • Are error rates acceptable?
  • Has support burden increased?
  • Are there performance regressions?

Branch 4: Operational readiness

  • Can support, sales, and success teams handle a full rollout?
  • Is documentation ready?
  • Are instrumentation and monitoring sufficient?

Branch 5: Decision thresholds

  • What minimum adoption level is acceptable?
  • What maximum error rate is tolerable?
  • What signals would justify delaying rollout?

This tree links product evaluation to launch criteria rather than mere curiosity.
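
Branch 5 is worth making literal: once thresholds are agreed, the launch gate can be written down as an explicit, reviewable check. The numbers below are placeholders, not recommendations.

```python
def rollout_decision(adoption_rate: float, error_rate: float,
                     tickets_per_100_users: float) -> str:
    """Apply the agreed launch thresholds in a fixed order of severity."""
    if error_rate > 0.01:
        return "delay: error rate above tolerance"
    if adoption_rate < 0.25:
        return "delay: adoption below minimum"
    if tickets_per_100_users > 4:
        return "delay: support burden too high"
    return "proceed with broad rollout"

print(rollout_decision(adoption_rate=0.31, error_rate=0.004,
                       tickets_per_100_users=2.5))
```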


Traits of strong analytical questions

A strong analytical question is usually:

Specific

It defines the subject, metric, scope, or comparison.

Weak:

  • Are users engaged?

Strong:

  • Has weekly active usage among new mobile users changed since the onboarding redesign?

Decision-oriented

It supports action.

Weak:

  • What is happening with enterprise accounts?

Strong:

  • Which enterprise accounts show the clearest renewal risk signals for proactive outreach this month?

Measurable

It can be answered with available or obtainable data.

Weak:

  • Do customers love the product?

Strong:

  • How have NPS, retention, repeat usage, and support sentiment changed among customers using the new workflow?

Bounded

It has clear scope.

Weak:

  • Analyze marketing performance.

Strong:

  • Compare paid search and paid social performance for first-time customer acquisition in Q1, focusing on CAC and 30-day retention.

Neutral

It does not hard-code the answer.

Weak:

  • How much did the price increase hurt sales?

Stronger:

  • How did sales change after the price increase, and what other factors changed during the same period?

Neutral framing reduces confirmation bias.


Common mistakes when asking or accepting questions

Mistaking a solution for a question

Requests often begin with a proposed solution:

  • “Build a dashboard”
  • “Run an A/B test”
  • “Make a churn model”

The analyst should ask what problem the solution is meant to solve.

Accepting causal language too early

Statements like “because of pricing” or “due to the redesign” may be untested beliefs. Treat them as hypotheses.

Letting the metric remain undefined

Terms like engagement, quality, growth, value, and success require explicit definitions.

Ignoring the decision timeline

An excellent analysis delivered after the decision has already been made has limited value.

Failing to identify exclusions

Without clear exclusions, analysis expands indefinitely.

Trying to answer everything

Breadth can create superficial work. Depth on the highest-value questions is often better.


Practical questions analysts should ask early

When receiving a request, analysts can use a short diagnostic set of questions:

About purpose

  • What decision will this support?
  • Who is the audience?
  • What action depends on the result?

About scope

  • Which population are we focused on?
  • What timeframe matters?
  • Which metric is primary?

About assumptions

  • What do we already believe, and how confident are we?
  • Are we assuming causality?
  • Has anything changed in definitions or tracking?

About constraints

  • When is this needed?
  • What level of rigor is required?
  • What data sources are available and trusted?

About output

  • Do you need a quick answer, a deep-dive analysis, or a recurring report?
  • Should the output be a memo, dashboard, presentation, or recommendation?

These questions are not a script to recite mechanically. They are a framework for disciplined problem framing.


A compact end-to-end example

Suppose a stakeholder says:

“We think onboarding is failing. Can you analyze it?”

A strong analyst might translate that into:

Clarified objective

Determine whether onboarding performance declined after the redesign and whether the decline is concentrated in specific user segments.

Decision

The product team must decide whether to continue, revise, or roll back the redesign.

Primary question

How did activation and step completion rates change for new users after the redesign?

Secondary questions

  • Which onboarding step has the largest drop-off?
  • Is the decline concentrated by device, geography, or acquisition source?
  • Did support contacts or error rates increase during onboarding?

Scope

  • New users only
  • 30 days before and after redesign
  • Mobile and web analyzed separately

Assumptions to test

  • The redesign is the cause of the decline
  • Tracking remained stable across periods
  • Activation definition is unchanged

Method

Funnel comparison, segmentation, instrumentation check, contextual review of release timing.

Deliverable

Short memo with funnel breakdown, likely drivers, caveats, and recommendation.

This is the transition from vague concern to useful analysis.
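
For the funnel comparison named in the method, a minimal sketch, assuming a hypothetical event log (`onboarding_events.csv`) with one row per user per completed step and columns `user_id`, `step`, `cohort` ("pre" or "post"), and `device`:

```python
import pandas as pd

events = pd.read_csv("onboarding_events.csv")
steps = ["signup", "profile", "first_action", "activated"]  # assumed order

# Users reaching each step, per device and cohort, normalized to the entry step.
funnel = (
    events[events["step"].isin(steps)]
    .pivot_table(index=["device", "cohort"], columns="step",
                 values="user_id", aggfunc="nunique")
    .reindex(columns=steps)
)
funnel = funnel.div(funnel["signup"], axis=0)
print(funnel.round(3))
```

Normalizing each cohort to its own entry step keeps the pre/post drop-off comparable even when cohort sizes differ.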


Closing perspective

Asking good questions is not administrative overhead before “real analysis” begins. It is part of the analysis. In many cases, the highest-leverage contribution an analyst makes is not a chart, model, or SQL query, but a reframed question that changes the direction of the work.

A disciplined analyst learns to pause before solving, identify the decision behind the request, clarify assumptions, bound the scope, prioritize what matters, and write an analysis plan that is fit for purpose.

The quality of the answer rarely exceeds the quality of the question. Strong analysts know that better questions are not a prelude to insight. They are the beginning of it.