

You don’t have an idea problem. You have a prioritization problem. Between the Slack threads, GPT-generated brainstorms, “we should try TikTok” comments, and that one tempting growth hack you saw on LinkedIn at 1 a.m., it’s easy to end every week with 15 new experiments and zero conviction.
Here’s the truth: not every idea is a unicorn; some are just donkeys with glitter. The job is sorting the real opportunities from the glitter. That’s where the ICE framework shines, and where AI turns it from a simple score into a decision engine you can trust.
ICE stands for Impact, Confidence, and Ease. It’s a quick way to prioritize growth ideas on a 1–10 scale so your team can focus on what matters.
You give each idea a score for I, C, and E, then average them to get an ICE score. Rank ideas from highest to lowest. Start at the top. It’s simple, fast, and better than arguing in a meeting.
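If you like seeing the mechanic as code, here is a minimal Python sketch with made-up ideas and scores; the averaging and ranking really are all there is to it.

```python
# Minimal ICE scoring sketch: average Impact, Confidence, Ease, then rank.
# Idea names and scores are made up for illustration.
ideas = {
    "Referral program revamp": {"impact": 8, "confidence": 5, "ease": 4},
    "Onboarding checklist": {"impact": 6, "confidence": 7, "ease": 8},
}

def ice_score(s: dict) -> float:
    return (s["impact"] + s["confidence"] + s["ease"]) / 3

for name, s in sorted(ideas.items(), key=lambda kv: ice_score(kv[1]), reverse=True):
    print(f"{name}: ICE = {ice_score(s):.1f}")
```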
This is miles better than deferring to the HiPPO (highest-paid person’s opinion). But there’s a catch.
Traditional ICE is a great start, but it needs better inputs. And faster, fairer scoring. That’s where AI steps in.
AI doesn’t replace your strategy. It strengthens it with better data, probabilistic thinking, and automation. Here’s how to plug AI into each part of ICE.
1. Impact: Predict outcomes with historical and real-time data
Instead of guessing “Impact = 8,” use AI to model likely outcomes based on your own history and live context.
Output: a predicted impact range (e.g., $60k–$90k incremental revenue over 90 days) and a normalized 1–10 Impact score derived from that range.
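One way to derive that normalized score is to interpolate the predicted range against your rubric anchors. The sketch below is illustrative only: it assumes the $5k and $250k anchors from the rubric later in this post and uses log-scale interpolation, which you may or may not want.

```python
import math

# Illustrative: map a predicted revenue range to a 1-10 Impact score by
# log-interpolating its midpoint between two rubric anchors ($5k -> 1, $250k -> 10).
LOW_ANCHOR, HIGH_ANCHOR = 5_000, 250_000

def impact_score(low: float, high: float) -> float:
    midpoint = (low + high) / 2
    position = (math.log(midpoint) - math.log(LOW_ANCHOR)) / (
        math.log(HIGH_ANCHOR) - math.log(LOW_ANCHOR)
    )
    return round(min(max(1 + 9 * position, 1), 10), 1)

print(impact_score(60_000, 90_000))  # the $60k-$90k range above scores roughly 7
```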
2. Confidence: Move from gut feel to evidence-based probabilities
AI won’t tell you the future, but it will estimate probabilities and show its work.
Output: a data-backed probability (e.g., 72% chance of achieving at least $50k incremental revenue) and a matching 1–10 Confidence score, with links to the underlying evidence.
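The simplest way to turn that probability into a score is a direct mapping. The linear version below is an assumption, but it is the easiest to explain to stakeholders.

```python
# Illustrative: map a model-estimated probability of hitting the target outcome
# to a 1-10 Confidence score. The linear mapping is an assumption; tune as needed.
def confidence_from_probability(p_hit_target: float) -> float:
    """p_hit_target: estimated probability (0-1) of reaching the target outcome."""
    return round(min(max(p_hit_target * 10, 1), 10), 1)

print(confidence_from_probability(0.72))  # the 72% example above -> 7.2
```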
3. Ease: Automate resource and time analysis
Ease shouldn’t rely on someone’s calendar glance. AI can do better with actual constraints.
Output: a predicted time-to-ship, team hours, hard cost, and a normalized Ease score with notes on bottlenecks.
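As a rough sketch, you can fold hours, hard cost, and dependencies into a single penalty. The weights below are assumptions picked for illustration; calibrate them against your own rubric.

```python
# Illustrative Ease scoring: more hours, hard cost, and dependencies push the score down.
def ease_score(team_hours: float, hard_cost: float, dependencies: int) -> float:
    score = 10 - team_hours / 25 - hard_cost / 8_000 - dependencies * 0.7
    return round(min(max(score, 1), 10), 1)

print(ease_score(team_hours=80, hard_cost=12_000, dependencies=2))  # roughly 4/10
print(ease_score(team_hours=40, hard_cost=6_000, dependencies=1))   # roughly 7/10
```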
Let’s imagine two options for next quarter:
Experiment A: A micro-influencer campaign on TikTok with 30 creators.
Experiment B: A behavior-based onboarding email revamp with better segmentation and lifecycle triggers.
Without AI (traditional ICE)
Impact:
A: 7 (influencers can drive volume)
B: 7 (activation lifts are valuable)
Confidence:
A: 5 (we haven’t done TikTok yet)
B: 6 (email is known terrain)
Ease:
A: 6 (briefs, contracts, tracking)
B: 7 (we own the stack)
ICE Average:
A: (7 + 5 + 6) / 3 = 6.0
B: (7 + 6 + 7) / 3 = 6.7
B wins, but not decisively. A loud advocate could flip the decision.
With AI-augmented ICE
We connect analytics, CRM, marketing automation, attribution, and project planning data. Then we run each idea through an “ICE bot” that predicts outcomes, assigns probabilities, and estimates effort.
Impact predictions:
A: Based on prior creator campaigns on other platforms and current CAC trends, predicted 1,200 incremental free signups, with 8% activation and $50 LTV. Expected incremental revenue over 90 days ≈ $4,800 (1,200 × 8% × $50). Estimated hard cost ≈ $12,000. Net negative within the first 90 days, with a 30% chance of hitting breakeven if two creators significantly outperform.
B: Based on historical email experiments and activation funnel shape, predicted a 15-percentage-point lift in activation on 20,000 monthly signups → +3,000 newly activated users per month. With $50 LTV, 90-day realized value ≈ $120,000 (assuming partial LTV realization). Cost ≈ $6,000 in tools and ~40 hours of work.
Normalized Impact scores (1–10): A = 4, B = 9.
Confidence estimates:
A: The model finds few TikTok analogs in your data, high variance, and recent CPM inflation; backtest errors are wide. Confidence 4/10.
B: Strong analogs across similar lifecycle tests, stable channel, low backtest error. Confidence 7/10.
Ease assessments:
A: 80 hours across growth, creative, and analytics. Vendor contracting plus tracking setup. Two-week lead times. Ease 4/10.
B: 40 hours across lifecycle and design. One dependency on product events. No vendors. Ease 7/10.
ICE Average (AI-enhanced):
A: (4 + 4 + 4) / 3 = 4.0
B: (9 + 7 + 7) / 3 = 7.7
Now the difference isn’t a debate; it’s obvious. You ship B first, then revisit A when your model finds better creator matches or pricing shifts.
You don’t need a data science team to get real value. Start with a lightweight, test-and-learn approach.
1) Standardize your scoring rubric
Define 1–10 anchors for each dimension.
Impact: 1 = <$5k or negligible metric movement; 10 = >$250k or step-change metric movement.
Confidence: 1 = anecdote-only; 10 = multiple in-house analogs and consistent backtests.
Ease: 1 = multiple teams/quarter-scale effort; 10 = single-owner, ship within a week.
Write it down. Consistency compounds.
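Writing it down can be a shared doc, a sheet, or a small config that every score references. A hypothetical version in Python, mirroring the anchors above:

```python
# Hypothetical rubric config mirroring the anchors above; add middle anchors
# (2-9) as your team calibrates so everyone interpolates the same way.
RUBRIC = {
    "impact":     {1: "<$5k or negligible metric movement",
                   10: ">$250k or step-change metric movement"},
    "confidence": {1: "anecdote-only",
                   10: "multiple in-house analogs and consistent backtests"},
    "ease":       {1: "multiple teams, quarter-scale effort",
                   10: "single owner, ships within a week"},
}
```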
2) Centralize your ideas and data
Use a single backlog (Notion, Airtable, or Google Sheet).
For each idea, capture hypothesis, target metric, audience, channel, expected timeline, and dependencies.
Connect sources you already have: analytics (GA4, Amplitude), CRM (HubSpot, Salesforce), experimentation logs, and project tools (Jira, Asana).
3) Build a simple AI scoring assistant
Use an LLM to transform freeform ideas into structured attributes like channel, funnel stage, required skills, and risk factors.
Prompt it to assign a preliminary Ease estimate based on past tasks and your rubric.
Ask it to fetch relevant internal benchmarks (e.g., average activation lift from onboarding tests, average CTR from lifecycle emails).
Require citations: have the assistant link to the dashboards or past experiments it used.
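Here is a minimal sketch of that assistant using the OpenAI Python SDK. The model name, prompt, and output fields are assumptions, and the benchmark-fetching and citation steps would need your own tooling (for example, function calls against your dashboards), which is omitted here; any LLM with JSON output works the same way.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = """You turn freeform growth ideas into structured attributes.
Return JSON with: channel, funnel_stage, required_skills, risk_factors,
preliminary_ease (1-10), and evidence_needed.
Ease rubric: 1 = multiple teams, quarter-scale effort; 10 = single owner, ships within a week."""

def structure_idea(freeform_idea: str) -> dict:
    # Ask the model for structured JSON so the backlog stays machine-readable.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever your stack provides
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": freeform_idea},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(structure_idea("Revamp onboarding emails with behavior-based segmentation"))
```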
4) Add an impact predictor
Start simple: a regression or rule-based model that estimates incremental outcomes using your historical experiments.
Inputs: baseline metric level, target audience size, channel saturation, similar past tests, creative complexity, and seasonality.
Output: predicted impact range and a normalized 1–10 Impact score.
If you’re non-technical, you can approximate with a spreadsheet model driven by lookups to past tests and conversion baselines. Good enough beats perfect.
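A spreadsheet-style version of that predictor fits in a few lines of Python: look up past experiments that resemble the new idea, take the spread of their observed lifts, and apply it to the idea’s baseline. Everything below (field names, numbers) is illustrative.

```python
# Rule-based impact predictor: take the spread of lifts from analogous past
# experiments and apply it to the new idea's baseline value. Illustrative data.
PAST_EXPERIMENTS = [
    {"channel": "email", "stage": "activation", "relative_lift": 0.12},
    {"channel": "email", "stage": "activation", "relative_lift": 0.18},
    {"channel": "paid_social", "stage": "acquisition", "relative_lift": 0.05},
]

def predict_impact(channel: str, stage: str, baseline_monthly_value: float,
                   months: int = 3) -> tuple[float, float]:
    analogs = [e["relative_lift"] for e in PAST_EXPERIMENTS
               if e["channel"] == channel and e["stage"] == stage]
    if not analogs:
        return (0.0, 0.0)  # no analogs: fall back to manual estimation
    return (min(analogs) * baseline_monthly_value * months,
            max(analogs) * baseline_monthly_value * months)

# e.g. activation currently worth ~$200k/month in near-term value
print(predict_impact("email", "activation", baseline_monthly_value=200_000))  # (72000.0, 108000.0)
```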
5) Quantify confidence the right way
Use backtesting: “If we had used this model last year, how close would it have been?”
Factor in similarity: more analogs = higher confidence.
Include data freshness and drift: older or unstable data = lower confidence.
Normalize to 1–10 and attach the confidence interval so stakeholders see uncertainty, not just a number.
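One hedged way to combine those signals into a single number: weight backtest accuracy, analog count, and data freshness, then stretch the blend onto the 1–10 scale. The weights and cutoffs below are assumptions.

```python
# Illustrative Confidence scoring from the three evidence signals above.
def confidence_from_evidence(backtest_mape: float, analog_count: int,
                             data_age_months: float) -> float:
    accuracy = max(0.0, 1 - backtest_mape)          # 1.0 = perfect backtest
    similarity = min(analog_count / 5, 1.0)         # saturates at 5 analogs
    freshness = max(0.0, 1 - data_age_months / 24)  # decays over ~2 years
    blended = 0.5 * accuracy + 0.3 * similarity + 0.2 * freshness
    return round(1 + 9 * blended, 1)

print(confidence_from_evidence(backtest_mape=0.3, analog_count=5, data_age_months=6))  # ~8.2
```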
6) Automate the Ease math
Map roles to effort using your project history: typical hours for ad creative, email builds, event instrumentation, QA, and reviews.
Integrate with your PM tool to surface dependencies and queues.
Convert expected hours, hard costs, and dependencies into a 1–10 Ease score with a short note.
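Here is a sketch of that conversion, assuming you have mined typical hours per task type from your project history; the task names and hour values below are placeholders.

```python
# Illustrative Ease automation: estimate hours from a task-level effort map built
# from project history, then normalize. Hour values are placeholders.
TYPICAL_HOURS = {
    "ad_creative": 12, "email_build": 8, "event_instrumentation": 16,
    "qa": 6, "review": 4, "vendor_contracting": 10,
}

def estimate_effort(tasks: list[str]) -> float:
    return sum(TYPICAL_HOURS.get(task, 8) for task in tasks)  # 8h default for unknowns

def ease_from_effort(hours: float, dependencies: int) -> float:
    return round(max(10 - hours / 25 - dependencies * 0.7, 1), 1)

hours = estimate_effort(["email_build", "event_instrumentation", "qa", "review"])
print(hours, ease_from_effort(hours, dependencies=1))  # 34 hours -> roughly 7.9
```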
7) Create a recurring ritual
Re-score weekly. The world changes; so should your backlog.
Review the top 5 ideas with a small, cross-functional group.
Ship one high-ICE idea and one learning-focused bet each cycle.
Track results and feed them back into your models. That’s how the system compounds.
In my work with founders, CMOs, and product leaders, we start with a two-week sprint:
Week 1: Define the rubric, clean the backlog, connect basic data, and set up an LLM to structure ideas and estimate Ease.
Week 2: Build a simple impact predictor from past experiments, backtest it, set Confidence rules, and pilot the process on 10–15 ideas.
By the end, you have a living prioritization system. Ideas come in. The system scores them. Your team debates trade-offs with evidence, not vibes. And you ship with conviction.
Growth doesn’t come from chasing everything; it comes from prioritizing what works. ICE was already a strong filter. With AI, it becomes sharper, faster, and fairer. You’ll spend less time arguing and more time executing, and you’ll stack wins earlier and more often.
Take your next three growth ideas and score them with ICE + AI. Use your data. Ask your AI assistant to find analogs, predict impact ranges, and estimate Ease based on past work.
Pick the highest score and ship it. Then compare predicted versus actual results to improve your calibration.
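A simple calibration check: log the predicted range next to the actual result and count how often reality lands inside the range. A sketch with made-up numbers:

```python
# Illustrative calibration check: how often did actuals land inside the predicted range?
RESULTS = [
    {"idea": "Onboarding email revamp", "predicted": (60_000, 90_000), "actual": 74_000},
    {"idea": "Pricing page test",       "predicted": (10_000, 25_000), "actual": 6_500},
]

hits = sum(low <= r["actual"] <= high for r in RESULTS for (low, high) in [r["predicted"]])
print(f"Actuals inside predicted range: {hits}/{len(RESULTS)}")  # 1/2 -> revisit your inputs
```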
If you want a shortcut, join my workshop where we build an AI-augmented ICE system live, step by step, with your ideas. Feel free to contact me here.




