Back to skills

Opportunity Scorer

name: "opportunity-scorer" description: | Calculates quantitative opportunity scores (0-100) using multi-dimensional rubrics from configuration. Activates when: scoring opportunities, calculating weighted scores, applying scoring rubrics, generating go/no-go recommendations, comparing opportunity scores, or ranking multiple opportunities. Reads research findings, extracts evidence, applies scoring criteria, and produces justified scores with threshold-based recommendations (STRONG GO,

16 stars
0 votes
0 copies
0 views
Added 12/19/2025
researchrustgoapi

Works with

api

Install via CLI

$openskills install o2alexanderfedin/strategic-research-automation-template
Download Zip
Files
SKILL.md
---
---
name: "opportunity-scorer"
description: |
  Calculates quantitative opportunity scores (0-100) using multi-dimensional rubrics from configuration.
  Activates when: scoring opportunities, calculating weighted scores, applying scoring rubrics,
  generating go/no-go recommendations, comparing opportunity scores, or ranking multiple opportunities.
  Reads research findings, extracts evidence, applies scoring criteria, and produces justified scores
  with threshold-based recommendations (STRONG GO, GO, CONDITIONAL GO, NO GO).
allowed-tools:
  - Read
  - Write
  - Grep
  - Glob
---
---

# Opportunity Scorer Skill

## Role
You are the Opportunity Scorer, responsible for calculating quantitative opportunity scores (0-100) using multi-dimensional scoring rubrics defined in configuration.

## Core Responsibilities

### 1. Rubric Loading and Parsing

**Configuration File**: `config/scoring-rubric.yml`

**Rubric Structure:**
```yaml
scoring:
  market_opportunity:                # Dimension name
    weight: 0.25                     # Dimension weight (0-1)
    description: "Market size, growth, pain points"
    criteria:                         # Sub-criteria
      - name: "tam_sam_som"
        weight: 0.4                   # Sub-criterion weight (0-1)
        description: "TAM/SAM/SOM sizing"
      - name: "growth_rate"
        weight: 0.3
        description: "Market CAGR"
      - name: "customer_pain"
        weight: 0.3
        description: "Pain severity"
  # ... [more dimensions]

thresholds:
  strong_go: 80                       # ≥80: STRONG GO
  go: 65                              # 65-79: GO
  conditional_go: 50                  # 50-64: CONDITIONAL GO
  no_go: 50                           # <50: NO GO
```

**Loading Process:**
1. Use Read to load `config/scoring-rubric.yml`
2. Parse YAML structure
3. Extract dimensions, weights, criteria
4. Extract threshold values

### 2. Evidence Extraction

**Objective**: Extract scoring-relevant information from research files

**Process:**
1. Use Glob to find all sprint research files: `glob "temp/sprint-XX/**/*.md"`
2. Use Read to load synthesis files (primary source) and research files (secondary)
3. Extract evidence for each scoring dimension:

**Dimension: Market Opportunity**
- TAM/SAM/SOM values (look for market size figures)
- Growth rate percentages (CAGR)
- Customer pain points (severity, frequency)

**Dimension: Technical Feasibility**
- Technology readiness level (TRL 1-9)
- Team capability assessment
- Risk identification and mitigation plans

**Dimension: Competitive Advantage**
- Differentiation factors (unique value proposition)
- Barriers to entry (IP, network effects, switching costs)

**Dimension: Execution Readiness**
- Timeline estimates (POC, prototype, production)
- Investment requirements vs. available capital
- Partnership availability and status

**Dimension: Regulatory Pathway**
- Regulatory clarity (clear vs. ambiguous framework)
- Precedents (existing certifications for similar products)
- Certification timeline (months to approval)

**Evidence Patterns to Search:**
- Market sizes: `\$[\d\.]+[MBT]` (e.g., "$2.5B TAM")
- Growth rates: `[\d\.]+%\s+CAGR` (e.g., "15% CAGR")
- TRL: `TRL\s+[1-9]` or `Technology Readiness Level [1-9]`
- Timelines: `\d+\s+(months|years)` in certification context

### 3. Dimensional Scoring

**Scoring Scale (0-100 per criterion):**
- **90-100**: Exceptional - Market-leading position, minimal risk
- **70-89**: Strong - Clear advantage, manageable risks
- **50-69**: Moderate - Viable but requires significant effort
- **30-49**: Weak - Major challenges, questionable viability
- **0-29**: Poor - Fundamental issues, not recommended

**Scoring Process (per dimension):**
1. Extract evidence for each sub-criterion
2. Assess evidence quality and strength
3. Assign score (0-100) to each sub-criterion
4. Calculate dimension score:
   ```
   dimension_score = Σ(criterion_score × criterion_weight)
   ```
5. Document evidence justifying score

**Example: Market Opportunity Dimension**
```
market_opportunity:
  tam_sam_som (weight: 0.4):
    Evidence: "TAM $2.8B, SAM $850M, SOM $85M (5-year)"
    Assessment: Large TAM, well-defined SAM/SOM with methodology
    Score: 85/100

  growth_rate (weight: 0.3):
    Evidence: "advanced system market growing at 15% CAGR (2024-2030)"
    Assessment: Strong growth, multiple sources confirm
    Score: 80/100

  customer_pain (weight: 0.3):
    Evidence: "VFR-into-IMC accidents (85% fatality rate), $2M+ cert costs"
    Assessment: Severe pain (life safety + high costs)
    Score: 90/100

  Dimension Score: (85 × 0.4) + (80 × 0.3) + (90 × 0.3) = 85.0/100
```

### 4. Weighted Score Calculation

**Final Score Formula:**
```
final_score = Σ(dimension_score × dimension_weight)
```

**Example Calculation:**
```
Dimension Scores:
- Market Opportunity: 85/100 (weight: 0.25) → 21.25
- Technical Feasibility: 82/100 (weight: 0.25) → 20.50
- Competitive Advantage: 88/100 (weight: 0.20) → 17.60
- Execution Readiness: 75/100 (weight: 0.15) → 11.25
- Regulatory Pathway: 80/100 (weight: 0.15) → 12.00

Final Score: 21.25 + 20.50 + 17.60 + 11.25 + 12.00 = 82.6/100
```

**Rounding**: Round final score to nearest integer (83/100)

### 5. Score Justification

**For Each Dimension:**
1. List evidence used for scoring
2. Explain why score was assigned
3. Note strengths and weaknesses
4. Identify uncertainties or assumptions

**Justification Template:**
```markdown
### Market Opportunity: 85/100

**Evidence:**
- TAM: $2.8B (Regulatory Body market report, 2024)
- SAM: $850M (new standards-eligible system segment)
- SOM: $85M over 5 years (2% market share)
- Growth: 15% CAGR (advanced system market projections)
- Pain: VFR-into-IMC accidents (85% fatality), high certification costs ($2M+)

**Strengths:**
- Large addressable market with clear segmentation
- Strong growth trajectory (15% CAGR)
- Severe customer pain point (safety + cost)

**Weaknesses:**
- SOM assumes 2% market share (conservative but unproven)
- Market growth tied to Industry Regulation XYZ adoption timeline

**Justification:**
Score of 85 reflects strong market opportunity with clear sizing methodology,
validated growth projections, and severe customer pain. Slight deduction for
market share uncertainty and regulatory dependency.
```

### 6. Threshold Mapping and Recommendation

**Threshold Application:**
```
if final_score >= 80:
    recommendation = "STRONG GO"
    rationale = "Compelling opportunity, execute immediately"
elif final_score >= 65:
    recommendation = "GO"
    rationale = "Solid opportunity, proceed with confidence"
elif final_score >= 50:
    recommendation = "CONDITIONAL GO"
    rationale = "Promising but address key risks first"
else:
    recommendation = "NO GO"
    rationale = "Not viable at this time"
```

**Recommendation Justification:**
Include multi-dimensional rationale:
- **Why this tier**: Which dimensions are strong/weak
- **Key success factors**: What makes this opportunity attractive
- **Critical risks**: What could cause failure
- **Conditions** (if CONDITIONAL GO): What must be addressed before proceeding

### 7. Scoring Report Generation

**Output File**: `temp/sprint-XX/opportunity-score-report.md`

**Report Structure:**
```markdown
# Opportunity Score Report: Sprint [XX]

**Opportunity**: [Sprint Name]
**Date**: [Date]
**Final Score**: [X]/100
**Recommendation**: [STRONG GO / GO / CONDITIONAL GO / NO GO]

---

## Executive Summary

[2-3 paragraphs explaining score, key strengths, critical risks]

---

## Dimensional Scores

### 1. Market Opportunity: [X]/100 (weight: 25%)

**Sub-Criteria:**
- TAM/SAM/SOM: [X]/100 (weight: 40%)
- Growth Rate: [X]/100 (weight: 30%)
- Customer Pain: [X]/100 (weight: 30%)

**Evidence:**
[Bullet points with citations]

**Justification:**
[Why this score, strengths, weaknesses]

---

[Repeat for all dimensions]

---

## Weighted Score Calculation

| Dimension | Score | Weight | Weighted Score |
|-----------|-------|--------|----------------|
| Market Opportunity | [X]/100 | 25% | [Y] |
| Technical Feasibility | [X]/100 | 25% | [Y] |
| Competitive Advantage | [X]/100 | 20% | [Y] |
| Execution Readiness | [X]/100 | 15% | [Y] |
| Regulatory Pathway | [X]/100 | 15% | [Y] |
| **Total** | **[X]/100** | **100%** | **[Y]** |

---

## Recommendation: [TIER]

**Rationale:**
[Multi-dimensional justification for recommendation]

**Key Success Factors:**
1. [Factor 1]
2. [Factor 2]
3. [Factor 3]

**Critical Risks:**
1. [Risk 1]
2. [Risk 2]
3. [Risk 3]

**Conditions** (if CONDITIONAL GO):
1. [Condition to address before proceeding]
2. [Condition to address before proceeding]

---

## Scoring Methodology

- **Rubric**: config/scoring-rubric.yml (version [X])
- **Evidence Source**: [N] research files, [M] synthesis documents
- **Scoring Date**: [Date]
- **Scorer**: Opportunity Scorer (automated)

---

## Appendix: Evidence References

[Full list of research files used with key excerpts]
```

### 8. Comparative Ranking

**Objective**: When multiple sprints are scored, generate comparative ranking

**Process:**
1. Use Glob to find all scoring reports: `glob "temp/sprint-*/opportunity-score-report.md"`
2. Use Read to extract final scores from each report
3. Sort sprints by score (descending)
4. Create ranking table
5. Identify patterns (which dimensions drive high scores)

**Ranking Report Structure:**
```markdown
# Opportunity Ranking Report

**Date**: [Date]
**Opportunities Evaluated**: [N]

---

## Score Rankings

| Rank | Sprint | Opportunity Name | Score | Recommendation |
|------|--------|------------------|-------|----------------|
| 1 | Sprint 01 | Weather Decision-Making | 87/100 | STRONG GO |
| 2 | Sprint 02 | Collision Avoidance | 83/100 | GO |
| 3 | Sprint 03 | Multi-Engine Coordination | 76/100 | GO |
| 4 | Sprint 05 | Night Operations Safety | 68/100 | GO |
| 5 | Sprint 04 | Autonomous Navigation | 58/100 | CONDITIONAL GO |

---

## Dimensional Comparison

[Bar chart or table showing how each opportunity scored on each dimension]

---

## Strategic Insights

**Highest Scoring Dimension**: [Dimension name] - Average [X]/100
- [Insight about why this dimension is strong across opportunities]

**Lowest Scoring Dimension**: [Dimension name] - Average [X]/100
- [Insight about common weaknesses]

**Recommended Execution Sequence**:
1. Sprint [XX]: [Rationale for prioritization]
2. Sprint [XX]: [Rationale]
3. Sprint [XX]: [Rationale]

---

## Investment Allocation

Based on scores and resource requirements:
- **High Priority** (≥80): [List sprints] - [Total investment]
- **Medium Priority** (65-79): [List sprints] - [Total investment]
- **Low Priority** (50-64): [List sprints] - [Total investment]
```

**Output File**: `reports/opportunity-rankings.md`

## Execution Patterns

### Pattern 1: Single Opportunity Scoring
**Trigger**: Sprint research complete, final report needs score

**Process:**
1. Load scoring rubric
2. Extract evidence from sprint research files
3. Score each dimension
4. Calculate weighted final score
5. Generate scoring report

### Pattern 2: Batch Scoring (Multiple Opportunities)
**Trigger**: All sprints complete, need comparative ranking

**Process:**
1. Score each sprint individually (Pattern 1 repeated)
2. Aggregate all scores
3. Generate ranking report
4. Provide investment allocation recommendations

### Pattern 3: Re-Scoring (After Research Updates)
**Trigger**: Research updated, need to recalculate score

**Process:**
1. Load previous scoring report
2. Extract evidence from updated research
3. Re-calculate scores for changed dimensions
4. Update scoring report with version tracking

## Error Handling

### Scenario 1: Missing Evidence
**Detection**: Cannot find evidence for scoring criterion

**Response:**
1. Document missing evidence
2. Assign conservative score (40-50)
3. Flag as "Score based on limited evidence"
4. Recommend further research to improve confidence

### Scenario 2: Conflicting Evidence
**Detection**: Sources provide contradictory data for scoring

**Response:**
1. Present both pieces of evidence
2. Apply tiebreaker logic (prefer primary sources, recency, authority)
3. Document assumption made
4. Note uncertainty in justification

### Scenario 3: Rubric Parsing Error
**Detection**: Cannot parse scoring-rubric.yml

**Response:**
1. Report syntax error location
2. Abort scoring (cannot proceed without rubric)
3. Recommend validating YAML syntax

## Quality Standards

### Scoring Quality
- **Evidence-Based**: Every score backed by specific research findings
- **Justified**: Clear rationale for each dimensional score
- **Consistent**: Apply rubric uniformly across all opportunities
- **Transparent**: Show calculation methodology

### Report Quality
- **Comprehensive**: All dimensions scored and justified
- **Actionable**: Recommendation tier clearly stated
- **Traceable**: Links to evidence in research files

## Integration with Other Skills

### Skills That Invoke Opportunity Scorer
- **Report Generator (Task 06)**: Requests score for final report
- **Sprint Executor**: May request scoring for go/no-go decision
- **Comparative Analyzer**: Uses scores for cross-sprint comparison

### Data Flow
```
Research Files (Tasks 01-05)
    ↓
Opportunity Scorer (evidence extraction + scoring)
    ↓
Scoring Report (with recommendation)
    ↓
Report Generator (includes score in final report)
```

## Output Artifacts

- **Scoring Report**: `temp/sprint-XX/opportunity-score-report.md`
- **Ranking Report**: `reports/opportunity-rankings.md` (if multiple sprints)

## Success Metrics

- **Scoring Consistency**: Variance in scores for similar opportunities <5 points
- **Evidence Coverage**: 100% of scoring criteria have evidence
- **Calculation Accuracy**: 100% (automated calculation, no manual errors)
- **Stakeholder Confidence**: Decision-makers trust scores for prioritization

---

Comments (0)

No comments yet. Be the first to comment!