How We Select Articles
A transparent, evidence-based methodology combining algorithmic scoring with expert clinical review
Our Mission
Every week, hundreds of medical AI papers are published across PubMed and arXiv. We combine algorithmic efficiency with expert clinical judgment to identify the 5 most impactful studies for busy clinicians, researchers, and healthcare leaders.
Our goal: Save you time by surfacing only the research that could meaningfully impact clinical practice, patient outcomes, or healthcare delivery.
The Selection Process
Comprehensive Data Collection
We search peer-reviewed journals and preprint servers for medical AI research published in the past week. This yields dozens of articles covering clinical trials, diagnostic studies, and novel AI methods.
Why both peer-reviewed and preprints?
Peer-reviewed articles offer validated, rigorous findings. Preprints provide early access to breakthrough research that may take months to appear in journals. We apply stricter scoring criteria to preprints to ensure quality.
Systematic Quality Assessment
Our system analyzes each article using evidence-based criteria to extract:
- Study design: RCT, meta-analysis, prospective cohort, retrospective study
- Clinical outcomes: Mortality, hospitalization, diagnostic accuracy, quality of life
- Statistical rigor: Effect sizes, confidence intervals, p-values, sample sizes
- Clinical applicability: Patient populations, care settings, implementation feasibility
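To make this concrete, here is a minimal sketch of how those extracted fields could be represented. The field names and types are illustrative, not our production schema:

```python
from dataclasses import dataclass, field


@dataclass
class ArticleAssessment:
    """Hypothetical record of the fields extracted for each article."""
    study_design: str                       # e.g. "RCT", "meta-analysis", "retrospective"
    outcomes: list[str] = field(default_factory=list)  # e.g. ["mortality", "hospitalization"]
    effect_size: float | None = None        # reported effect size, if stated
    ci_lower: float | None = None           # 95% confidence interval bounds, if reported
    ci_upper: float | None = None
    p_value: float | None = None
    sample_size: int | None = None
    care_setting: str | None = None         # e.g. "ICU", "primary care", "outpatient"
    is_preprint: bool = False               # drives the preprint scoring penalty below
```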
Multi-Dimensional Scoring
Each article receives scores across four critical dimensions that reflect both scientific rigor and clinical utility. This multi-dimensional approach ensures we're not just selecting technically impressive studies, but research that could meaningfully improve patient care and clinical decision-making.
Our scoring framework is inspired by evidence-based medicine principles, GRADE methodology, and clinical practice guideline development processes. Here's what we evaluate and why each dimension matters:
S1: Practice-Changing Potential (0–10)
Assesses whether research could realistically alter clinical workflows, treatment decisions, or diagnostic approaches within 6-12 months. We prioritize studies that bridge the gap between research and real-world implementation.
S2: Evidence Strength (0–10)
Scores studies based on evidence hierarchy. Stronger designs (RCTs, prospective cohorts) receive higher scores as they minimize bias and provide more reliable estimates.
- 9–10: Randomized controlled trials, meta-analyses of RCTs
- 6–8: Prospective cohorts, large registries, externally validated diagnostic studies
- 4–5: Retrospective studies, single-center studies
- 0–3: Case series, proof-of-concept studies
S3: Outcome Importance (0–10)
Prioritizes studies measuring patient-important outcomes (mortality, quality of life, symptom relief) rather than surrogate markers or technical metrics.
- 9–10: Mortality, major morbidity, hospitalization
- 7–8: Quality of life, functional outcomes, patient-reported measures
- 5–6: Diagnostic accuracy with a clear clinical pathway
- 0–4: Technical metrics only (AUC, F1 score) without clinical context
S4: Statistical Clarity (0–10)
Requires transparent, interpretable statistics. We penalize studies reporting p-values without effect sizes, vague language ("significant improvement"), or missing confidence intervals.
- 9–10: Effect sizes with confidence intervals, clinically meaningful magnitude
- 6–8: Effect sizes with p-values and adequate sample size
- 4–5: P-values only, or vague claims ("significant improvement")
- 0–3: No statistics, unclear results, or incomplete reporting
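To show how these rubrics translate into numbers, here is a simplified sketch of the S2 and S4 scoring logic. The band boundaries come straight from the rubrics above; the function names and exact point assignments are illustrative. S1 and S3 are scored against their bands in the same fashion:

```python
def score_evidence_strength(design: str) -> int:
    """S2: map a study design to the evidence-hierarchy bands above (illustrative)."""
    bands = {
        "meta-analysis": 10, "RCT": 9,
        "prospective cohort": 8, "registry": 7, "validated diagnostic": 6,
        "retrospective": 5, "single-center": 4,
        "case series": 2, "proof-of-concept": 1,
    }
    return bands.get(design, 0)


def score_statistical_clarity(effect_size: float | None,
                              has_ci: bool,
                              p_value: float | None) -> int:
    """S4: reward effect sizes with confidence intervals, penalize bare p-values."""
    if effect_size is not None and has_ci:
        return 9   # effect size + CI: top band
    if effect_size is not None and p_value is not None:
        return 7   # effect size + p-value: middle band
    if p_value is not None:
        return 4   # p-value only: low band
    return 1       # no usable statistics
```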
Journal Impact Factor Weighting
We combine clinical scores with journal prestige to create the final algorithmic ranking:
For Peer-Reviewed Articles:
We combine clinical relevance scores with journal quality indicators using a weighted formula.
Why journal quality matters: High-impact journals (Nature Medicine, JAMA, Lancet) have rigorous peer review, editorial oversight, and statistical review. This adds an additional quality signal beyond our clinical assessment.
For Preprints:
Preprints receive a 44% scoring penalty to account for the lack of peer review.
Only breakthrough preprints with exceptional clinical relevance scores make the final selection. This ensures quality while still providing early access to important findings.
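Here is a minimal sketch of how the final ranking could combine these ingredients. The 44% preprint penalty is the figure we publish; the 0.8/0.2 blend and the 0–10 journal-weight scale are hypothetical stand-ins for our actual formula:

```python
def final_score(clinical_scores: dict[str, float],
                journal_weight: float,
                is_preprint: bool) -> float:
    """Illustrative final ranking: clinical scores blended with journal quality.

    clinical_scores holds S1-S4 on a 0-10 scale; journal_weight is a 0-10
    journal-quality indicator. The 0.8/0.2 blend is hypothetical; the 44%
    preprint penalty is the figure stated in our FAQ.
    """
    s = sum(clinical_scores.values()) / len(clinical_scores)  # mean of S1-S4
    score = 0.8 * s + 0.2 * journal_weight
    if is_preprint:
        score *= 1 - 0.44   # preprints lose 44% to offset missing peer review
    return score
```

Under this sketch, a preprint needs near-ceiling clinical scores to outrank a solid peer-reviewed study, which matches the breakthrough-only behavior described above.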
Editorial Board Review
This is where human expertise meets algorithmic efficiency.
Our system identifies the highest-scored articles from the weekly pool. Our editorial board then reviews these candidates and selects the final articles for publication.
What the Editorial Board Evaluates:
- Clinical relevance: Does this address a real problem clinicians face?
- Specialty balance: Ensuring diverse medical domains (radiology, pathology, cardiology, etc.)
- Implementation feasibility: Could this realistically be adopted in clinical settings?
- Conflicts of interest: Checking for industry bias or methodological concerns
- Contextual factors: Recent controversies, related studies, or clinical guidelines
Why this matters: Algorithms can score study design and statistics, but only experienced clinicians can judge whether a finding is truly actionable, addresses an unmet need, or fits into the broader clinical context. This human oversight ensures every article we publish is worth your time.
Quality Safeguards
🛡️ Anti-Spin Protection
Papers without effect sizes or confidence intervals are automatically capped at low scores.
Example filtered out: "Our AI showed significant improvement" (no numbers, no CI, no p-value)
📊 Complete Reporting Required
Abstracts lacking methods, results, or sample sizes are flagged and penalized.
Example filtered out: "We developed a novel AI system" (no validation, no outcomes, no data)
🎯 Clinical Relevance Gate
Papers without patient cohorts or clinical endpoints receive low outcome scores.
Example filtered out: "Our model achieves 98% accuracy on ImageNet" (no clinical validation)
🔬 Context-Aware Detection
We verify study design claims through contextual analysis, not just keyword matching.
Example: Simply mentioning "randomized" isn't sufficient without supporting methodological context
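As a simplified illustration of the anti-spin gate, the sketch below caps a statistical-clarity score when a vague claim appears without any effect size or confidence interval behind it. The regex patterns are deliberately naive stand-ins; as noted above, the production checks rely on contextual analysis rather than keyword matching alone:

```python
import re

# Illustrative patterns only; not our production detection logic.
VAGUE_CLAIM = re.compile(r"significant (improvement|difference)", re.IGNORECASE)
HAS_CI = re.compile(r"\b(95%\s*CI|confidence interval)\b", re.IGNORECASE)
HAS_EFFECT = re.compile(r"\b(odds ratio|hazard ratio|risk ratio|mean difference)\b",
                        re.IGNORECASE)


def cap_for_spin(abstract: str, s4_score: int) -> int:
    """Cap S4 when an abstract makes claims without numbers behind them."""
    vague = VAGUE_CLAIM.search(abstract)
    backed = HAS_CI.search(abstract) or HAS_EFFECT.search(abstract)
    if vague and not backed:
        return min(s4_score, 3)  # "significant improvement" with no effect size or CI
    return s4_score
```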
Frequently Asked Questions
Q: Do you prioritize certain medical specialties?
A: No. Our editorial board actively ensures diversity across specialties (radiology, pathology, cardiology, oncology, etc.). If all top 10 articles are in radiology, we'll select the highest-scoring articles from other domains to provide balanced coverage.
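A rough sketch of that rebalancing step, assuming articles arrive sorted by final score: the per-specialty cap of 2 is a hypothetical parameter, not our published rule.

```python
def balance_by_specialty(ranked: list[dict], k: int = 5, cap: int = 2) -> list[dict]:
    """Pick the top-k articles overall while capping any single specialty.

    ranked: articles sorted by final score, descending; each has a "specialty" key.
    """
    picked: list[dict] = []
    counts: dict[str, int] = {}
    for article in ranked:
        spec = article["specialty"]
        if counts.get(spec, 0) < cap:       # skip if this specialty is saturated
            picked.append(article)
            counts[spec] = counts.get(spec, 0) + 1
        if len(picked) == k:
            break
    return picked
```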
Q: What if a breakthrough study appears in a low-impact journal?
A: Journal quality is weighted alongside clinical scores. A study with exceptional clinical scores (S1-S4) can still be selected even from a lower-tier journal. The editorial board specifically watches for these cases to ensure we don't miss important findings.
Q: How do you handle conflicts of interest?
A: The editorial board reviews funding sources and author affiliations. Industry-funded studies aren't automatically excluded, but we look for independent validation, transparent reporting of limitations, and comparison to existing standards of care.
Q: Why include preprints if they're not peer-reviewed?
A: Preprints provide early access to breakthrough research that may take 6-12 months to appear in peer-reviewed journals. We apply a 44% scoring penalty and stricter editorial scrutiny. Only exceptional preprints make the final cut—typically 0-1 per newsletter.
Q: Can I suggest an article for review?
A: Yes! Email us at [email protected] with the PubMed ID or arXiv link. We'll run it through our scoring system and consider it for the next editorial board review.
Our Commitment to Transparency
We believe readers deserve to understand exactly how articles are selected. Unlike opaque "AI-curated" newsletters, we document:
- The specific criteria used for each score (S1-S4)
- The weighting formula for algorithmic rankings
- The role of the editorial board in final selection
- The safeguards against spin and low-quality abstracts
- Examples of what gets included vs. excluded
Our methodology is open, evidence-based, and designed to evolve based on reader feedback.
Trust the Process. Save Time.
Join thousands of clinicians and researchers who rely on AI Rounds for curated, evidence-based medical AI insights.
Subscribe to AI Rounds