
Are Tarot Readings Accurate? What 1,370 Real Draws Show

We analyzed 1,370 AI tarot readings from approximately 750 unique participants. The accuracy question is the wrong one — here's what the data can and can't tell us about whether tarot 'works.'

Tomasz Fiedoruk 9 min read n=1370

Most articles answering "are tarot readings accurate" do one of two things. Either they hedge with vague spiritual language, or they go full skeptic and dismiss the whole thing as dressed-up confirmation bias.

We have data. 1,370 actual readings. Approximately 750 unique participants (mostly anonymous guests, 69 registered). Anonymized, logged, open methodology.

Here's what 1,370 draws say about accuracy — and why "accurate" is the wrong question to start with.

What "accurate" even means in tarot

Three competing definitions. Most articles conflate them.

Predictive accuracy — "the cards predict the future." This is the strongest claim. It requires falsifiable predictions and longitudinal outcome tracking. Did the reading say "you'll meet someone in three months"? Did it happen? Almost no tarot research has this.

Psychological accuracy — "the cards reflect what you already feel." Weaker claim, more defensible. The reading helps you articulate something you couldn't put into words. This is what most defensive tarot proponents actually mean. Testing it requires self-report instruments — something like "did the reading describe your emotional state correctly?" with pre and post measurements.

Pattern accuracy — "the cards reveal patterns in how you ask questions." This is what we can actually test with data. Aggregate behavior across thousands of readings shows something. Whether it's the cards or the questioners, that's a different question.

We have data for #3. We can hint at #2. We have nothing useful on #1.

The Major:Minor ratio: 28.4 vs 71.6

A standard Rider-Waite tarot deck has 22 Major Arcana cards and 56 Minor Arcana cards. Total: 78. Pure mathematical expectation if drawn at random:

  • Major: 22/78 = 28.2%
  • Minor: 56/78 = 71.8%

What we observed across 1,370 readings:

  • Major: 28.4%
  • Minor: 71.6%

Deviation from expected: 0.2 percentage points.

That's not a finding. That's confirmation that the deck draws are statistically indistinguishable from a fair random number generator. Nothing mystical is happening. No card "wants" to come up. The AI's RNG is doing exactly what it's supposed to.
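That 0.2-point gap can be put in standard-error terms. A minimal sketch, assuming the headline 1,370 is the count of independent card draws (if each reading contributes three cards, n is larger and the conclusion only gets stronger):

```python
from math import sqrt

# Expected share of Major Arcana in a fair 78-card deck
p_major = 22 / 78            # ≈ 0.282
observed = 0.284             # observed Major share from the dataset
n_draws = 1370               # assumption: headline count = independent card draws

# Standard error of a proportion under the fair-deck null hypothesis
se = sqrt(p_major * (1 - p_major) / n_draws)
z = (observed - p_major) / se

print(f"expected {p_major:.3f}, observed {observed:.3f}, z = {z:.2f}")
```

The z-score comes out around 0.16. Anything under roughly 2 is indistinguishable from a fair deck at this sample size.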

This matters because if you read tarot blogs, you'll find variations of "I drew The Fool three times this week. That's a sign." For any single user, three Fool draws in a cluster of 5-10 readings is genuinely unlikely. But with roughly 750 participants, it's near-certain that somebody experiences exactly that pattern by chance alone. The pattern feels meaningful because confirmation bias filters out the dozens of draws where The Fool didn't appear.
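The intuition can be made exact with a binomial tail. A sketch, assuming three-card readings (so 10 readings is about 30 card draws) and roughly 750 participants:

```python
from math import comb

p = 1 / 78          # chance of The Fool on any single draw
n = 30              # assumption: ~10 three-card readings per user
users = 750         # approximate unique participants in the dataset

# P(a given user draws The Fool 3+ times in 30 draws)
p_user = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))

# P(at least one of 750 independent users sees that pattern)
p_any = 1 - (1 - p_user) ** users

print(f"one user: {p_user:.4f}, any user: {p_any:.3f}")
```

For one user the probability is under 1%; across the whole participant pool it is close to 99%. Rare for you, routine for somebody.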

Knight of Wands #1 (78 draws) — and why that's not surprising

Across our dataset, Knight of Wands appeared 78 times — more than any other card.

Expected per-card frequency if each of the 1,370 readings contributed one random card: 1370 / 78 ≈ 17.6. Observed count for the top card (Knight of Wands): 78. Observed-to-expected ratio: 4.4×.

Sounds dramatic. Statistically, it's weaker than it looks. With 78 cards and only 1,370 draws, some cards will land in the top tail by chance alone, and the gap between "top of the list" and "significantly non-random" is wide. By our estimate, a chi-square goodness-of-fit test, corrected for testing all 78 cards at once, needs roughly 6,000 readings before per-card claims hold up.

We're at 1,370. About a quarter of what's needed. Knight of Wands #1 is interesting. It's not yet statistically significant.
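One way to see how far the top card should stray from its expected count by chance alone is simulation. A sketch under a fair-deck null; `n_draws` is left as a parameter because the baseline depends on how many cards each reading actually contributes (1,370 for one card per reading, about 4,110 for three-card spreads):

```python
import random

def max_card_count(n_draws: int, n_cards: int = 78) -> int:
    """Deal n_draws uniformly random cards; return the most-drawn card's count."""
    counts = [0] * n_cards
    for _ in range(n_draws):
        counts[random.randrange(n_cards)] += 1
    return max(counts)

def top_card_distribution(n_draws: int, trials: int = 2000, seed: int = 0) -> list:
    """Null distribution of the top card's count under repeated fair dealing."""
    random.seed(seed)
    return [max_card_count(n_draws) for _ in range(trials)]
```

Comparing the observed top-card count against `top_card_distribution(n_draws)` is the honest version of the per-card test: if the observed count sits inside the bulk of the simulated maxima, "Knight of Wands is #1" is just the luck of the deal.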

Pinterest infographics will tell you The Lovers, The Sun, The Wheel of Fortune are the most common. Those are the cards that look good in aesthetic boards. Our data, with appropriate sample size caveats, suggests the actual top cards are darker — Knight of Wands, The Hanged Man, The Tower. The cards people draw when they're genuinely uncertain about something.

Where data ends, interpretation begins

Even if the cards are random — and they appear to be — the AI interpretation might still be useful. That's a separate claim.

Here's the trick: random card draws don't have to be predictive to be valuable. They function as forced articulation. Three cards, three positions, suddenly you have to say what your "past, present, future" question actually means. The cards become a Rorschach test you have to talk yourself through.

That's not nothing. That's just journaling with extra steps.

The interpretation quality varies by AI provider. We use four different models depending on user tier:

  • Free tier: Gemini 2.5 Flash (primary) or NVIDIA Llama 3.3 (fallback)
  • Seeker tier: GPT-5.4
  • Mystic tier: Claude Sonnet 4.6 (dual-oracle)

Across our dataset, we logged AI provider per reading. We can compare interpretation length, sentiment patterns, and user ratings between providers. Quick observation: paid-tier interpretations are roughly 40% longer and rate slightly higher in user feedback. Doesn't prove they're "more accurate." Could just mean longer text feels more substantive.
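The provider comparison itself is a straightforward group-by over the reading log. A sketch using hypothetical records; `provider`, `chars`, and `rating` are assumed field names, not the dataset's actual schema:

```python
from collections import defaultdict

# Hypothetical per-reading log rows; real field names and values may differ.
readings = [
    {"provider": "gemini-flash", "chars": 820,  "rating": 4.1},
    {"provider": "gpt",          "chars": 1150, "rating": 4.3},
    {"provider": "claude",       "chars": 1190, "rating": 4.4},
]

def mean_by_provider(rows, field):
    """Average a numeric field per provider."""
    sums, counts = defaultdict(float), defaultdict(int)
    for r in rows:
        sums[r["provider"]] += r[field]
        counts[r["provider"]] += 1
    return {p: sums[p] / counts[p] for p in sums}

print(mean_by_provider(readings, "chars"))
print(mean_by_provider(readings, "rating"))
```

The same helper works for interpretation length and user rating, which is all the "40% longer, rates slightly higher" observation requires.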

What we'd need to actually test "accuracy"

Real accuracy testing for tarot would require something nobody's done at scale:

  1. Pre-reading measurement. Capture the user's baseline — emotional state, the question, expected outcome.
  2. The reading. Standard procedure, three cards, AI interpretation.
  3. Immediate post-reading measurement. Did the interpretation match how you felt? Did it surface something you weren't aware of?
  4. 6-month follow-up. Did the question resolve? Did the reading help you make a decision? Did the prediction (if any) come true?
  5. Control group. Same questions, randomized "fake" interpretations. Did real readings outperform random text?

This study would cost five to six figures, take a year, and require human review of qualitative data. Nobody's running it. Tarot proponents don't want to. Skeptics don't think it's worth the money.
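The five steps above map onto a simple record schema. A sketch of what each participant's row would have to capture (every field name here is hypothetical; nobody is collecting this today, which is the point):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccuracyTrialRecord:
    # 1. Pre-reading baseline
    question: str
    baseline_mood: int                  # e.g. 1-10 self-report
    expected_outcome: str
    # 2. The reading itself
    cards: tuple                        # three (card, position) pairs
    interpretation: str
    # 5. Control arm: True if the interpretation is randomized filler text
    is_control: bool
    # 3. Immediate post-reading self-report
    felt_accurate: int                  # 1-10: "did this match how you feel?"
    surfaced_new: bool                  # did it surface something unrecognized?
    # 4. Six-month follow-up (Optional because attrition leaves gaps)
    resolved: Optional[bool] = None
    helped_decision: Optional[bool] = None
    prediction_came_true: Optional[bool] = None
```

Comparing `felt_accurate` and the follow-up fields between the real and `is_control` arms is the whole study; everything expensive about it is recruiting and retaining the humans.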

The closest thing we have is anecdotal. People keep coming back. 69 registered users in our dataset average 4.9 readings each. Among guests, most do 1-3 readings. A long tail returns 5, 10, 30 times without ever creating an account. The behavior pattern says "this is useful for something." The data can't tell us what.

What we can say

Three things, with appropriate caveats:

The cards are random. Major:Minor distribution matches mathematical expectation within 0.2 percentage points. If the AI's RNG has a bias, our sample size can't detect it.

The questions aren't random. 28.1% future-oriented. 13.4% love. 10.2% career. People reach for tarot when they're sitting with specific uncertainty. The cards force the question to have shape.

Some users return. Whether this is a measure of accuracy or a measure of habit-forming UX, we can't tell. Probably it's both. The product makes some users return for reasons that have very little to do with whether the cards "work."

Cite this research

Fiedoruk, T. (2026). Are Tarot Readings Accurate? What 1,370 Real Draws Show. aimag.me Research. Retrieved from https://aimag.me/research/are-tarot-readings-accurate

License: CC BY-SA 4.0. Methodology: /research/methodology. Dataset: /research/dataset.

Want to add your own data point?

The dataset grows. Try a free reading on aimag.me — your reading joins the next quarterly snapshot, anonymized. The math gets stronger with every draw.

Try a reading on aimag.me →
