
What People Actually Ask Tarot — 1,261 Question Analysis

We analyzed 1,261 tarot questions submitted to AI readings. 28.1% asked about the future. 13.4% about love. Only 1% about money. Here's what people genuinely turn to tarot for — and what they don't.

Tomasz Fiedoruk 9 min read n=1261

The cards are random. We've established that. The Major-to-Minor split lands at 28.4% / 71.6% — exactly what statistical chance predicts.
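That "chance" claim is simple arithmetic — a quick sketch (the 28.4% figure is the observed share from the dataset; 22 Major and 56 Minor Arcana in a standard 78-card deck):

```python
# Sanity check on the Major:Minor split. Uniform random draws from a
# standard deck should produce Majors about 22/78 of the time.
MAJORS, MINORS = 22, 56
expected = MAJORS / (MAJORS + MINORS)   # ~0.282
observed = 0.284                        # share reported in the dataset

print(f"expected {expected:.1%}, observed {observed:.1%}")
```

The 0.2-point gap is well within sampling noise at this dataset size.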

The questions aren't random.

Out of 1,370 readings in our dataset, 1,261 included question text the user typed in. We categorized those questions using keyword matching and manual spot-checks. The pattern is striking: people don't reach for tarot to ask about whatever happens to be on their mind. They reach for it when they're sitting with one of about six specific kinds of uncertainty.

Here's what 1,261 real questions show — and what that says about why tarot persists as a tool in 2026.

The categories

We sorted question text into seven buckets based on keyword presence:

Category              Count   % of typed questions   Sample keywords
Future / when           354                  28.1%   future, when, will
Love / relationships    169                  13.4%   love, relationship, partner
Career / work           129                  10.2%   work, job, career, business
Money / finance          13                   1.0%   money, finance, pay
Health                   12                   1.0%   health, illness, pain
Family                    9                   0.7%   family, mother, father
Uncategorized           575                  45.6%   (mixed or no clear topic)
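As a rough sketch, the bucketing can be this simple (keyword lists abbreviated from the table; the `categorize` helper is ours, and the rule that a multi-match question falls into "uncategorized" is our reading of how mixed questions end up unlabeled, not a confirmed implementation detail):

```python
import re

# Keyword buckets from the table above (lists abbreviated).
KEYWORDS = {
    "future": {"future", "when", "will"},
    "love":   {"love", "relationship", "partner"},
    "career": {"work", "job", "career", "business"},
    "money":  {"money", "finance", "pay"},
    "health": {"health", "illness", "pain"},
    "family": {"family", "mother", "father"},
}

def categorize(question: str) -> str:
    """Assign a single bucket; ambiguous questions fall through."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    hits = [cat for cat, kws in KEYWORDS.items() if words & kws]
    # Zero hits, or more than one hit, means no single clear topic.
    return hits[0] if len(hits) == 1 else "uncategorized"

print(categorize("Will the lawsuit go through?"))       # future
print(categorize("Will my relationship work out "
                 "and should I take this job?"))        # uncategorized
```

Note how the second example, despite containing obvious love and career keywords, lands in "uncategorized" precisely because it matches both.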

Two things to notice immediately.

The future category dominates. 28.1% of all questions explicitly ask "when will" or "will this happen." That's not "what does this card mean." That's not "tell me about myself." It's a request for predictive information — exactly the kind of claim tarot historically struggled to defend.

Money and health are nearly absent. Together: 25 questions out of 1,261. Two percent. People aren't using AI tarot for financial advice or medical worry. Whether that's because they don't trust the tool with those questions or because tarot just isn't culturally framed as a tool for them — we don't know. The data shows the absence; it doesn't explain it.

Why "uncategorized" is so big

Almost half of all questions (575, 45.6%) didn't fit our keyword filters. We sampled 100 of them by hand to see what's there.

Most fell into three rough patterns:

Mixed topics in one question. "Will my relationship work out and should I take this job?" hits both the love and career filters, so our single-label scheme can't place it and it falls into the ambiguous bucket. Real life is mixed. Tarot questions follow.

Specific situations without clear keywords. "What does she really think of me?" "Should I send the message?" "What is he hiding?" These are clearly relationship questions, but they don't say "love" or "relationship" — they describe a specific scenario. Our keyword filter missed them. Better NLP would catch most of them.

Open-ended self-exploration. "What do I need to know right now?" "What energy is around me?" "What should I focus on?" These are more meditative — the user isn't asking for prediction, they're asking for a frame. Tarot does this well. Random three-card spreads function as journaling prompts. The structure matters more than the content.

If we re-ran the categorization with better intent detection, our estimate is the breakdown shifts to roughly 35-40% future-oriented, 25-30% relationships, 15% career, 10% self-exploration, 10% other. The story stays the same: people use tarot for specific kinds of uncertainty, not for everything.

The future obsession

354 questions explicitly ask "when" or "will." Some examples (lightly anonymized, paraphrased):

  • "When will I meet someone who actually wants me?"
  • "Will the lawsuit go through?"
  • "Will I get pregnant this year?"
  • "When does this part of my life end?"
  • "Will he come back?"

Notice the structure. These are almost always closed questions about specific outcomes with strong emotional weight. Not "should I." Not "what if." Just "when" and "will."

Closed questions about future outcomes are the worst possible use case for tarot, statistically speaking. The cards are random. The AI interpretation is generated text. Neither has any access to whether something will or won't happen. But this is exactly what people ask.

Why? Two hypotheses we can't currently distinguish in our data:

Hypothesis 1: People want certainty more than truth. Sitting with "I don't know if she'll come back" is harder than getting a structured response that resolves the ambiguity, even if the response is essentially made up. The AI's confidence (LLMs are bad at saying "I don't know") meets the user's desire for closure. Both parties lose, but it feels productive.

Hypothesis 2: The question is the ritual. Asking "will he come back?" out loud, in writing, with the expectation of a response — that's already useful, regardless of what comes back. The cards force you to articulate what you actually want. The interpretation is secondary.

We suspect Hypothesis 2 is doing most of the work for returning users (the 69 registered users, who average 4.9 readings each). Hypothesis 1 probably explains why most one-time guests don't come back.

The 13.4% love finding

169 questions were explicitly about love or relationships. That's far lower than tarot stereotypes would predict.

Pinterest tarot infographics, social media tarot posts, the broader cultural framing — they treat tarot as primarily a love tool. "Will he text?" "Does she love me?" "Is this the one?" The visual culture around tarot is heavily relationship-focused.

Our data says love is real but not dominant. It comes in third, behind the catch-all "uncategorized" bucket and the generic "future" category. People use tarot for relationships, but they use it for at least as many other things.

This may be a sample artifact. AI tarot users might skew differently than crystal-shop tarot users or in-person reader clients. Without comparable data from those contexts, we can't tell.

The career data

129 career questions. 10.2%. Higher than expected, given how tarot is culturally framed.

Sample patterns:

  • "Should I take this job?"
  • "Will my business succeed?"
  • "Is this the right path?"
  • "When will I find work that matters?"

Career questions tend to be more reflective than love questions. Less "will he?" more "should I?" That's a healthier use of tarot — questions that benefit from forced articulation rather than questions that demand prediction.

Worth noting: career questions cluster disproportionately among our paid-tier users. Among the 69 registered users, those who upgraded to the Seeker or Mystic tiers asked about career roughly 18% of the time — almost double the 10.2% baseline. Hypothesis: people willing to pay for premium AI interpretation are using tarot more like a coaching tool than a divination tool. The sample is too small (n<10 per tier) to say anything definitive.

What this means for tarot tool design

If we're building a tool that serves what people actually use it for, the data suggests three things:

Don't optimize for prediction. People will ask "will" and "when" no matter what. The tool can't deliver. What it can deliver is a structured response that helps them sit with the question better. The interpretation should be honest about this.

Make articulation the feature. The most valuable thing about a tarot reading isn't the cards. It's the moment between typing the question and seeing the cards drawn — that's when you have to admit what you actually wanted to know. Good tools amplify this. Bad tools rush past it.

Self-exploration is underserved. The 10-15% of users asking open questions ("what should I focus on?") get treated the same way as users asking for specific predictions. They probably shouldn't be. A reading framed as a journaling prompt and a reading framed as an oracle should look different.

We haven't built any of this yet. The current AI tarot tool treats every question the same way. That's our backlog.

What we don't know

The categorization is rough. Our keyword filters miss most relationship questions phrased without the word "relationship." Our NLP doesn't catch sarcasm, hedge phrases, or compound questions. The "uncategorized" 45.6% is probably the most interesting data — and we haven't done it justice.

For Q3 2026, we plan to re-run categorization with semantic embedding similarity rather than keyword matching. Better signal. We'll publish the updated breakdown.
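The direction can be sketched with a pure-stdlib stand-in: cosine similarity between bag-of-words vectors, where a real pass would swap these vectors for sentence embeddings from a pretrained model. The prototype questions and the `nearest_intent` helper are illustrative assumptions, not the production plan:

```python
import math
import re
from collections import Counter

# Made-up prototype text per category; a real system would embed
# hand-labeled example questions instead.
PROTOTYPES = {
    "love":   "does she love me will he come back relationship partner feelings",
    "career": "should i take this job career work business path",
    "future": "when will this happen what comes next future outcome",
}

def vec(text: str) -> Counter:
    """Bag-of-words vector (stand-in for a sentence embedding)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest_intent(question: str) -> str:
    q = vec(question)
    return max(PROTOTYPES, key=lambda cat: cosine(q, vec(PROTOTYPES[cat])))

# No category keyword present, yet it still lands nearest "love".
print(nearest_intent("What is he hiding from me"))   # love
```

This is exactly the class of question ("What is he hiding?") that the current keyword filter drops into "uncategorized".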

We also don't know:

  • Whether question topic predicts retention. Do future-askers come back less than self-explorers? Hypothesis: yes. Need to check.
  • Whether question topic varies by language. Polish users might ask different things than English users. We have 49 PL readings — not enough to compare.
  • Whether AI provider affects the question types asked. Probably not (users don't choose providers; the system assigns them). But worth confirming.

Cite this research

Fiedoruk, T. (2026). What People Actually Ask Tarot — 1,261 Question Analysis. aimag.me Research. Retrieved from https://aimag.me/research/tarot-question-patterns

License: CC BY-SA 4.0. Methodology: /research/methodology. Dataset: /research/dataset.

Add your question to the next snapshot

Every reading on aimag.me — anonymous or registered — adds one anonymized data point to the next quarterly analysis.

Try a reading on aimag.me →

The categorization gets better as the dataset grows. Right now we're working with rough patterns. By 5,000 readings we should have something statistically defensible.
