
Does Using AI Tools Make Students Better at Using AI in School? Not Exactly.

July 29, 2025 · Preprint

Artificial intelligence is rapidly changing the landscape of education. From AI tutors to a growing range of other AI-driven educational tools, these systems promise a more personalized and efficient educational experience. But as we integrate them into classrooms, a crucial question emerges: What determines whether students have a good experience with them?

Is it enough for students to have played around with tools like ChatGPT, or is something deeper required?

Our recent research sought to answer this question. We studied the experiences of more than 300 undergraduate students as they used Socratic Mind, an AI-powered tool we developed that challenges students to explain, justify, and even defend their answers to showcase their understanding, much like a real-world oral exam.

We wanted to know: What's more important for a positive learning experience with an AI tool? Is it a student's prior exposure to AI (how often they've used it), or their AI literacy (their actual understanding of AI concepts, their skills in using it, and their confidence in doing so)?

The Big Reveal: It's Literacy, Not Just Exposure

Our findings were clear. How often a student had used AI tools in the past had no significant impact on how effective, engaging, or satisfying they found Socratic Mind.

What mattered was AI literacy.

The Data Behind Our Findings

Our statistical analysis compares students with high and low AI literacy, and separately students with high and low AI exposure:

Table 1: Comparison of Satisfaction, Usability, Effectiveness, and Engagement by AI Literacy Level

Variable       AI Literacy  N    M (1-5)  SD    t (df)          p (2-tailed)  Cohen's d
Satisfaction   Low          143  4.22     0.74  -2.87 (288.44)  .004          -0.33
               High         168  4.46     0.67
Usability      Low          143  4.53     0.65  -3.98 (220.80)  <.001         -0.47
               High         168  4.78     0.38
Effectiveness  Low          143  4.49     0.62  -4.05 (245.68)  <.001         -0.47
               High         168  4.74     0.43
Engagement     Low          143  4.11     0.76  -5.83 (264.34)  <.001         -0.68
               High         168  4.57     0.59

Note: M = Mean score based on learner-reported ratings on a 1-5 scale (1 = strongly disagree, 5 = strongly agree).

Table 2: Comparison of Satisfaction, Usability, Effectiveness, and Engagement by AI Exposure Level

Variable       AI Exposure  N    M (1-5)  SD    t (df)          p (2-tailed)  Cohen's d
Satisfaction   Low          172  4.32     0.74  -0.91 (287.73)  .362          -0.11
               High         127  4.39     0.65
Usability      Low          172  4.66     0.54  -1.09 (295.57)  .278          -0.12
               High         127  4.72     0.43
Effectiveness  Low          172  4.63     0.55  -0.47 (295.53)  .636          -0.05
               High         127  4.66     0.44
Engagement     Low          172  4.33     0.73  -1.12 (284.50)  .262          -0.13
               High         127  4.42     0.66

Note: M = Mean score based on learner-reported ratings on a 1-5 scale (1 = strongly disagree, 5 = strongly agree).

Note: Low exposure group = Exposure score < 2.49. High exposure group = Exposure score ≥ 2.49.
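For readers curious how comparisons like those in Tables 1 and 2 are typically computed, here is a minimal sketch. The fractional degrees of freedom reported above suggest Welch's (unequal-variance) t-test; the file name and column names (`exposure_score`, `satisfaction`, and so on) are hypothetical stand-ins, not the study's actual variable names.

```python
import pandas as pd
from scipy import stats

# Hypothetical survey data; file and column names are illustrative only.
df = pd.read_csv("survey_responses.csv")

# Split respondents at the reported cutoff (exposure score >= 2.49 = "High").
df["exposure_group"] = (df["exposure_score"] >= 2.49).map({True: "High", False: "Low"})

for outcome in ["satisfaction", "usability", "effectiveness", "engagement"]:
    low = df.loc[df["exposure_group"] == "Low", outcome]
    high = df.loc[df["exposure_group"] == "High", outcome]
    # Welch's t-test (equal_var=False) matches the fractional df in the tables.
    t, p = stats.ttest_ind(low, high, equal_var=False)
    print(f"{outcome}: t = {t:.2f}, p = {p:.3f}")
```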

Key Insights from the Data

  • AI Literacy matters: Table 1 shows significant differences (p < .05) across all measures when comparing high vs. low AI literacy students.
  • AI Exposure doesn't: Table 2 reveals no significant differences (all p-values > .05) between high and low AI exposure groups.
  • Effect sizes tell the story: AI literacy shows meaningful effect sizes (Cohen's d = -0.33 to -0.68), while AI exposure's effects are negligible (Cohen's d = -0.05 to -0.13); a small sketch of this calculation follows the list.
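The Cohen's d values in the tables follow the standard pooled-standard-deviation formula (negative here because the low-literacy group is subtracted from the high-literacy group). A small helper, again with inputs assumed rather than taken from the study's data:

```python
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Cohen's d with a pooled standard deviation.

    Negative when group_a's mean is below group_b's mean.
    """
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = (
        (n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1)
    ) / (n_a + n_b - 2)
    return (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)
```

As a sanity check, plugging in the satisfaction row of Table 1 gives a pooled SD of about 0.70, so d ≈ (4.22 − 4.46) / 0.70 ≈ −0.33, matching the reported value.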

Students with a higher level of AI literacy—those who had a better conceptual grasp of AI, felt more confident in their ability to use it, and were more skilled in applying it—consistently reported a better experience. They found the tool:

  • More Usable: Easier and more intuitive to interact with.
  • More Engaging: Better at holding their attention and motivating them to learn.
  • More Satisfying: Leaving a more positive overall impression of the interaction.

Connecting the Dots: How Literacy Improves the Learning Experience

Interestingly, the link between AI literacy and learning effectiveness wasn't a direct one. Our findings suggest that AI literacy works indirectly, by improving key aspects of the user experience. Here's how it connects (sketched in code after the list):

  • Higher AI Literacy made the tool feel more usable and made students more satisfied with it.
  • This improved sense of usability and satisfaction, in turn, made students feel more engaged and believe the tool was more effective for their learning.
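The preprint's exact modeling approach isn't detailed in this post, but an indirect pathway like this is often illustrated with a product-of-coefficients mediation sketch. The version below uses statsmodels with a single mediator and hypothetical column names; it is an illustration of the idea, not the authors' actual analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data; variable names are illustrative stand-ins.
df = pd.read_csv("survey_responses.csv")

# Path a: does AI literacy predict the mediator (perceived usability)?
path_a = smf.ols("usability ~ ai_literacy", data=df).fit()

# Path b (and direct effect c'): does usability predict perceived
# effectiveness once literacy is controlled for?
path_b = smf.ols("effectiveness ~ usability + ai_literacy", data=df).fit()

# Indirect effect = a * b. A direct effect (c') near zero alongside a
# nonzero a * b is the signature of an indirect relationship.
a = path_a.params["ai_literacy"]
b = path_b.params["usability"]
print(f"indirect (a*b) = {a * b:.3f}, direct (c') = {path_b.params['ai_literacy']:.3f}")
```

In practice the indirect effect would be tested with bootstrapped confidence intervals rather than read off a single point estimate.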

Think of it like this: Just because you've been in a car before (exposure) doesn't mean you can drive it well. But if you understand how traffic works—like when to yield, what signs mean, and how to anticipate other drivers—you feel more confident, enjoy the drive more, and are more likely to reach your destination successfully.

It’s not enough for a tool to be powerful; it must also be perceived as easy to use and supportive. When students feel competent with AI, they are better able to appreciate and benefit from well-designed AI educational tools.

What Does This Mean for Education?

Our research offers two key takeaways for educators and AI developers:

For Educators: Teach AI Literacy, Don't Assume It

We can't assume that because students use AI in their daily lives, they are prepared to use it effectively for learning. We need to actively build AI literacy into the curriculum. This means teaching students the core concepts of AI, giving them skills to use it critically, and building their confidence.

For AI Developers: Design for Everyone

To create AI tools that truly work, we need to focus on the user experience. This means building systems that are intuitive, provide clear feedback, and support students, especially those who may start with lower AI literacy. When a tool is easy to use and makes a student feel supported, it unlocks their ability to learn effectively with it.

Ultimately, the success of AI in education hinges on more than just the technology itself. It's about empowering students to use these tools with confidence and understanding, while ensuring the tools are designed with real human needs in mind. When both sides align—literacy and design—we unlock AI's full potential to enhance learning for everyone.


Authors

Meryem Yilmaz Soylu, Jeonghyun Lee, Jui-Tse Hung, Christopher Zhang Cui, David A. Joyner

Acknowledgements

We thank all the students and faculty who participated in this research study and provided valuable feedback that made this work possible.

Note

Please note that this paper was first uploaded as a preprint to arXiv in July 2025 and has not yet undergone peer review. Therefore, the findings and conclusions should be considered preliminary.