Artificial intelligence is rapidly changing the landscape of education. From AI tutors to a growing range of other AI-driven educational tools, these systems promise a more personalized and efficient learning experience. But as we integrate these sophisticated systems, a crucial question emerges: What determines whether students have a good experience with them?
Is it enough for students to have played around with tools like ChatGPT, or is something deeper required?
Our recent research sought to answer this question. We studied the experiences of more than 300 undergraduate students as they used Socratic Mind, an AI-powered tool we developed that challenges students to explain, justify, and even defend their answers to demonstrate their understanding, much like a real-world oral exam.
We wanted to know: What's more important for a positive learning experience with an AI tool? Is it a student's prior exposure to AI (how often they've used it), or their AI literacy (their actual understanding of AI concepts, their skills in using it, and their confidence in doing so)?
Our findings were clear. How often a student had used AI tools in the past had no significant impact on how effective, usable, engaging, or satisfying they found Socratic Mind.
What mattered was AI literacy.
Our statistical analysis compared students with high and low AI literacy, as well as those with high and low AI exposure:
Comparison of Satisfaction, Usability, Effectiveness, and Engagement by AI Literacy Level.
| Variable | AI Literacy Level | N | M (1-5 scale) | SD | t(df) | p (2-tailed) | Cohen's d |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Satisfaction | Low | 143 | 4.22 | 0.74 | -2.87 (288.44) | .004 | -0.33 |
|  | High | 168 | 4.46 | 0.67 |  |  |  |
| Usability | Low | 143 | 4.53 | 0.65 | -3.98 (220.80) | <.001 | -0.47 |
|  | High | 168 | 4.78 | 0.38 |  |  |  |
| Effectiveness | Low | 143 | 4.49 | 0.62 | -4.05 (245.68) | <.001 | -0.47 |
|  | High | 168 | 4.74 | 0.43 |  |  |  |
| Engagement | Low | 143 | 4.11 | 0.76 | -5.83 (264.34) | <.001 | -0.68 |
|  | High | 168 | 4.57 | 0.59 |  |  |  |
Note: M = Mean score based on learner-reported ratings on a 1-5 scale (1 = strongly disagree, 5 = strongly agree).
Comparison of Satisfaction, Usability, Effectiveness, and Engagement by AI Exposure Level.

| Variable | Exposure Level | N | M (1-5 scale) | SD | t(df) | p (2-tailed) | Cohen's d |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Satisfaction | Low | 172 | 4.32 | 0.74 | -0.91 (287.73) | .362 | -0.11 |
|  | High | 127 | 4.39 | 0.65 |  |  |  |
| Usability | Low | 172 | 4.66 | 0.54 | -1.09 (295.57) | .278 | -0.12 |
|  | High | 127 | 4.72 | 0.43 |  |  |  |
| Effectiveness | Low | 172 | 4.63 | 0.55 | -0.47 (295.53) | .636 | -0.05 |
|  | High | 127 | 4.66 | 0.44 |  |  |  |
| Engagement | Low | 172 | 4.33 | 0.73 | -1.12 (284.50) | .262 | -0.13 |
|  | High | 127 | 4.42 | 0.66 |  |  |  |
Note: M = Mean score based on learner-reported ratings on a 1-5 scale (1 = strongly disagree, 5 = strongly agree).
Note: Low exposure group = Exposure score < 2.49. High exposure group = Exposure score ≥ 2.49.
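
For readers who want to run the same kind of comparison on their own survey data, here is a minimal sketch in Python. The non-integer degrees of freedom in the tables (e.g., t(288.44)) indicate Welch's t-test, which does not assume equal group variances. The ratings below are synthetic stand-ins for 1-5 survey scores, not our study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for learner ratings on a 1-5 scale
# (illustration only; not the study data).
low_lit = np.clip(rng.normal(4.2, 0.7, size=143), 1, 5)
high_lit = np.clip(rng.normal(4.5, 0.6, size=168), 1, 5)

# Welch's t-test: equal_var=False yields the non-integer
# degrees of freedom seen in the tables (e.g., t(288.44)).
t_stat, p_value = stats.ttest_ind(low_lit, high_lit, equal_var=False)

# Cohen's d using a pooled standard deviation.
n1, n2 = len(low_lit), len(high_lit)
pooled_sd = np.sqrt(((n1 - 1) * low_lit.var(ddof=1) +
                     (n2 - 1) * high_lit.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (low_lit.mean() - high_lit.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```

By the usual benchmarks (roughly 0.2 = small, 0.5 = medium, 0.8 = large), the literacy effects above range from small to medium-to-large, while the exposure effects (all |d| ≤ 0.13) are negligible.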
Students with a higher level of AI literacy—those who had a better conceptual grasp of AI, felt more confident in their ability to use it, and were more skilled in applying it—consistently reported a better experience. They found the tool more satisfying, easier to use, more effective for learning, and more engaging.
Interestingly, the link between AI literacy and learning effectiveness wasn't a clear straight line. Our findings show that AI literacy works indirectly by improving key aspects of the user experience. Here's how it connects: higher AI literacy led to higher perceived usability, engagement, and satisfaction, and those improvements in the experience are what carried the gains in perceived learning effectiveness.
Think of it like this: Just because you've been in a car before (exposure) doesn't mean you can drive it well. But if you understand how traffic works—like when to yield, what signs mean, and how to anticipate other drivers—you feel more confident, enjoy the drive more, and are more likely to reach your destination successfully.
It’s not enough for a tool to be powerful; it must also be perceived as easy to use and supportive. When students feel competent with AI, they are better able to appreciate and benefit from well-designed AI educational tools.
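
In statistical terms, this is a mediation pattern: AI literacy predicts the user-experience measures, which in turn predict perceived learning effectiveness. The sketch below shows one common way to estimate such an indirect effect, regression with a percentile-bootstrap confidence interval, again on synthetic data; the variable names and coefficients are illustrative assumptions, not the exact model or estimates from our paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Synthetic data mimicking the hypothesized pattern
# (illustrative only): literacy -> engagement -> effectiveness.
literacy = rng.normal(size=n)
engagement = 0.5 * literacy + rng.normal(size=n)
effectiveness = 0.6 * engagement + 0.05 * literacy + rng.normal(size=n)

def coefs(y, *xs):
    """OLS coefficients of y on an intercept plus the given predictors."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def indirect_effect(x, m, y):
    a = coefs(m, x)[1]       # path a: x -> mediator
    b = coefs(y, x, m)[2]    # path b: mediator -> y, controlling for x
    return a * b

point = indirect_effect(literacy, engagement, effectiveness)

# Percentile bootstrap for a 95% confidence interval.
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)
    boot[i] = indirect_effect(literacy[idx], engagement[idx],
                              effectiveness[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

If the bootstrap interval excludes zero, the indirect path is credited with carrying the effect, which is the sense in which literacy "works indirectly" above.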
Our research offers two key takeaways for educators and AI developers:
1. We can't assume that because students use AI in their daily lives, they are prepared to use it effectively for learning. We need to actively build AI literacy into the curriculum. This means teaching students the core concepts of AI, giving them skills to use it critically, and building their confidence.

2. To create AI tools that truly work, we need to focus on the user experience. This means building systems that are intuitive, provide clear feedback, and support students, especially those who may start with lower AI literacy. When a tool is easy to use and makes a student feel supported, it unlocks their ability to learn effectively with it.
Ultimately, the success of AI in education hinges on more than just the technology itself. It's about empowering students to use these tools with confidence and understanding, while ensuring the tools are designed with real human needs in mind. When both sides align—literacy and design—we unlock AI's full potential to enhance learning for everyone.
Meryem Yilmaz Soylu, Jeonghyun Lee, Jui-Tse Hung, Christopher Zhang Cui, David A. Joyner
We thank all the students and faculty who participated in this research study and provided valuable feedback that made this work possible.
Please note that this paper was first uploaded as a preprint to arXiv in July 2025 and has not yet undergone peer review. Therefore, the findings and conclusions should be considered preliminary.