When universities began integrating generative artificial intelligence into their teaching systems, the hope was that AI would finally resolve one of higher education’s oldest and most persistent tensions: students want more feedback, and teachers want more time. At first glance, generative AI seemed like the answer. Tools powered by large language models could produce fast, personalised, and stylistically polished comments on essays, lab reports, and even creative projects.
Yet new research led by Thomas Corbin of Deakin University reveals that something crucial is missing from this seemingly revolutionary shift. The peer-reviewed study, titled “Understanding the Place and Value of GenAI Feedback: A Recognition-Based Framework” and published in Assessment & Evaluation in Higher Education, argues that the real value of feedback in universities is not simply informational. Instead, it depends on the deeply human experience of recognition. The research, conducted at Deakin University and Monash University, indicates that, despite their increasing sophistication, generative AI systems remain outside the relational fabric that makes feedback effective.
Corbin and his co-authors, Joanna Tai and Gene Flenady, build on established learning theory as well as philosophical work on recognition, particularly that of Axel Honneth and Robert Brandom. Their work provides a new conceptual tool for understanding AI in education by distinguishing between recognitive and extra-recognitive feedback. This distinction shifts the debate away from whether AI is good or bad for learning and instead asks how AI should be positioned within the wider ecology of feedback in universities.
Why feedback is more than information
For decades, higher education has treated feedback as a commodity. The quality of feedback was assumed to depend on the clarity of comments, the correctness of suggestions, or the precision of assessment criteria. AI fits neatly within this narrow understanding. Large language models excel at generating text that sounds authoritative, detailed, and tailored to the student’s submission.
The study reminds readers that feedback is also a process. Feedback only becomes meaningful when the student interprets, trusts, and uses the information to guide future work. That process depends on a relationship. Teachers and students engage in what the authors describe as a mutually recognitive exchange. Both parties expose themselves to judgment. Students reveal their intellectual vulnerabilities through their work. Teachers expose their scholarly authority to scrutiny when providing an evaluation.
This reciprocal vulnerability is central to what Corbin and colleagues describe as recognitive feedback. When students say they value comments from real teachers, the explanation is not simply about expertise. It is also about being seen as a developing scholar. Recognition, in this context, is not praise. It is the acknowledgment of effort, identity, and academic agency. These themes are widely studied in the feedback literacy and learning science literatures, yet they are usually left implicit. The new study makes them explicit and places them at the centre of the discussion around generative AI in education.
The philosophical roots of recognition
To explain why recognition matters for learning, the researchers draw on the work of Honneth and Brandom, two influential figures in contemporary social philosophy. Both argue that identity is not an internal possession, but rather something shaped and stabilised through interactions with others.
According to Honneth, individuals develop self-esteem when their contributions are recognised within a community. Students, therefore, need educators not only for guidance but also for confirmation of their intellectual capacities. Where recognition is absent, students may experience misrecognition, which can harm self-confidence and hinder engagement.
Brandom adds a complementary idea. He argues that social agents operate within networks of normative commitments. Teachers hold students accountable to disciplinary standards, while students hold teachers accountable to intellectual honesty, fairness, and expertise. This reciprocal accountability requires trust. Trust, in this sense, becomes the ethical foundation of feedback.
Seen through these lenses, AI cannot act as a recognitive partner. It cannot be vulnerable, nor can it recognise the vulnerability of others. It cannot participate in the emotional labour involved in giving and receiving critique. AI can imitate the form of recognition, but imitation is not the same as recognition. This distinction is central to the framework developed in the study.
When AI becomes useful
The research does not reject the use of AI in education. Instead, it outlines where AI can add value through what the authors call extra-recognitive feedback. This type of feedback does not require relational grounding. It includes surface-level guidance such as grammar corrections, structural suggestions, or referencing checks. AI excels at providing timely, repetitive support of this kind.
Students also report that AI provides a low-pressure environment for testing ideas. The authors describe this as a pedagogical sandbox. AI can help learners practise before they expose their thinking to teachers or peers. For students anxious about judgment or early-stage mistakes, the sandbox can build confidence, support motivation, and scaffold more meaningful engagement in class.
The key insight is that extra-recognitive feedback can complement recognitive feedback but cannot replace it. Universities therefore need strategic integration rather than wholesale substitution. AI should handle tasks where a relational connection is not essential, freeing academics to invest more deeply in the human-centred feedback that shapes identity, trust, and intellectual growth.
The risks of misunderstanding AI feedback
The study highlights the dangers of treating AI as a direct replacement for human evaluation. If students receive only extra-recognitive feedback, they may come to see their academic development as a purely technical process rather than a relational one. They may miss the affirmation that sustains scholarly identity. Worse, they may find themselves working harder while feeling less seen.
Teachers may also experience misrecognition. If students bypass human feedback in favour of AI, educators may feel their expertise is undervalued. This can erode morale and weaken the collective culture of learning. Recognition, therefore, flows in both directions. Teachers depend on students to acknowledge the meaning and impact of their guidance.
Another risk is that AI feedback systems may inadvertently reinforce surface learning. Because generative AI tends to produce agreeable responses that avoid challenge, students may not encounter the kind of critical friction that triggers deeper thinking. That friction often arises when trusted teachers push students to defend their arguments or reconsider assumptions.
For these reasons, Corbin and colleagues argue that scaling up AI feedback without accounting for recognition could lead to a superficial sense of improvement in teaching quality while undermining the deeper interpersonal foundations of learning.
A framework for the future of AI in education
The recognitive and extra-recognitive framework proposed in this research gives universities a practical tool for integrating AI responsibly. It encourages leaders to ask not only whether AI can generate feedback but also what kind of feedback it should deliver and what kind should remain human.
In policy terms, the framework suggests that academic roles may become more, not less, relational in the age of AI. While AI handles repetitive or technical comments, teachers can concentrate on mentoring, dialogue, and epistemic guidance. This approach aligns with broader trends in learning science that centre on feedback literacy, trust formation, and student agency.
The framework also offers a lens for evaluating peer feedback practices. Students often struggle to accept critique from classmates because they do not fully recognise their peers as authoritative interlocutors. Establishing mutual recognition among students could improve the effectiveness of peer review and collaborative learning.
Universities now face a strategic question. Will AI be deployed as a tool that strengthens relationships or as a shortcut that bypasses them? The recognitive framework provides the conceptual clarity needed to answer that question with care. The future of AI in education will depend not only on the power of algorithms but also on the enduring significance of recognition, trust, and human connection in the learning process.
Reference
Corbin, T., Tai, J., & Flenady, G. (2025). Understanding the place and value of GenAI feedback: A recognition-based framework. Assessment & Evaluation in Higher Education, 50(5), 718–731. https://doi.org/10.1080/02602938.2025.2459641
