New publication: Untangling Critical Interaction with AI

I’ve had a longstanding interest in exploring how students engage critically with automated feedback and develop their AI literacy. In our LAK22 paper, we argued why it is so important to develop these skills in learners. In the age of generative AI (GenAI), the need for learners to engage critically with AI is greater than ever.

Our upcoming CHI publication investigates a fundamental question: why do students engage with GenAI for their writing tasks, and how can they navigate this interaction critically? In the paper, we define in concrete terms and stages how criticality can manifest when students write with ChatGPT support. We draw on theory and on examples from empirical data (which remain remarkably scarce in the literature) to understand and expand the notion of critical interaction with AI.

A pre-print version is available for download on arXiv [PDF]. Full citation below:

Antonette Shibani, Simon Knight, Kirsty Kitto, Ajanie Karunanayake, Simon Buckingham Shum (2024). Untangling Critical Interaction with AI in Students’ Written Assessment. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI ’24), May 11-16, 2024, Honolulu, HI, USA. Pre-print: 

A short video presentation gives the gist of the paper [Follow along with the transcript]

Tamil Co-Writer: Inclusive AI for writing support

Next week, I’m presenting my work at the First Workshop on Generative AI for Learning Analytics (GenAI-LA) at the 14th International Conference on Learning Analytics and Knowledge (LAK 2024):

Antonette Shibani, Faerie Mattins, Srivarshan Selvaraj, Ratnavel Rajalakshmi & Gnana Bharathy (2024) Tamil Co-Writer: Towards inclusive use of generative AI for writing support. In Joint Proceedings of LAK 2024 Workshops, co-located with 14th International Conference on Learning Analytics and Knowledge (LAK 2024), Kyoto, Japan, March 18-22, 2024.

With colleagues in India, we developed Tamil Co-Writer, a GenAI-supported writing tool that offers AI suggestions for writing in Tamil, a regional Indian language (and my first language). The majority of AI-based writing assistants are created for English-language users and do not address the needs of linguistically diverse groups of learners. Catering to languages typically under-represented in NLP is important for the inclusive use of AI for learner support in the generative AI era. Combined with analytics on AI usage, the tool can offer writers improved productivity and a chance to reflect on their optimal and sub-optimal collaborations with AI.

The tool combines the following elements:

  1. An interactive AI writing environment that offers several input modes to write in Tamil
  2. Analytics of the writer’s AI interactions in the session for reflection (see the post on CoAuthorViz for details, and the related paper here)
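As a toy illustration of the first element above, one simple input mode is transliteration: mapping romanised keystrokes to Tamil script by greedy longest-match lookup. The mapping table and function below are purely hypothetical; the post does not describe Tamil Co-Writer's actual input methods at this level of detail.

```python
# A toy sketch of a romanised-to-Tamil transliteration input mode.
# The mapping and logic are illustrative only; Tamil Co-Writer's real
# input methods are not specified here.
ROMAN_TO_TAMIL = {
    "ka": "க", "ma": "ம", "la": "ல", "va": "வ", "na": "ந",
}

def transliterate(romanised: str) -> str:
    """Greedy longest-match transliteration over the toy mapping table."""
    out, i = [], 0
    while i < len(romanised):
        # Try the longest keys first (all keys here are two characters).
        for length in (2, 1):
            chunk = romanised[i:i + length]
            if chunk in ROMAN_TO_TAMIL:
                out.append(ROMAN_TO_TAMIL[chunk])
                i += length
                break
        else:
            out.append(romanised[i])  # pass through unmapped characters
            i += 1
    return "".join(out)
```

A real transliteration scheme must handle vowel signs, conjuncts, and ambiguity, which this sketch deliberately ignores.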

A short video summarising the key insights from the paper is below:

Understanding human-AI collaboration in writing (CoAuthorViz)

Generative AI (GenAI) has captured global attention since ChatGPT was publicly released in November 2022. The remarkable capabilities of AI have sparked a myriad of discussions around its vast potential, ethical considerations, and transformative impact across diverse sectors, including education. In particular, how humans can learn to work with AI to augment their intelligence rather than undermine it greatly interests many communities.

My own interest in writing research led me to explore human-AI partnerships for writing. We are not far from a world where generative AI co-pilots are the norm rather than the exception in everyday writing. A ubiquitous tool like Microsoft Word, which many use as their preferred platform for digital writing, may soon come with AI support as an essential feature for improved productivity (and early research shows how people are already imagining this). But at what cost?

In our recent full paper, we explored an analytic approach to study writers’ support-seeking behaviour and dependence on AI in a co-writing environment:

Antonette Shibani, Ratnavel Rajalakshmi, Srivarshan Selvaraj, Faerie Mattins, Simon Knight (2023). Visual representation of co-authorship with GPT-3: Studying human-machine interaction for effective writing. In M. Feng, T. Käser, and P. Talukdar, editors, Proceedings of the 16th International Conference on Educational Data Mining, pages 183–193, Bengaluru, India, July 2023. International Educational Data Mining Society [PDF].

Using keystroke data from CoAuthor, an interactive writing environment powered by GPT-3, we developed CoAuthorViz (see example figure below) to characterize writer interaction with AI feedback. CoAuthorViz captures four key constructs: the writer incorporating GPT-3-suggested text as-is (GPT-3 suggestion selection), the writer requesting but not incorporating a suggestion (empty GPT-3 call), the writer modifying the suggested text (GPT-3 suggestion modification), and the writer’s own writing (user text addition). We demonstrated how such visualizations (and associated metrics) help characterise varied levels of AI interaction in writing, from low to high dependency on AI.
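To make the four constructs concrete, here is a minimal sketch (not the authors' implementation) of how such events might be tallied and turned into a rough dependency metric. It assumes a simplified event log where each entry is an (event_type, text) pair; the real CoAuthor keystroke data is far richer than this.

```python
from collections import Counter

def classify_events(events):
    """Tally CoAuthorViz-style constructs from a simplified event stream.

    Hypothetical event types assumed here:
      'select' - writer accepted a GPT-3 suggestion as-is
      'open'   - writer requested suggestions but took none (empty call)
      'modify' - writer edited an accepted suggestion
      'type'   - writer added their own text
    """
    label = {
        "select": "suggestion_selection",
        "open": "empty_call",
        "modify": "suggestion_modification",
        "type": "user_text_addition",
    }
    return Counter(label[etype] for etype, _text in events)

def ai_dependency(counts):
    """Share of writing actions that drew on AI suggestions: a crude
    low-to-high dependency metric in the spirit of the paper."""
    ai = counts["suggestion_selection"] + counts["suggestion_modification"]
    total = ai + counts["user_text_addition"]
    return ai / total if total else 0.0
```

For example, a session with two typed segments, one empty call, one accepted suggestion, and one modified suggestion yields a dependency of 0.5 (two of four writing actions involved AI).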

Figure: CoAuthorViz legend and three samples of AI-assisted writing (squares denote writer-written text, and triangles denote AI-suggested text)

Full details of the work can be found in the resources below:

Several complex questions are yet to be answered:

  • Is autonomy (self-writing, without AI support) preferable to better quality writing (with AI support)?
  • As AI becomes embedded into our everyday writing, do we lose our own writing skills? And if so, is that of concern, or will writing become one of those outdated skills in the future that AI can do much better than humans?
  • Do we lose our ‘uniquely human’ attributes if we continue to write with AI?
  • What is an acceptable use of AI in writing that still lets you think? (We know by writing we think more clearly; would an AI tool providing the first draft restrict our thinking?)
  • What knowledge and skills do writers need to use AI tools appropriately?

Edit: If you want to delve into the topic further, here’s an intriguing article that imagines how writing might look in the future:

Questioning Learning Analytics – Cultivating critical engagement (LAK’22)

Gist of LAK 22 paper

Our full research paper has been nominated for Best Paper at the prestigious Learning Analytics and Knowledge (LAK) Conference:

Antonette Shibani, Simon Knight and Simon Buckingham Shum (2022, Forthcoming). Questioning learning analytics? Cultivating critical engagement as student automated feedback literacy. [BEST RESEARCH PAPER NOMINEE] The 12th International Learning Analytics & Knowledge Conference (LAK ’22).

Here’s the gist of what the paper talks about:

  • Learning Analytics (LA) still requires substantive evidence of its impact on educational practice. A human-centered approach can bring about better uptake of LA.
  • We need critical engagement and interaction with LA to help tackle issues ranging from black-boxing, imperfect analytics, and the lack of explainability in algorithms and artificial intelligence systems, to the skills and capabilities LA users need when dealing with such advanced technologies.
  • Students must be able to, and should be encouraged to, question analytics in student-facing LA systems, as critical engagement is a metacognitive capacity that both demonstrates and builds student understanding.
  • This puts power back in the hands of users and gives them agency when using LA.
  • Critical engagement with LA should be facilitated through careful design for learning; we provide an example case with automated writing feedback – see the paper for details of what the design involved.
  • We present empirical data and findings from students’ annotations of automated feedback from AcaWriter, through which we aim to develop their automated feedback literacy.

The full paper is available for download at this link: [Author accepted manuscript pdf].

This paper was personally the hardest for me to write, as I was running on 2-3 hours of sleep right after returning to work part-time following my maternity leave. Super stoked to hear about the best paper nomination – my work as a new mum paid off. Good to be back at work while also taking care of the little bubba 🙂 Thanks to my co-authors for accommodating my writing requests really close to the deadline!

Also, workshops coming up in LAK22:

  • Antonette Shibani, Andrew Gibson, Simon Knight, Philip H Winne, Diane Litman (2022, Forthcoming). Writing Analytics for higher-order thinking skills. Accepted workshop at The 12th International Learning Analytics & Knowledge Conference (LAK ’22).
  • Yi-Shan Tsai, Melanie Peffer, Antonette Shibani, Isabel Hilliger, Bodong Chen, Yizhou Fan, Rogers Kaliisa, Nia Dowell and Simon Knight (2022, Forthcoming). Writing for Publication: Engaging Your Audience. Accepted workshop at The 12th International Learning Analytics & Knowledge Conference (LAK ’22).

Automated Writing Feedback in AcaWriter

You might be familiar with my research in the field of Writing Analytics, particularly automated writing feedback, during my PhD and beyond. The work is based on an automated feedback tool called AcaWriter (previously called Automated Writing Analytics/AWA), which we developed at the Connected Intelligence Centre, University of Technology Sydney.

Recently, we have created resources to spread the word and introduce the tool to anyone who wants to learn more. The first is an introductory blog post I wrote for the Society for Learning Analytics Research (SoLAR) Nexus publication. You can access the full blog post here:

We also ran a two-hour online workshop as part of a LALN event, adding more detail and resources for others to participate. Details are here:

A video recording of the event is available for replay:

Learn more: