New publication: Untangling Critical Interaction with AI

I’ve had a longstanding interest in exploring how students engage critically with automated feedback and develop their AI literacy. In our LAK22 paper, we argued why it is so important to develop these skills in learners; in the age of generative AI (GenAI), the need for learners to engage critically with AI is greater than ever.

Our upcoming CHI publication investigates a fundamental question: why do students engage with GenAI for their writing tasks, and how can they navigate this interaction critically? In the paper, we define in concrete terms and stages how criticality can manifest when students write with ChatGPT support. We draw on theory and examples from empirical data (still unbelievably scarce in the literature) to understand and expand the notion of critical interaction with AI.

A pre-print version is available for download on arXiv [PDF]. Full citation below:

Antonette Shibani, Simon Knight, Kirsty Kitto, Ajanie Karunanayake, Simon Buckingham Shum (2024). Untangling Critical Interaction with AI in Students’ Written Assessment. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI ’24), May 11-16, 2024, Honolulu, HI, USA. Pre-print: https://arxiv.org/abs/2404.06955 

A short video presentation gives the gist of the paper [Follow along with the transcript]

Recognition for Teaching

[Originally posted on LinkedIn]

On Wednesday, I received the ‘2023 Learning and Teaching Citation’ from the Vice-Chancellor of the University of Technology Sydney (UTS) for nurturing well-rounded data science professionals. UTS citations and awards recognise significant and sustained contributions to student learning, student engagement, and the student experience by individuals or teams.



Earlier this week, I also received the ‘Impactful Educator Award’ in the Partnerships Builder category, which recognises educators who build collaboration with external stakeholders into their learning programmes and use elements of innovation to create a demonstrable impact on their students.

I feel deeply honoured and humbled to be recognised for my teaching initiatives and to share the stage with many esteemed educators who are making a positive impact on students. It holds a special place in my heart as a nod to a legacy that I once thought I would not follow. Growing up in a family of educators in India – including both my grandmothers, my mum, and several other relatives – I was taught from a young age how education can uplift individuals and societies. These values were deeply instilled in me, shaping my understanding and appreciation of education, which influenced my research direction. However, interestingly, teaching was never on the cards for me as I trained to be a computer engineer.

The journey from computer engineer to educator has been unexpected yet profoundly rewarding, and is a testament to the unpredictable paths our passions can take us down. My sincere thanks to colleagues at UTS TD School, mentors, family, and, most importantly, my students, whose enthusiasm and eagerness to learn inspire me. I value every little interaction I’ve had with my students over the last few years, and I hope to have touched some of their lives in some way. I learn from them as much as they learn from me!

The recognition motivates me to continue striving for excellence, innovation, and educational impact. Let’s keep pushing the boundaries of what is possible in education, for the betterment of our students and the future they will create. Thank you once again for this incredible honor. It is a milestone I will cherish deeply, reminding me of the journey so far and the exciting road ahead!

P.S: It was also a lovely memory to share with my family who joined me in the celebration at UTS, and an appreciation for my culture through my attire (a saree, traditionally worn by women in India) 🙂

Tamil Co-Writer: Inclusive AI for writing support

Next week, I’m presenting my work at the First Workshop on Generative AI for Learning Analytics (GenAI-LA) at the 14th International Conference on Learning Analytics and Knowledge (LAK 2024):

Antonette Shibani, Faerie Mattins, Srivarshan Selvaraj, Ratnavel Rajalakshmi & Gnana Bharathy (2024) Tamil Co-Writer: Towards inclusive use of generative AI for writing support. In Joint Proceedings of LAK 2024 Workshops, co-located with 14th International Conference on Learning Analytics and Knowledge (LAK 2024), Kyoto, Japan, March 18-22, 2024.

With colleagues in India, we developed Tamil Co-Writer, a GenAI-supported writing tool that offers AI suggestions for writing in Tamil, a regional Indian language (and my first language). The majority of AI-based writing assistants are created for English-language users and do not address the needs of linguistically diverse learners. Catering to languages typically under-represented in NLP is important in the generative AI era for inclusive use of AI in learner support. Combined with analytics on AI usage, the tool can offer writers improved productivity and a chance to reflect on their optimal/sub-optimal collaborations with AI.

The tool combined the following elements:

  1. An interactive AI writing environment that offers several input modes to write in Tamil
  2. Analytics of the writer’s AI interactions in the session, for reflection (see the post on CoAuthorViz for details, and the related paper here)

A short video summarising the key insights from the paper is below:

Understanding human-AI collaboration in writing (CoAuthorViz)

Generative AI (GenAI) has captured global attention since ChatGPT was publicly released in November 2022. The remarkable capabilities of AI have sparked a myriad of discussions around its vast potential, ethical considerations, and transformative impact across diverse sectors, including education. In particular, how humans can learn to work with AI to augment their intelligence rather than undermine it greatly interests many communities.

My own interest in writing research led me to explore human-AI partnerships for writing. We are not far from a world where generative AI is part of everyday writing and co-pilots are the norm rather than the exception. A ubiquitous tool like Microsoft Word, which many use as their preferred platform for digital writing, may soon ship with AI support as an essential feature (and early research shows how people are imagining these possibilities) for improved productivity. But at what cost?

In our recent full paper, we explored an analytic approach to study writers’ support-seeking behaviour and dependence on AI in a co-writing environment:

Antonette Shibani, Ratnavel Rajalakshmi, Srivarshan Selvaraj, Faerie Mattins, Simon Knight (2023). Visual representation of co-authorship with GPT-3: Studying human-machine interaction for effective writing. In M. Feng, T. Käser, and P. Talukdar, editors, Proceedings of the 16th International Conference on Educational Data Mining, pages 183–193, Bengaluru, India, July 2023. International Educational Data Mining Society [PDF].

Using keystroke data from CoAuthor, an interactive writing environment powered by GPT-3, we developed CoAuthorViz (see example figure below) to characterise writers’ interaction with AI feedback. CoAuthorViz captures key constructs such as the writer incorporating a GPT-3 suggested text as is (GPT-3 suggestion selection), the writer not incorporating a GPT-3 suggestion (empty GPT-3 call), the writer modifying the suggested text (GPT-3 suggestion modification), and the writer’s own writing (user text addition). We demonstrated how such visualisations (and associated metrics) help characterise varied levels of AI interaction in writing, from low to high dependency on AI.
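As a rough sketch of how the four constructs could be derived from a writing log, consider the following. The `(kind, suggestion, accepted)` tuple format is a hypothetical simplification for illustration, not CoAuthor’s actual log schema:

```python
# Illustrative sketch: mapping logged writing events to the four
# CoAuthorViz constructs. The event format here is an assumption,
# not the real CoAuthor keystroke-log structure.
def classify_events(events):
    """Label each (kind, suggestion, accepted) tuple with a CoAuthorViz construct."""
    labels = []
    for kind, suggestion, accepted in events:
        if kind == "suggestion_shown" and accepted is None:
            labels.append("empty_gpt3_call")               # suggestion requested, not used
        elif kind == "suggestion_shown" and accepted == suggestion:
            labels.append("gpt3_suggestion_selection")     # suggestion taken as is
        elif kind == "suggestion_shown":
            labels.append("gpt3_suggestion_modification")  # suggestion edited before use
        else:
            labels.append("user_text_addition")            # writer's own typing
    return labels
```

Counting the labels per session then gives simple dependency metrics, e.g. the proportion of accepted suggestions relative to the writer’s own text additions.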

Figure: CoAuthorViz legend and three samples of AI-assisted writing (squares denote writer written text, and triangles denote AI suggested text)

Full details of the work can be found in the resources below:

Several complex questions are yet to be answered:

  • Is autonomy (self-writing, without AI support) preferable to better-quality writing (with AI support)?
  • As AI becomes embedded into our everyday writing, do we lose our own writing skills? And if so, is that of concern, or will writing become one of those outdated skills in the future that AI can do much better than humans?
  • Do we lose our ‘uniquely human’ attributes if we continue to write with AI?
  • What is an acceptable use of AI in writing that still lets you think? (We know by writing we think more clearly; would an AI tool providing the first draft restrict our thinking?)
  • What knowledge and skills do writers need to use AI tools appropriately?

Edit: If you want to delve into the topic further, here’s an intriguing article that imagines how writing might look in the future: https://simon.buckinghamshum.net/2023/03/the-writing-synth-hypothesis/

Questioning Learning Analytics – Cultivating critical engagement (LAK’22)

Gist of LAK 22 paper

Our full research paper has been nominated for Best Paper at the prestigious Learning Analytics and Knowledge (LAK) Conference:

Antonette Shibani, Simon Knight and Simon Buckingham Shum (2022, Forthcoming). Questioning learning analytics? Cultivating critical engagement as student automated feedback literacy. [BEST RESEARCH PAPER NOMINEE] The 12th International Learning Analytics & Knowledge Conference (LAK ’22).

Here’s the gist of what the paper talks about:

  • Learning Analytics (LA) still needs substantive evidence of impact in educational practice. A human-centred approach can bring about better uptake of LA.
  • We need critical engagement and interaction with LA to help tackle issues ranging from black-boxing, imperfect analytics, and the lack of explainability of algorithms and artificial intelligence systems, to the required relevant skills and capabilities of LA users when dealing with such advanced technologies.
  • Students must be able to, and should be encouraged to, question the analytics in student-facing LA systems; critical engagement is a metacognitive capacity that both demonstrates and builds student understanding.
  • This puts power back in the hands of users, giving them agency when using LA.
  • Critical engagement with LA should be facilitated with careful design for learning; we provide an example case with automated writing feedback – see the paper for details on what the design involved.
  • We present empirical data and findings from students’ annotations of automated feedback from AcaWriter, through which we aim to develop their automated feedback literacy.

The full paper is available for download at this link: [Author accepted manuscript pdf].

This paper was the hardest for me to write personally since I was running on 2-3 hours of sleep right after joining work part-time following my maternity leave. Super stoked to hear about the best paper nomination, as my work as a new mum paid off. Good to be back at work while also taking care of the little bubba 🙂 Thanks to my co-authors for accommodating my writing request really close to the deadline!

Also, workshops coming up in LAK22:

  • Antonette Shibani, Andrew Gibson, Simon Knight, Philip H Winne, Diane Litman (2022, Forthcoming). Writing Analytics for higher-order thinking skills. Accepted workshop at The 12th International Learning Analytics & Knowledge Conference (LAK ’22).
  • Yi-Shan Tsai, Melanie Peffer, Antonette Shibani, Isabel Hilliger, Bodong Chen, Yizhou Fan, Rogers Kaliisa, Nia Dowell and Simon Knight (2022, Forthcoming). Writing for Publication: Engaging Your Audience. Accepted workshop at The 12th International Learning Analytics & Knowledge Conference (LAK ’22).

Automated Writing Feedback in AcaWriter

You might be familiar with my research in the field of Writing Analytics, particularly Automated Writing Feedback, during my PhD and beyond. The work is based on an automated feedback tool called AcaWriter (previously called Automated Writing Analytics/AWA), which we developed at the Connected Intelligence Centre, University of Technology Sydney.

Recently we have come up with resources to spread the word and introduce the tool to anyone who wants to learn more. First is an introductory blog post I wrote for the Society for Learning Analytics Research (SoLAR) Nexus publication. You can access the full blog post here: https://www.solaresearch.org/2020/11/acawriter-designing-automated-feedback-on-writing-that-teachers-and-students-trust/

We also ran a two-hour online workshop as part of a LALN event to add more detail and resources for others to participate. Details are here: http://wa.utscic.edu.au/events/laln-2020-workshop/

A video recording of the event is available for replay:

Learn more: https://cic.uts.edu.au/tools/awa/

Automated Revision Graphs – AIED 2020

I’ve recently had my writing analytics work published at the 21st International Conference on Artificial Intelligence in Education (AIED 2020), where the theme was “Augmented Intelligence to Empower Education”. It is a short paper describing a text analysis and visualisation method to study revisions: it introduces ‘Automated Revision Graphs’ to study revisions in short texts at the sentence level by visualising text as a graph, with open-source code.

Shibani A. (2020) Constructing Automated Revision Graphs: A Novel Visualization Technique to Study Student Writing. In: Bittencourt I., Cukurova M., Muldner K., Luckin R., Millán E. (eds) Artificial Intelligence in Education. AIED 2020. Lecture Notes in Computer Science, vol 12164. Springer, Cham. [pdf] https://doi.org/10.1007/978-3-030-52240-7_52
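To give a feel for the idea, here is a minimal sketch of a sentence-level revision graph between two drafts. It is an illustrative approximation using standard-library string similarity, not the paper’s exact algorithm; the threshold and edge labels are assumptions:

```python
import difflib

def revision_graph(draft_a, draft_b, threshold=0.6):
    """Link each sentence in draft B to its closest match in draft A.

    Edges are (index_in_A, index_in_B, kind) where kind is 'kept' for an
    identical sentence, 'modified' for a similar one (ratio >= threshold),
    and 'new' (with index_in_A = None) when no good match exists.
    Illustrative sketch only, not the published Automated Revision Graphs code.
    """
    split = lambda text: [s.strip() for s in text.split(".") if s.strip()]
    a, b = split(draft_a), split(draft_b)
    edges = []
    for j, sb in enumerate(b):
        best_i, best_r = None, 0.0
        for i, sa in enumerate(a):
            r = difflib.SequenceMatcher(None, sa, sb).ratio()
            if r > best_r:
                best_i, best_r = i, r
        if best_i is not None and best_r >= threshold:
            edges.append((best_i, j, "kept" if best_r == 1.0 else "modified"))
        else:
            edges.append((None, j, "new"))
    return edges
```

Plotting these edges with sentences as nodes (e.g. drafts as columns) yields the kind of graph view of revision the paper describes.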

I did a short introductory video for the conference, which can be viewed below:

I also co-authored another paper on multimodal learning analytics, led by Roberto Martinez, which received the Best Paper Award at the conference. The main contribution of the paper is a set of conceptual mappings from x-y positional data (captured from sensors) to meaningful, measurable constructs in physical classroom movements, grounded in the theory of Spatial Pedagogy. Great effort by the team!

Details of the second paper can be found here:

Martinez-Maldonado R., Echeverria V., Schulte J., Shibani A., Mangaroska K., Buckingham Shum S. (2020) Moodoo: Indoor Positioning Analytics for Characterising Classroom Teaching. In: Bittencourt I., Cukurova M., Muldner K., Luckin R., Millán E. (eds) Artificial Intelligence in Education. AIED 2020. Lecture Notes in Computer Science, vol 12163. Springer, Cham. [pdf] https://doi.org/10.1007/978-3-030-52237-7_29

New Research Publications in Learning Analytics

Three of my journal articles were published recently: two on learning analytics/writing analytics implementations [Learning Analytics Special Issue in The Internet and Higher Education journal], and one on a text analysis method [Educational Technology Research and Development journal] that I worked on many years ago and which has only just been published!

Article 1: Educator Perspectives on Learning Analytics in Classroom Practice

The first article is predominantly qualitative in nature, based on interviews with instructors about their experiences using learning analytics tools such as the automated writing feedback tool AcaWriter. It provides a practical account of implementing learning analytics in authentic classroom practice through the voices of educators. Details below:

Abstract: Failing to understand the perspectives of educators, and the constraints under which they work, is a hallmark of many educational technology innovations’ failure to achieve usage in authentic contexts, and sustained adoption. Learning Analytics (LA) is no exception, and there are increasingly recognised policy and implementation challenges in higher education for educators to integrate LA into their teaching. This paper contributes a detailed analysis of interviews with educators who introduced an automated writing feedback tool in their classrooms (triangulated with student and tutor survey data), over the course of a three-year collaboration with researchers, spanning six semesters’ teaching. It explains educators’ motivations, implementation strategies, outcomes, and challenges when using LA in authentic practice. The paper foregrounds the views of educators to support cross-fertilization between LA research and practice, and discusses the importance of cultivating educators’ and students’ agency when introducing novel, student-facing LA tools.

Keywords: learning analytics; writing analytics; participatory research; design research; implementation; educator

Citation and article link: Antonette Shibani, Simon Knight and Simon Buckingham Shum (2020). Educator Perspectives on Learning Analytics in Classroom Practice [Author manuscript]. The Internet and Higher Education. https://doi.org/10.1016/j.iheduc.2020.100730. [Publisher’s free download link valid until 8 May 2020].

Article 2: Implementing Learning Analytics for Learning Impact: Taking Tools to Task

The second one led by Simon Knight provides a broader framing for how we define impact in learning analytics. It defines a model addressing the key challenges in LA implementations based on our writing analytics example. Details below:

Abstract: Learning analytics has the potential to impact student learning, at scale. Embedded in that claim are a set of assumptions and tensions around the nature of scale, impact on student learning, and the scope of infrastructure encompassed by ‘learning analytics’ as a socio-technical field. Drawing on our design experience of developing learning analytics and inducting others into its use, we present a model that we have used to address five key challenges we have encountered. In developing this model, we recommend: A focus on impact on learning through augmentation of existing practice; the centrality of tasks in implementing learning analytics for impact on learning; the commensurate centrality of learning in evaluating learning analytics; inclusion of co-design approaches in implementing learning analytics across sites; and an attention to both social and technical infrastructure.

Keywords: learning analytics, implementation, educational technology, learning design

Citation and article link:  Simon Knight, Andrew Gibson and Antonette Shibani (2020). Implementing Learning Analytics for Learning Impact: Taking Tools to Task. The Internet and Higher Education. https://doi.org/10.1016/j.iheduc.2020.100729.

Article 3: Identifying patterns in students’ scientific argumentation: content analysis through text mining using LDA

The third one led by Wanli Xing discusses the use of Latent Dirichlet Allocation, a text mining method to study argumentation patterns in student writing (in an unsupervised way). Details below:

Abstract: Constructing scientific arguments is an important practice for students because it helps them to make sense of data using scientific knowledge and within the conceptual and experimental boundaries of an investigation. In this study, we used a text mining method called Latent Dirichlet Allocation (LDA) to identify underlying patterns in students written scientific arguments about a complex scientific phenomenon called Albedo Effect. We further examined how identified patterns compare to existing frameworks related to explaining evidence to support claims and attributing sources of uncertainty. LDA was applied to electronically stored arguments written by 2472 students and concerning how decreases in sea ice affect global temperatures. The results indicated that each content topic identified in the explanations by the LDA— “data only,” “reasoning only,” “data and reasoning combined,” “wrong reasoning types,” and “restatement of the claim”—could be interpreted using the claim–evidence–reasoning framework. Similarly, each topic identified in the students’ uncertainty attributions— “self-evaluations,” “personal sources related to knowledge and experience,” and “scientific sources related to reasoning and data”—could be interpreted using the taxonomy of uncertainty attribution. These results indicate that LDA can serve as a tool for content analysis that can discover semantic patterns in students’ scientific argumentation in particular science domains and facilitate teachers’ providing help to students.

Keywords: text mining, latent dirichlet allocation, educational data mining, scientific argumentation

Citation and article link:  Wanli Xing, Hee-Sun Lee and Antonette Shibani (2020). Identifying patterns in students’ scientific argumentation: content analysis through text mining using Latent Dirichlet Allocation. Educational Technology Research and Development. https://doi.org/10.1007/s11423-020-09761-w.

2019 Year in review

Welcome 2020! A new year is the perfect time to reflect on the past year, so I wanted to take a step back and think about it. 2019 was one of the most successful years for me professionally (and personally) with a range of experiences and productive outcomes. Quite a few achievements I’m really proud of happened this year. This post is mostly a note for myself to remind me of all those 🙂

I started the year on a positive note – I set aside quality time to do some coding for a novel graph analysis method I developed for writing analytics. Recovering from my laptop loss the previous year (a reminder of how important backing up your work is), I redid it from scratch and made a version better than what I had last time. Coding up those interactive automated revision graphs was probably my first successful outcome of the year.

My biggest achievement this year was completing my PhD at the Connected Intelligence Centre. At the start of the year, I hadn’t begun writing my thesis and was still finishing the data analysis. Even when I started writing in February, I was unsure I could finish before the August deadline. The main chapter seemed like a monster job, since most of the analysis had to be done from scratch and I hadn’t written any of it up before. The best decision I made at that point was to start with this hard chapter instead of the easier ones where I had already written material (like the literature review or introduction). A pat on the back – I stuck to my deadline of completing it before I flew out to LAK19 in March. It was quite intense, both emotionally and physically taxing, but I made it! I emailed the first version of this chapter, with an overall skeleton of the thesis, to my supervisors while on a bus home – I was literally making use of every minute before flying out to the conference.

My participation in LAK19 was quite a success. I’ve written a whole post on it before, so I’m not gonna dive into details. But I presented a full paper and got some amazing comments, facilitated a workshop (almost solo since my co-organizers couldn’t make it at the last minute) and joined the SoLAR executive committee. I had received the ACM-Women in Computing Scholarship to attend this conference.

While writing the rest of my thesis, I applied for a Lectureship at the UTS Faculty of Transdisciplinary Innovation and got it! I chose it over postdoctoral research positions to stay in academia long term. Searching and applying for jobs is such an ordeal, and my skills were rusty; I’m super glad mine went smoothly, since it was the only job I applied for and the timing worked out perfectly.

I had to start the lectureship in July, which pushed my thesis submission deadline a month earlier. I couldn’t take a break after thesis submission, so I took a small break after sending out the full draft of my thesis to my internal reviewers in June. I went home to India for 2 weeks, which just flew by. I worked super hard to submit the final thesis after my return, to the point where I didn’t really want to take another look at it anymore! Finally, I submitted my thesis on the 25th of July, 2019.

I started lecturing right from my first month in the Faculty of Transdisciplinary Innovation. It was pretty hard, truth be told, as I was juggling a few different things. Teaching a subject for the first time from preparation to delivery, handling student queries, the admin, the mentoring, managing difficult students – it was a handful. I even dropped my plans to take part in the 3MT competition because my schedule was so tight.

In the meantime, the reviews for my thesis came back. I passed with flying colours: the reviews were extremely positive, with the reviewers noting it was among the best theses they had reviewed! Both reviewers accepted the thesis for publication without any changes. I made some minor changes for final publication based on their comments, and my degree was conferred on the 12th of November 2019.

I also received a few invitations (both internal and external to my university) to take part in events, which went really well. I was invited as a panel speaker at Intel, Sydney, where we discussed ‘Artificial Intelligence Today for Our Tomorrow’ with some great minds. I co-organised a workshop with our Faculty staff on Data in September for the Festival of Learning Design. I gave a short talk at the UTS TeachMeet “The Future Starts Now” in October, hosted by the School of International Studies and Education, UTS – video of highlights here. I visited the Centre for Research in Assessment and Digital Learning (CRADLE) at Deakin University, Melbourne in October to participate as an invited delegate at the “Advancing research in student feedback literacy” international symposium – we had good conversations and set plans to move the research forward in our upcoming work.

I received the Future Women Leaders Conference Award and visited Monash for two days in November for the conference, which featured a series of workshops and talks supporting future women leaders in academia from engineering and IT. I also created from scratch and published a podcast (Episode 3 of SoLAR Spotlight) that month – lots of learning happened in putting it together, from preparation to editing. I do regret turning down some good opportunities that came my way, simply because there weren’t enough hours in the day to manage everything. But I guess that is part of growing as an academic: you prioritise, decide what is most important, and try to achieve work-life balance. At the end of November, I co-organised a workshop at ALASI. That was the end of work-related events in 2019, but the best was yet to come.

I went to India in December for my long-awaited wedding with my sweetheart. It was a big fat south Indian wedding, so lots of prep and stress, but loads of fun! Here’s a picture from the wedding 🙂

Notes: ‘Digital support for academic writing: A review of technologies and pedagogies’

I came across this review article on writing tools, published in 2019, and wanted to make some quick notes to come back to in this post. I’m following my usual format for article notes, which summarises the gist of a paper with short descriptions under the respective headers. I also had a few thoughts on what the paper missed, which I describe below.

Reference:

Carola Strobl, Emilie Ailhaud, Kalliopi Benetos, Ann Devitt, Otto Kruse, Antje Proske, Christian Rapp (2019). Digital support for academic writing: A review of technologies and pedagogies. Computers & Education, 131, 33–48.

Aim:

  • To present a review of the technologies designed to support writing instruction in secondary and higher education.

Method:

Data collection:

  • Writing tools were collected from two sources: 1) a systematic search in literature databases and search engines, and 2) responses to an online survey sent to research communities on writing instruction.
  • 44 tools were selected for fine-grained analysis.

Tools selected:

Academic Vocabulary
Article Writing Tool
AWSuM
C-SAW (Computer-Supported Argumentative Writing)
Calliope
Carnegie Mellon prose style tool
CohVis
Corpuscript
Correct English (Vantage Learning)
Criterion
De-Jargonizer
Deutsch-uni online
DicSci (Dictionary of Verbs in Science)
Editor (Serenity Software)
escribo
Essay Jack
Essay Map
Gingko
Grammark
Klinkende Taal
Lärka
Marking Mate (standard version)
My Access!
Open Essayist
Paper rater
PEG Writing
Rationale
RedacText
Research Writing Tutor
Right Writer
SWAN (Scientific Writing Assistant)
Scribo – Research Question and Literature Search Tool
StyleWriter
Thesis Writer
Turnitin (Revision Assistant)
White Smoke
Write&Improve
WriteCheck
Writefull

Exclusion criteria:

  • Tools intended solely for primary and secondary education were excluded, since the main focus of the paper was on higher education.
  • Tools focused solely on features like grammar, spelling, style, or plagiarism detection were excluded.
  • Technologies without an instructional focus, like pure online text editors and tools, platforms, or content management systems, were excluded.

I have concerns about the way tools were included in this analysis, particularly because some key tools like AWA/AcaWriter, Writing Mentor, Essay Critic, and Grammarly were not considered. This is one of the main limitations I found in the study. It is not clear how tools were selected in the systematic search, as there is no information about the databases and keywords used. How the tools focusing on higher education were picked is not explained either.

Continue reading “Notes: ‘Digital support for academic writing: A review of technologies and pedagogies’”