Notes: ‘Digital support for academic writing: A review of technologies and pedagogies’

I came across this review article on writing tools published in 2019 and wanted to make some quick notes to come back to later. I'm following my usual format for article notes, summarizing the gist of the paper with short descriptions under the respective headers. I also had a few thoughts on what the paper missed, which I describe at the end of this post.

Reference:

Carola Strobl, Emilie Ailhaud, Kalliopi Benetos, Ann Devitt, Otto Kruse, Antje Proske, & Christian Rapp (2019). Digital support for academic writing: A review of technologies and pedagogies. Computers & Education, 131, 33–48.

Aim:

  • To present a review of the technologies designed to support writing instruction in secondary and higher education.

Method:

Data collection:

  • Writing tools were collected from two sources: 1) a systematic search in literature databases and search engines, and 2) responses to an online survey sent to research communities on writing instruction.
  • 44 tools selected for fine-grained analysis.

Tools selected:

Academic Vocabulary
Article Writing Tool
AWSuM
C-SAW (Computer-Supported Argumentative Writing)
Calliope
Carnegie Mellon prose style tool
CohVis
Corpuscript
Correct English (Vantage Learning)
Criterion
De-Jargonizer
Deutsch-uni online
DicSci (Dictionary of Verbs in Science)
Editor (Serenity Software)
escribo
Essay Jack
Essay Map
Gingko
Grammark
Klinkende Taal
Lärka
Marking Mate (standard version)
My Access!
Open Essayist
Paper rater
PEG Writing
Rationale
RedacText
Research Writing Tutor
Right Writer
SWAN (Scientific Writing Assistant)
Scribo – Research Question and Literature Search Tool
StyleWriter
Thesis Writer
Turnitin (Revision Assistant)
White Smoke
Write&Improve
WriteCheck
Writefull

Exclusion criteria:

  • Tools intended solely for primary and secondary education were excluded, since the main focus of the paper was on higher education.
  • Tools with a sole focus on features like grammar, spelling, style, or plagiarism detection were excluded.
  • Technologies without an instructional focus, such as pure online text editors and tools, platforms, or content management systems, were excluded.

I have concerns about the way tools were included for this analysis, particularly because some key tools like AWA/AcaWriter, Writing Mentor, Essay Critic, and Grammarly were not considered. This is one of the main limitations I found in the study. It is not clear how the tools were selected in the systematic search, as there is no information about the databases and keywords used for the search. Nor is it explained how the tools focusing on higher education were picked.


LAK 2019 in Tempe, Arizona

I attended the Learning Analytics and Knowledge Conference (LAK) this year in the midst of my tight thesis writing schedule, and did not regret it 🙂 This 9th International LAK (4-8 Mar, 2019) was held in Tempe, Arizona, which meant a 15-hour flight plus transit from Sydney each way; I survived, thankfully.

First of all, I was excited to have been awarded a scholarship from ACM-W, which supports women in computing with conference travel. And super excited to have received it for LAK, which competes with journals in publishing some of the most influential work in educational technology.

I kicked off LAK2019 by chairing the full-day Writing Analytics workshop on Advances in Writing Analytics: Mapping the state of the field. Unfortunately the other workshop organizers could not make it that day, but I'm thankful for the support from UTS CIC colleagues and the participants in helping to run a successful workshop. This fourth workshop in the series of Writing Analytics workshops at LAK had great participation and discussions. We saw interesting presentations on writing analytics from various speakers, and tried a demo version of AcaWriter to see the tool in action – check out tweets with #WaLAK19 and #LAK19. We brainstormed utopian and dystopian visions of what writing analytics might look like in 2030, and discussed ways to get to a desirable future from where we are now. The potential formation of a Special Interest Group on Writing Analytics (SIGWA) was discussed to facilitate a community of researchers in the area. Notes from the workshop are shared here.

In the main conference, I presented our full research paper, co-authored by Dr. Simon Knight and Prof. Simon Buckingham Shum on Contextualizable Learning Analytics Design: A Generic Model and Writing Analytics Evaluations. We emphasized the need for flexible Learning Analytics Applications that can provide contextualized support, and demonstrated the CLAD model with our example.

I recommend watching the keynote recordings from LAK'19, which are available on the SoLAR YouTube channel. I would have loved to go into more detail on some of the interesting work across LAK, but my notes for this conference are shorter than usual since I'm now back to thesis writing and frantically managing time 😂. I did come across exciting work and meet lots of interesting people, most of whom I followed up with (I think!), so I hope there will be new collaborations! I also officially joined the Society for Learning Analytics Research (SoLAR) executive committee as the elected student member. Thrilled and looking forward to serving on the committee!


Contextualizable learning analytics for writing support

Recently I gave a talk on Augmenting pedagogical writing support with contextualizable learning analytics at the CRLI seminar series at the University of Sydney. It was a great opportunity to share and discuss ideas from my PhD research, and indeed a privilege to be invited to present at this seminar. The long time slot meant fewer time constraints, so I enjoyed doing the hour-plus session. The talk was recorded and is available on YouTube, and the slides are here. This post is a summary of the key ideas from the talk and an upcoming paper on 'Contextualizable Learning Analytics Design (CLAD)'.

Big data, learning analytics and education:

Big data and artificial intelligence are changing many of the ways we do things to improve our lives (for better or for worse). Companies around the world, including Facebook, Google, Apple and Amazon, use data every day to derive insights that support us. What can more traditional organizations like educational institutions use data for? Can we harness this technology and data to improve learning? To answer these questions, Learning Analytics (LA) emerged as a field to tackle the huge amounts of data in education. Although data has been available in education research for decades, different granularities of data from multiple sources in authentic scenarios, together with the technical affordances of new tools, can now support analyses that were not previously feasible. This root cause for the inception of the field has probably been a reason for its emphasis on 'big impact' and generalizable solutions that can cater to, and scale up to, huge numbers. Massive Open Online Courses (MOOCs) are a classic example of how we can scale teaching to a large number of learners using technology. However, the problem with scalable, generalizable solutions in learning analytics is that education is inherently contextual, and a one-size-fits-all approach will not work in all contexts the same way. This has led to the argument for moving from big data to meaningful data in learning analytics.

Bringing in the context:

To bring the educational context to Learning Analytics (LA), it must be coupled with pedagogical approaches. This involves integrating LA into pedagogical contexts to augment the learning design and provide analytics that are aligned with the intended learning outcomes. Learning Design (LD) describes an educational process, and involves the design of pedagogically informed units of learning, learning activities or learning environments. LA can provide the data, methodologies and tools to test the assumptions of the learning design, and LD can add value to the analytics by making them meaningful for the learner. By bringing LA and LD together, they can contribute to each other and close the gap between the potential and actual use of technology.

Contextualizable Learning Analytics Design:

We introduce the Contextualizable Learning Analytics Design (CLAD) model in a forthcoming article by bringing together the elements of LA and LD for context. Educators work with LA developers to co-design this contextualization. It involves the LD elements of assessment and task design and the LA elements of features and feedback working dynamically and in sync for different contexts, rather than being rigidly fixed. The CLAD model is demonstrated by implementing the writing analytics tool AcaWriter in different learning contexts (law essay writing, accounting business report writing). AcaWriter, developed by the Connected Intelligence Centre at UTS, provides automated feedback on student writing based on rhetorical moves. To contextualize the use of this LA tool for students, the elements of the CLAD model were employed as follows:

  • Assessment formed the basis of contextualization to align AcaWriter with the intended learning outcomes.
  • The features of the data that are important for the context were selected so that AcaWriter could bring them to the learners' attention.
  • The feedback from AcaWriter was tuned to make it relevant for the context of writing by mapping it back to assessment criteria.
  • Task design ensured that AcaWriter activities are relevant to the learner and grounded by pedagogic theory.

With such contextualized LA, the educator has agency to design learning analytics that is relevant to the learning context, and the learner finds it meaningful due to its embedding in the curriculum. This ensures that LA contributes to learning in authentic practice by augmenting existing good pedagogic practice. The approach scales over multiple learning contexts by transferring good design patterns from one learning context to another (for example from law essay writing to accounting business report writing).

More details on the above can be found in the following article, and related resources are available on the HETA project website.

References:

Working with Jupyter notebooks #code

Jupyter is an open-source program that helps you share and run code in many different programming languages. Jupyter notebooks are great for quickly prototyping different versions of code, as they make it easy to edit and try different outputs. The format of a Jupyter notebook is similar to the Markdown-based reports commonly used in R: it can contain blocks of text, code, equations and results (including visualizations), all on one page. We've used Jupyter notebooks to run text analysis workshops at conferences, and the feedback was pretty good.

The Writing Analytics workshop is starting at #LAK18. Jupyter notebooks are being used. #great pic.twitter.com/56Zd66ku9L

I find that Jupyter notebooks are great for sharing code and results across different people, and if you host them, they save a lot of the trouble of getting workshop participants to install software. They work well for a non-technical audience too, since people can simply run a code block and focus on the results, ignoring what's inside. They are quite popular now for data science experiments, so this post is a good place to start getting to know and using them. You can take an already available notebook (for example, one downloaded from Github) and play with it, or create your own Jupyter notebook from scratch. This post will guide you through creating your own notebook from scratch, demonstrating some basic text analysis in Python.

Installing Jupyter

If you want to try a Jupyter notebook first without installing anything, you can do so in this notebook hosted on the official Jupyter site. If you want to install your own copy of Jupyter on your machine to develop code, use one of the two options below:

  • If you are new to Python programming and don't have Python installed on your machine, the easiest way to install Jupyter is by downloading the Anaconda distribution. This comes with Python built in (you can choose either Python 2.7 or 3.6 when you download the distribution – the code in this post is in 2.7).
  • If you already have Python working on your machine (as I did), the easiest way is to install Jupyter using the pip command, as you would for any Python package. Note that if pip and python are already set up in your system path, you can simply use $ pip install jupyter from the command prompt.

Now that Jupyter is installed, type the command below in your Anaconda prompt/command prompt to start a Jupyter notebook:

$ jupyter notebook

The Jupyter homepage opens in your default browser at http://localhost:8888, displaying the files present in the current folder like below. You can now create a new Python Jupyter notebook by clicking on New -> Python 2 (or Python 3 if you have Python version 3). You can move between folders or create a new folder for your Python notebooks. To change the default opening directory, first move to the required path using cd in the command prompt, and then type $ jupyter notebook. Open the created notebook, which would look like this:

This cell is a code block by default, which can be changed to a markdown text block from the drop-down list (check the figure above) to add narrative text accompanying the Python code. Now name your notebook, and try adding both a code block and a markdown block with different levels of text, following the sample here:
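The sample image from the original post isn't reproduced here, so as a rough illustration (the exact cell contents are my own placeholder, not the original sample), the markdown cell could hold a heading such as # My first notebook followed by a sentence of narrative text, while the code cell holds a couple of lines of Python:

message = "Hello Jupyter"  # any small piece of code will do
print(message)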

To execute the blocks, click on the Run button (alternatively, use Ctrl+Enter on Windows – keyboard shortcuts can be found under Help -> Keyboard Shortcuts). This renders the output of your code and your markdown text like this:

That's it. You have a simple Jupyter notebook running on your machine. Now, to try a bit more, here's the sample code you can download and run to do some basic text analysis. I've defined three steps in this code: importing the required packages, defining the input text, and analysis. Before importing the packages/libraries you need in step 1, however, they should first be installed on your machine. This can be done using the pip command in the command prompt/Anaconda prompt like this: $ pip install wordcloud (if you run into problems with that, the other option is to download an appropriate version of the package's wheel from here and install it using $ pip install C:/some-dir/some-file.whl).

Python code for the three steps is below:
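Since the embedded code from the original post doesn't render here, below is a minimal sketch of what the three steps could look like. It is only an illustration under my own assumptions: it uses the wordcloud package mentioned above plus matplotlib for display, and a placeholder input text rather than the original example (the actual code is in the ipynb linked below).

# Step 1: import the required packages
from collections import Counter
import matplotlib.pyplot as plt
from wordcloud import WordCloud

# Step 2: define the input text (a placeholder - replace it with your own text)
text = ("Writing analytics uses text analysis to provide feedback on writing. "
        "Replace this placeholder with any text you want to analyse.")

# Step 3: analysis - simple word frequencies and a word cloud
words = [w.strip('.,').lower() for w in text.split()]
print(Counter(words).most_common(5))

cloud = WordCloud(background_color='white').generate(text)
plt.imshow(cloud, interpolation='bilinear')
plt.axis('off')
plt.show()

Running these three cells prints the five most frequent words and displays a word cloud of the input text.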

The downloadable ipynb file is available on Github.

Other notes:

  • This post is intended for anyone who wants to start working with Jupyter notebooks, and assumes prior understanding of programming in Python. The Jupyter notebook is another environment to easily work with code, but the coding process is still very traditional. If you’re new to Python programming, this website is a good place to start.
  • You can use multiple versions of Python to run Jupyter notebooks by changing the kernel (the computational engine that executes the code). I have both Python 2 and Python 3 installed, and I switch between them for different programs as needed (a quick way to check which version a notebook is running is shown after this list).
  • While Jupyter notebooks are mainly used to run Python code, they can also be used to run R programs, which requires the R kernel to be installed. The blog post below is a useful guide for doing that: https://www.datacamp.com/community/blog/jupyter-notebook-r
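As a quick sanity check when switching kernels (this snippet is my own addition, not from the original post), you can print the interpreter version from within a notebook cell:

import sys
print(sys.version)  # shows which Python version the current kernel is running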

Telling stories with data and visualizations – Some key messages

The topic of telling stories from data is huge and probably needs many hours and books to explain the ideal ways of doing it. But Dr. Roberto Martinez did a great job of giving us a quick introduction to the topic and its pragmatic application in an hour-long talk at the UTS LX lab. It aligned very much with the Connected Intelligence Centre's vision of building staff capacity in data science, particularly by keeping the human at the centre of the data. This post includes my notes from the talk, where I summarize some of the key messages.

Humans are producing enormous amounts of data these days. According to recent statistics, 2.5 quintillion bytes of data are created every day, and the pace keeps growing. But there is a stark contrast between data and knowledge: data by itself means very little, and knowledge is created only when the data is made sense of. We might be drowning in data, but not in knowledge. Roberto compares this abundance of data to oysters, and an insight to a pearl: we need to open many oysters to maybe find one pearl.

The rest of the post is divided into two main sections – 1. Data storytelling and 2. Data visualization – plus a few overall key messages that I took away from the talk.

Data Storytelling:

The value of data lies not in the data itself, but in how we present it. This is what makes storytelling really important for presenting insights from data. It is not about presenting ALL the data we have, but about highlighting the main insights that should be noted. It is about finding patterns in the data that engage people with the story, just like hooks in a fictional story. Storytelling often operates in conjunction with data visualization to communicate results from data. Check out the list of resources at the end of this post for detailed reading.

There are a few ways to make the insights clear and pop out when communicating the story from data:

  • The first step is to declutter the data by removing all the noise. This can be done by stripping out unwanted information and building on the useful insights.
  • The next key thing is to foreground what is important. We do not want too much ink/data, which makes the results too complicated to understand.
  • A data story approach can be used, merging narrative and visuals to engage the audience and point to key messages from the data (see examples of line graphs annotated this way here). Also check out this interesting article and podcast on the good and bad of storytelling for further reading.


London Festival of Learning 2018

I attended the London Festival of Learning this year from June 22nd to 30th, which brought together three conferences: the 13th International Conference of the Learning Sciences (ICLS), the Fifth Annual ACM Conference on Learning at Scale (L@S) and the 19th International Conference on Artificial Intelligence in Education (AIED). It was great to see the convergence of ideas and academics from these three fields, which all work towards enhancing educational practices with technology. I could see overlaps and similarities in the topics being studied by these communities, but I also noticed they diverged in the main foci of their research. The festival was huge, with over 1,000 attendees, and also involved edtech companies that wanted to develop evidence-informed products.

Throughout the conferences, many keynotes and talks emphasized a move towards making more use of human ability and intelligence to augment what artificial intelligence can do for education. This included concepts like giving importance to our internally persuasive voice and the power of negotiation in addition to "datafied" learning, and embracing imperfections from machines by adding in human context. A critical stance on what artificial intelligence can and cannot do was evident, with more conversations happening around the ethical use of learners' data.

(Excuse me for the blurry pictures, I was not in a good spot to take pictures)

In the sessions, I could see a lot of research on developing intelligent tutoring systems, agents, intervention designs and adaptive learning systems for teaching specific skills, and advances made in their techniques. The majority of data comes from online settings, i.e., students' trace data from their use of such systems. Recently, multimodal data has been getting more attention, with sensors and wearables collecting data from learners' physical spaces as well. One best-paper-award-winning work on teacher–AI hybrid systems showcased the power of mixed-reality systems for real-time classroom orchestration. The cross-over session and the ALLIANCE best paper session showcased interesting research cutting across the three communities; it's a shame we couldn't attend both sessions since they ran in parallel.

Simon Knight presented our work on Augmenting Formative Writing Assessment with Learning Analytics: A Design Abstraction Approach at the cross-over session, where he explained how we can augment existing good practices with learning analytics and use design representations to standardize these learning designs. I presented our poster on studying the revision process in writing at AIED, where I used snapshots of students' writing data to study their drafting process at certain time intervals. I also participated in the collaborative writing workshop earlier at ICLS, where many interesting tools to support writing were discussed. I shared AcaWriter – a writing analytics tool providing automated feedback on rhetorical moves, developed by the Connected Intelligence Centre, UTS, and now released open source.

Overall, it was a great place to learn, network and follow work from related disciplines (with some catching up to do on the presented work, since we can only be in one place at a time during the parallel sessions). I did feel a bit exhausted at the end of it (maybe I'm better off attending one conference at a time 🙂 ), but I guess that's natural, and you can't complain when your brain gets so much to learn in a week!

Preparing for a doctoral consortium

There are many opportunities for doctoral students to participate in a doctoral consortium in the ed-tech research community, among others. A doctoral consortium is usually organized by a conference: graduate students come together to present their work to experts in the field and to peers, and get feedback from them. The expert panel might also offer advice on career and other skills. Some conferences also offer Young Researcher/Early Career workshops, which are useful for graduating students and young researchers in the field.

Having attended two doctoral consortia at different conferences, I would recommend PhD students do one at some point. I found them useful for a number of reasons, so in this post I'm going to list why, and how to prepare for a doctoral consortium – some tips on making the best use of it.

Why participate?

  • Enhancing research skills: It's a wonderful opportunity to put your thoughts together and think about the big picture of your research. It helps you identify the core ideas of your research and present them succinctly in a limited time. Explaining a potentially 60,000-word PhD thesis in less than 30 minutes is a great skill to acquire. In some conferences, you might be asked to present a poster explaining your research as well. Also, it is a place where you can actually discuss your methodology and design, and not just the results.
  • Expert feedback: It is a great place to get some early feedback (and criticism) on your PhD work and thesis statement. It's nice to have some extra eyes other than your PhD supervisors'. You become clear on what your claims can be and what your limitations are. You will be better prepared to answer questions and will know what to expect the next time you present your work to different audiences. Even if you don't get great advice every time, you will most likely walk away with a better understanding of what you want to do. And if there's a certain problem you're grappling with in your research, you can ask for specific advice.
  • Networking: You meet other PhD students from closely related fields. We don't always get a chance to meet students from other universities around the world and learn about their research. They are in the same boat, so it is always good to connect with your peers for support and for their feedback on your work. It is also a good opportunity to network with experts in the field and introduce your name to the research community. Who knows, the academic expert you impressed might be the person who gives you a job when you graduate 🙂
  • Financial Support: Most conferences provide some level of financial support for grad students who get accepted to the doctoral consortium. This is especially useful for self-financing students, as it covers registration fees or travel depending on the conference.

Based on my experience and the advice I’ve heard, here are some tips to make the best use of your time at the Doctoral Consortium:

  • Pick the right time to go – It's best to go when you have conceptualized your research and done some work, so that you don't arrive as a blank slate. The experts want to see what you have thought through so they can give you advice. Also, don't go too late (for example, when you are about to submit your thesis), by which time you can't make any more changes to your research and thesis.
  • Make a proper submission – Most doctoral consortia require students to make formal submissions, which include a short paper describing the research, supporting documents like a letter of support from the supervisor, and sometimes your own statement and CV. They usually look for sharp minds who can benefit from the discussion and contribute to the research community, so make sure you follow the specified format and submit well-written documents with your application.
  • Practise and be ready to explain your research – You are usually given limited time to present (15-20 mins), and given that you are attempting to present your whole thesis in this slot, practise well in advance to highlight the key aspects. Even better if you can present to your local peers and get their advice beforehand. Sometimes we tend to run through some ideas too quickly without noticing that they need more emphasis, or to highlight less important aspects more, which your peers can point out for you.
  • Go prepared with your questions & answers: It is always nice to be prepared with questions to ask the experts. If there's a particular problem you're grappling with in your research, make sure you point it out and ask for suggestions. This helps you get focused attention on that problem rather than spending a lot of time on minor things you are not very interested in. If you want feedback from a specific expert, you can try mentioning that too. Be prepared to face tough questions and criticism of your research (a good rehearsal before your PhD defence). Also, if your peers' work is made available in advance, take some time to read about their research so you can contribute to the discussion and add value with your feedback.

LAK 2018 in Sydney

This post is on the exciting week of the Learning Analytics and Knowledge Conference LAK 2018, held in Sydney. LAK is a prestigious conference dedicated to sharing work in Learning Analytics across the globe. LAK coming down under was something we had been looking forward to for quite some time. LAK was in fact the very first international conference I ever attended (back in 2015), so it is always extra special 🙂

I started off with a Writing Analytics workshop, which we organized on Day 1 of LAK. We used a Jupyter notebook running Python code to demonstrate the application of text analysis for writing feedback and the pedagogic constructs behind designing such applications for learning analytics. Our aim was to bridge the gap between pedagogic contexts and the technical infrastructure (analytics) by crafting meaningful feedback for students on their writing, and to do so by developing writing analytics literacy. The participants were quite engaged in this hands-on approach and we had good discussions on the implications of such writing analytics techniques.

The next day, I participated in the Doctoral Consortium, a whole-day workshop where doctoral students present their work, discuss it, and receive feedback from experts and other students. To know more about a doctoral consortium, read this. My doctoral consortium paper published in the companion proceedings is available here:

The new workshop for school practitioners was of interest to many educators working on K-12 learning analytics applications, and the Hackathon continues to attract wide interest. After the pre-conference events, the main conference officially started with the first keynote by Prof. David Williamson Shaffer on 'The Importance of Meaning: Going Beyond Mixed Methods to Turn Big Data into Real Understanding'. David talked about how data is not scarce anymore, and how, to analyze such a volume of data for learning, we have to go beyond traditional quantitative and qualitative approaches. He gave examples of logical fallacies where statistics are likely to be misused when interpreting concepts in learning, and introduced the notion of quantitative ethnography, which can close the interpretive gap between the model and the data.

If you want to hear the full talk, all the keynotes are available along with the slides here: https://latte-analytics.sydney.edu.au/keynotes/ 

In general, there was great interest in developing theory around dashboard design, and in discussing how to (and how not to) develop dashboards for students.

Aligning learning analytics with learning design was increasingly emphasized. The demo paper I presented that day, exemplifying this in a writing analytics context, is here (bonus pic with the supervisors):

The second day of the main conference (aptly on International Women's Day) started with Prof. Christina Conati's keynote on user-adaptive visualizations, where she talked about adaptive interactions.

She showed how visualizations can be personalized for users by building user models based on eye tracking features.

Visualization in general was another key topic that gathered growing interest in the LAK community, along with topics like discourse analysis and Writing Analytics, many of them moving towards near real-time applications.

I attended the SoLAR executive meeting for the first time to see what's happening around SoLAR. It felt great to be part of a very welcoming community of researchers and practitioners. That's where they announced this:

We also celebrated Women’s day:

It was quite an eventful day, ending with the conference banquet on a Sydney Harbour cruise.

The final keynote on the last day touched upon a number of criticisms around learning analytics and how we can progress the field further taking into account the key aims of learning analytics.

Multimodal learning analytics, MOOCs, ethics and policy, theory, self-regulated learning and co-designing with stakeholders were other areas that continued to be discussed throughout the conference.

And then to wrap it up, happy hour!

To read all the interesting papers from LAK, follow this link.

For more tweets from the awesome LAK community, check #LAK18, #LAK2018, @lak2018syd

The changing face of learning and how to adapt to it

This post is based on my notes from Prof. Roger Säljö's talk at a Sydney Ideas event hosted by the University of Sydney. I was undecided at first about attending this talk since it was held on a Valentine's Day evening, but I'm really glad I did 🙂 In his intriguing talk, Prof. Roger shared how the nature of knowledge and learning has changed in our current digital societies compared to earlier traditional forms, and how educators should respond to it.

We've almost always been finding ways to preserve and communicate knowledge, from Stone Age symbols and early scripts to modern digital libraries. That's how we learn, grow and improve the society we live in. The Game of Thrones quote about its library town, the 'Citadel', is something I could immediately relate to:

Source: https://scatteredquotes.com/without-us-men-little-better-dogs/

While all societies need to reproduce knowledge for the next generations (as has been done for ages), the conditions for reproducing cultural memory are quite different in modern, digital societies. The size and complexity of such knowledge have grown tremendously due to technology, which is why it is important to prioritize the skills and knowledge worth learning. We should consider what is of value for students to learn in the new digital world and what skills they need. There are two strategies for thinking about this:

  1. We can preserve what has been done (back-to-basics movement in education)
  2. Or we can think about what might be productive for the future

I'm more inclined towards working for a productive future that takes the new changes into account (while preserving the traditional elements that are essential, of course).


The changing face:

So what has actually changed over the years? Why should we think this through for education NOW? Education has been evolving all along: from scribal schools over 5,000 years ago, meant for systematic training of the human mind, to the still-relevant act of 'studying', which was once a social revolution. Symbolic technologies are well developed to share a common understanding among all people of the world. In particular, writing is a literacy that's probably not dying anytime soon, although its forms may have changed. Text is still the main source of knowledge and is used every day in many forms, including emails, messages and social media posts. The concepts of schooling have remained stable, although their focus on reproduction (rather than creativity) and on the individual as the source of knowledge has been changing in recent times.

However, the biggest changes to our society have been brought by technology, which has digitized the world. In addition to the growing amount of knowledge in the form of digital data, the conditions of learning have also changed tremendously. A lot of cognitive functions have been externalized and cognitive habits transformed. For instance, we use computer software to perform spelling and grammar checks in our everyday writing and even for simple arithmetic (we should probably try mental calculations once in a while so that we don't always need a calculator for 451 * 23). We are dependent on apps for cognitive tasks like remembering and problem solving. Children are starting to learn writing by typing on keyboards, and are moving from passive media consumption to active forms of interaction. We are able to master complex tasks without understanding the basic steps involved. There are statistical packages available today that can produce solutions to highly complex tasks with a few lines of code, without us understanding the sequential steps involved. Advanced technologies act as a black box that cannot be unpacked for education in the classroom: one example from my research context is a machine learning scoring algorithm that doesn't disclose the features used to calculate students' scores on a writing task.

Technological changes have made minds hybrid, with thinking detours and collaboration with artifacts, which no longer nurture a concentrated mind. The way we look for information has also changed completely with search engines. Google has become our go-to place to seek any information we want, and is available to anyone. Internet use is increasing among young children, even on their own. This places huge emphasis on strategies like restrictions and parental guidance for responsible internet use by children, and opens a whole new dimension of security. We can no longer control the learning trajectory of children from 2 to 11 years as before, since we don't know what they learn outside class (the indirect curriculum). Schools have no control over external tools and knowledge, as it is hard to restrict access to computers at home. One can only hope that the external knowledge children gain is for the good, and guide them to distinguish it from undesirable content on the internet.

Because the future is digital and there's no going back, our duty is to adapt to it as best we can. For this, Prof. Roger emphasizes that the metaphors of learning should shift to respond to the changing environment. Learning should be more performative (rather than reproductive) and focus on learning as design. Learners should be encouraged to participate in and contribute to communities and collective practices, and no longer consider knowledge an individual asset. Interactions with symbolic technologies and communication with people should be relational to the human mind. Technologies and artificial intelligence should be used with care in education, keeping in mind that "education is not production, it is not a smoothly running machine". For young teachers to cope with the advances, they have to learn how to marry the resources to the ambitions of the school, understanding that technology changes the nature of education but does not solve its problems. The education system will also have to change assessments to assess the skills that matter most for the future. While these advances have a role to play in improving learning (e.g. virtual environments where students can experience complex, near-real settings), their development should also be coordinated with teachers to capture user perspectives. And for people to accept them more broadly, steps should be taken to ensure digital literacy. Further, the knowledge, values and skills of the individual should be connected to what technology has to offer. Such design of transparent technology that responds to people's natural repertoires of use will be more relevant for education in the future.

To learn more about Prof. Roger’s work, visit: www.lincs.gu.se

ICCE 2017 in New Zealand

Last month I attended the 25th International Conference on Computers in Education (ICCE 2017) in Christchurch, New Zealand, organised by the Asia-Pacific Society for Computers in Education (APSCE). It was the first time I attended this conference, although I had heard of it previously when I was working at NIE, Singapore. Overall, it was a great experience, and I could see different sub-fields under 'Computers in Education' coming together. Being a slightly broader field than learning analytics, it helped widen my knowledge beyond my current expertise.

I found the keynote speeches and talks very exciting, and I was tweeting some of my key take-home messages with the conference tag. Personalized and adaptive learning, learner models and how we can empower learners with technology were some key topics discussed in the keynotes and invited talks:

Emerging technical solutions and capabilities shared in the paper and poster sessions – especially on virtual reality, augmented reality, mobile and sensor technologies – widened the horizon of technologies used in education. New applications of gaming technology for teaching at many levels of education were quite interesting. Combining multiple forms and modes of data (multimodal data) was another emerging topic in collaborative learning, personalized learning and language education.

The overarching theme of pedagogy and learning was emphasized and questioned along the way, as some talks focused more on the technology than on its appropriate use in educational settings. I believe this topic is widely discussed these days in many areas where technology is used for education: an emphasis on going back to the basic aim of improving education, working alongside teachers, with technology as only a helping factor.

In particular, we had fruitful discussions within the Learning Analytics (LA) community on testing the effectiveness of LA applications, providing actionable insights for learners and teachers, creating standards for LA and data ethics issues.


I presented a full paper on the “Design and implementation of a pedagogic intervention using Writing Analytics”, where I shared work done with our colleagues at UTS Connected Intelligence Centre (UTS CIC) on exemplifying authentic classroom integration of learning analytics applications. It was well-received and provoked discussion on supporting students in their pedagogic contexts with the right kinds of feedback using analytics.

I also presented a doctoral consortium paper on "Combining automated and peer feedback for effective learning design in Writing practices", based on my main doctoral research idea, where we had discussions on how an embedded human component can add to automated analytic capabilities. I received the APSCE Merit Scholarship of USD 500 to help me attend the conference, which is quite special as it is my first external scholarship/award during my PhD 😊 I'm also thankful for the VC's conference fund from UTS and the constant support from my lab, UTS CIC, at all levels (mentorship and financial support to attend conferences – I attended ALASI in Brisbane just the week before attending this one).


In general, I could see a good mix of senior and young researchers from the Asia-Pacific region sharing their work enthusiastically and networking with peers from different communities of the broader educational research field. I caught up with some old friends and met some interesting new people too 😊 The hosts of the conference were amazing and everything was well organized. We were given an introduction to the local culture with a lot of tidbits and entertainment along the way. I noticed a lot of photos being taken, both by the official photographers and by the delegates, to capture special moments (is it just me who observed this? I'm super happy anyway to see those pics). The conference banquet dinner and the celebrations for the 25th anniversary of the conference need a special mention, as tribute was paid to the past APSCE presidents. Also, watching the traditional Haka performed during the banquet was a whole new experience. It was definitely a very well-run conference, with every detail thought of and attended to; credit to the local organizing committee. Plus, New Zealand was so beautiful and I got to see some lovely places like these after the conference:

Lake Tekapo

Mount Cook, New Zealand