I’ve recently had my writing analytics work published at the 21st International Conference on Artificial Intelligence in Education (AIED 2020), where the theme was “Augmented Intelligence to Empower Education”. It is a short paper describing a text analysis and visualisation method to study revisions: it introduces ‘Automated Revision Graphs’, which support studying revisions in short texts at the sentence level by visualising text as a graph, with open source code.
I did a short introductory video for the conference, which can be viewed below:
I also co-authored another paper on multi-modal learning analytics, led by Roberto Martinez, which received the Best Paper Award at the conference. The main contribution of the paper is a set of conceptual mappings from x-y positional data (captured from sensors) to meaningful, measurable constructs in physical classroom movements, grounded in the theory of Spatial Pedagogy. Great effort by the team!
I came across this review article on writing tools published in 2019, and wanted to make some quick notes to come back to in this post. I’m following the usual format I use for article notes which summarizes the gist of a paper with short descriptions under respective headers. I had a few thoughts on what I thought the paper missed, which I will also describe in this post.
To present a review of the technologies designed to support writing instruction in secondary and higher education.
Writing tools collected from two sources: 1) Systematic search in literature databases and search engines, 2) Responses from the online survey sent to research communities on writing instruction.
44 tools selected for fine-grained analysis.
Academic Vocabulary, Article Writing Tool, AWSuM, C-SAW (Computer-Supported Argumentative Writing), Calliope, Carnegie Mellon prose style tool, CohVis, Corpuscript, Correct English (Vantage Learning), Criterion, De-Jargonizer, Deutsch-uni online, DicSci (Dictionary of Verbs in Science), Editor (Serenity Software), escribo, Essay Jack, Essay Map, Gingko, Grammark, Klinkende Taal, Lärka, Marking Mate (standard version), My Access!, Open Essayist, Paper rater, PEG Writing, Rationale, RedacText, Research Writing Tutor, Right Writer, SWAN (Scientific Writing Assistant), Scribo – Research Question and Literature Search Tool, StyleWriter, Thesis Writer, Turnitin (Revision Assistant), White Smoke, Write&Improve, WriteCheck, Writefull
Tools intended solely for primary and secondary education, since the main focus of the paper was on higher education.
Tools with the sole focus on features like grammar, spelling, style, or plagiarism detection were excluded.
Technologies without an instructional focus, like pure online text editors and tools, platforms, or content management systems, were also excluded.
I have concerns about the way tools were included for this analysis, particularly because some key tools like AWA/AcaWriter, Writing Mentor, Essay Critic, and Grammarly were not considered. This is one of the main limitations I found in the study. It is not clear how the tools were selected in the systematic search, as there is no information about the databases and keywords used. How the tools focusing on higher education were picked is not explained either.
In this post, I’m presenting a summary of my review on tools for automatically analyzing rhetorical structures from academic writing.
The tools considered are designed to cater to different users and purposes. AWA and RWT aim to provide feedback for improving students’ academic writing. Mover and SAPIENTA, on the other hand, help researchers identify the structure of research articles. ‘Mover’ even allows users to give a second opinion on the classification of moves and add new training data (this can lead to a less accurate model if users with less expertise add potentially wrong training data). However, these tools have a common thread and fulfill the following criteria:
They look at scientific text – Full research articles, abstracts or introductions. Tools to automate argumentative zoning of other open text (Example) are not considered.
They automate the identification of rhetorical structures (zones, moves) in research articles (RA), with the sentence as the unit of analysis.
They are broadly based on the Argumentative Zoning (AZ) scheme by Simone Teufel or the CARS model by John Swales (either the original schema or a modified version of it).
They are available for download as a standalone Java application or can be accessed as a web service. A sample screenshot of tagged output from the SAPIENTA web service is shown below:
The general aim of these schemes is to be applicable to all academic writing, and this has been successfully tested on data from different disciplines. A comparison of the schemes used by the tools is shown in the table below:
Source & Description

- AWA Analytical scheme (modified from AZ for sentence-level parsing)
- Modified CARS model – three main moves and further steps:
  1. Establish a territory
     - Review previous research
  2. Establish a niche
     - Indicate a gap
     - Continue a tradition
  3. Occupy the niche
     - Indicate RA structure
- Modified CARS model – 3 moves, 17 steps:
  Move 1. Establishing a territory
  1. Claiming centrality
  2. Making topic generalizations
  3. Reviewing previous research
  Move 2. Identifying a niche
  4. Indicating a gap
  5. Highlighting a problem
  6. Raising general questions
  7. Proposing general hypotheses
  8. Presenting a justification
  Move 3. Addressing the niche
  9. Introducing present research descriptively
  10. Introducing present research purposefully
  11. Presenting research questions
  12. Presenting research hypotheses
  13. Clarifying definitions
  14. Summarizing methods
  15. Announcing principal outcomes
  16. Stating the value of the present research
  17. Outlining the structure of the paper
- Finer-grained AZ scheme – CoreSC scheme with 11 categories in the first layer
The tools are built on different data sets and use different methods to automate the analysis. Most of them use manually annotated data as the gold standard for training a model to automatically classify the categories. Details below:
Any research writing.
NLP rule-based – Xerox Incremental Parser (XIP) to annotate rhetorical functions in discourse.
Supervised learning – a Naïve Bayes classifier with data represented as a bag of clusters with location information.
Supervised learning – a Support Vector Machine (SVM) with n-dimensional vector representation and n-gram features.
Supervised learning – SVM with sentence aspect features, plus sequence labelling using Conditional Random Fields (CRF) for sentence dependencies.
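To make the supervised approaches above concrete, here is a minimal sketch of sentence-level move classification with an SVM over n-gram features, using scikit-learn. The tiny corpus and move labels are invented for illustration; none of this is taken from the tools' actual training data or pipelines.

```python
# Toy sentence-level move classifier: SVM over unigram/bigram features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented example sentences labelled with CARS-style moves.
sentences = [
    "Recent studies have shown growing interest in this area.",
    "However, little attention has been paid to sentence-level revision.",
    "In this paper, we present a tool for analysing revision graphs.",
    "Previous work has examined feedback quality in peer review.",
    "Yet no prior system detects localisation in peer comments.",
    "We therefore propose an automated coding approach.",
]
moves = [
    "establish_territory", "establish_niche", "occupy_niche",
    "establish_territory", "establish_niche", "occupy_niche",
]

# Unigram + bigram TF-IDF features feeding a linear SVM.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(sentences, moves)

print(model.predict(["Little is known about how students revise."]))
```

In the real tools, the training data are manually annotated corpora of research articles, and feature sets are richer (e.g. location information or sentence dependencies via CRFs), but the fit/predict structure is the same.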
The SciPo tool helps students write summaries and introductions for scientific texts in Portuguese.
Another tool, CARE, is a word concordancer used to search for words and moves in research abstracts – summary notes here.
Feedback on writing is seen to improve students’ writing, but the process is resource intensive.
Possible options to reduce the workload in giving feedback:
Direct feedback using technology assisted approaches (from grammar checks to complex computational linguistics).
Peer Review [Considered in this paper].
Good feedback from a group of peers is found to be as useful as the instructor’s feedback and even weaker writers are seen to provide useful feedback to stronger writers (See references in original paper).
When providing feedback on other students’ work, students become mindful of the mistakes and improve their own writing.
Some web-based peer review systems: PeerMark in turnitin.com, SWoRD (used in this study) and Calibrated Peer Review.
The challenge lies in the form of the feedback provided by peers – peer feedback might not be in a form that is useful for making revisions. Key features identified to aid revisions:
Localized information (Providing exact location details like paragraph, page numbers or quotations).
Concrete solution (suggesting a possible solution rather than just pointing out the problem).
Research problem: Studying peer review is hard with a large amount of feedback data.
Practical problem: Identifying useful feedback for students and possible interventions to help them provide good feedback.
To automatically process peer feedback and identify the presence or absence of the two key features (Providing feedback on feedback for students and automatically coding feedback for researchers).
Refer to the prototype shown in Figure 1 of the original paper, which prompts students to provide localized comments and explicit solutions.
Technical – How? (Details explained in study 1 and study 2)
Building a domain lexicon from common unigrams and bigrams in student papers.
Counting basic features like domain words, modals, negations, and overlap between the comment and the paper in each piece of feedback.
Building a classification model to identify the type of feedback (contains localization information or not / contains an explicit solution or not) – a classification task in machine learning.
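The lexicon-and-counting steps above can be sketched roughly as follows. The frequency threshold, tokenisation, and modal/negation word lists are my own illustrative assumptions, not the paper's:

```python
# Sketch: build a domain lexicon from frequent unigrams/bigrams in
# student papers, then count simple features in a feedback comment.
from collections import Counter
import re

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def build_lexicon(papers, min_count=2):
    # Terms (unigrams and bigrams) occurring at least min_count times
    # across the papers form the domain lexicon.
    counts = Counter()
    for paper in papers:
        toks = tokens(paper)
        counts.update(toks)                                      # unigrams
        counts.update(" ".join(p) for p in zip(toks, toks[1:]))  # bigrams
    return {term for term, c in counts.items() if c >= min_count}

def feedback_features(comment, paper, lexicon):
    toks = tokens(comment)
    paper_toks = set(tokens(paper))
    return {
        "domain_words": sum(t in lexicon for t in toks),
        "modals": sum(t in {"could", "should", "might", "would"} for t in toks),
        "negations": sum(t in {"not", "no", "never"} for t in toks),
        "overlap": sum(t in paper_toks for t in toks),
    }

papers = [
    "The essay argues that climate policy needs reform.",
    "Climate policy reform is debated in this essay.",
]
lex = build_lexicon(papers)
feats = feedback_features(
    "You should clarify the climate policy argument on page two.",
    papers[0], lex)
print(feats)
```

These counts then become the attribute vectors that the classifiers in Studies 1 and 2 consume.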
Method and Results:
Study 1 – Localization Detection:
Each feedback comment is represented as a vector of the four attributes below:
regularExpressionTag: Regular expressions to match phrases that use location in a comment (E.g. “on page 5”).
#domainWord: Counting the number of domain-related words in a comment (based on the domain lexicon gathered from frequent terms in student papers).
sub-domain-obj, deDeterminer: Extracting syntactic attributes (sub-domain-obj) and count of words like “this, that, these, those” which are demonstrative determiners.
windowSize, #overlaps: Extracting the length of matching words from the document to identify quotes (windowSize) and words overlapped.
Weka models were used to automatically code localization information. The decision tree model had the best accuracy (77%, recall 82%, precision 73%) in predicting whether a piece of feedback was localized. For the rules that made up the decision tree, see Figure 2 of the original paper.
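As an illustration of Study 1's setup, here is a toy version of the localization attributes assembled into a vector and fed to a decision tree. The regular expression, feature values, and training examples are invented stand-ins, not the paper's actual pipeline (which used Weka):

```python
# Toy localization detector: attribute vector + decision tree.
import re
from sklearn.tree import DecisionTreeClassifier

# Hypothetical location-phrase pattern (regularExpressionTag).
LOCATION = re.compile(
    r"\b(on page \d+|paragraph \d+|in the (intro|conclusion))\b", re.I)
DETERMINERS = {"this", "that", "these", "those"}

def localization_vector(comment, n_domain_words, window_size, n_overlaps):
    words = comment.lower().split()
    return [
        1 if LOCATION.search(comment) else 0,  # regularExpressionTag
        n_domain_words,                        # #domainWord
        sum(w in DETERMINERS for w in words),  # deDeterminer count
        window_size,                           # windowSize (quote match)
        n_overlaps,                            # #overlaps
    ]

# Invented training data: (vector, is_localized).
X = [
    localization_vector("Fix the typo on page 5.", 1, 0, 2),
    localization_vector("Paragraph 2 repeats this claim.", 2, 3, 4),
    localization_vector("Nice work overall.", 0, 0, 2),
    localization_vector("The argument is weak.", 1, 0, 1),
]
y = [1, 1, 0, 0]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([localization_vector("Clarify the claim on page 3.", 1, 0, 1)]))
```

The learned tree is the rule set analogous to the one reported in Figure 2 of the original paper.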
Study 2 – Solution Detection:
Each feedback comment is represented as a vector using the three types of attributes below (refer to Table 2 in the original paper for details).
Simple features like word count and the order of comment in overall feedback.
Essay attributes to capture the relationship between the comment and the essay and domain topics.
Keyword attributes semi-automatically learned based on semantic and syntactic functions.
A Logistic Regression model detects the presence/absence of explicit solutions (accuracy 83%, recall 91%, precision 83%). Domain-topic words followed by suggestions were highly associated with the prediction. Detailed coefficients of the attributes predicting the presence of a solution are given in Table 3 of the original paper.
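A rough sketch of Study 2's solution detection with logistic regression. The attributes here (word count, suggestion keywords, a domain-topic count) are simplified stand-ins for the paper's richer simple/essay/keyword feature set:

```python
# Toy solution detector: logistic regression over simple attributes.
from sklearn.linear_model import LogisticRegression

# Hypothetical suggestion-keyword list, not the paper's learned keywords.
SUGGESTION_WORDS = {"should", "could", "try", "consider", "add", "replace"}

def solution_vector(comment, n_domain_topic_words):
    words = comment.lower().split()
    return [
        len(words),                                 # word count
        sum(w in SUGGESTION_WORDS for w in words),  # suggestion keywords
        n_domain_topic_words,                       # domain-topic words
    ]

# Invented training data: (vector, contains_explicit_solution).
X = [
    solution_vector("You should add a counterargument here.", 1),
    solution_vector("Consider citing a recent source.", 1),
    solution_vector("This paragraph is confusing.", 0),
    solution_vector("Good introduction.", 0),
]
y = [1, 1, 0, 0]

clf = LogisticRegression().fit(X, y)
print(clf.predict([solution_vector("You could replace the vague claim with data.", 1)]))
```

The fitted coefficients play the role of Table 3 in the original paper: they show which attributes push a comment towards being coded as solution-providing.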
Study 3: Can Research Rely on Automatic Coding?
Comparing automatically coded data to hand coded data to see if the accuracy is sufficiently high for practical implementation.
Helpfulness ratings by peers and two experts (a content expert and a writing expert) on peer comments at the review level.
To account for expert ratings:
Regression analysis using feedback type proportions (praise only comments, summary only comments, problem/solution containing comments), proportion localized critical comments, and proportion solution providing comments as predictors.
10-fold cross-validation – SVM was the best fit.
To check whether the same models are built from machine-coded and hand-coded data – 10 stepwise regressions. Refer to Table 4 in the original paper for the feedback features commonly included in the models by the different raters – different features were helpful for different raters.
The overall regression model is similar to that built from hand-coded localization data (most of the positivity, solution, and localization results were similar between hand coding and automatic coding).
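The cross-validation step above can be sketched as follows: an SVM regressor predicting helpfulness ratings from review-level feedback-type proportions, scored over 10 folds. The proportions and ratings below are synthetic, generated purely to make the example run:

```python
# Sketch: 10-fold CV of an SVM regressor on review-level proportions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 40  # synthetic reviews
# Columns (stand-ins): proportions of praise-only, summary-only,
# problem/solution, localized critical, and solution-providing comments.
X = rng.random((n, 5))
# Synthetic helpfulness rating, loosely driven by the localization and
# solution proportions plus noise.
y = 2 + 3 * X[:, 3] + 4 * X[:, 4] + rng.normal(0, 0.1, n)

scores = cross_val_score(SVR(), X, y, cv=10, scoring="r2")
print(scores.mean())
```

In the study, the equivalent comparison was run twice, once with hand-coded and once with machine-coded proportions, to check whether the automatic coding supports the same conclusions.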
Predictive models for detecting localization and solution information are statistical tools and do not provide deep content insights.
To be integrated into SWoRD to provide real time feedback on comments.
Technical note: comments were already pre-processed – segmented into idea units by hand, and split by hand into comment types (summary, praise, criticism).