Gen AI, History and Historians

11 July 2024 | AI, History and Historians | Guest Posts | Teaching Portal: For Teachers

 

In this post Dr Adam Budd, Secretary for Education on the RHS Council, introduces our panel discussion on ‘AI, History and Historians’, which took place on Wednesday 17 July.

Adam’s post, written before the event was held, discusses the opportunities this event provides for historians and students of history to learn about the meaning of ‘artificial intelligence’ in our academic community.

The recording of ‘AI, History and Historians’ is now available to watch or listen to again.

 

 

It’s just under one week before we convene our online roundtable on ‘AI, History and Historians’, on 17 July. More than 750 people have signed up; if you haven’t done so, please click here to register. Obviously, this is a hot topic. Scholars, teachers, and students of history want to learn, to listen, and to ask questions about the meaning of ‘artificial intelligence’ for our profession.

You may be asking: why use quotation marks? Maybe because machines can access data but can’t harbour knowledge. They can process data, but can they reflect on its meaning? Do we normally associate ‘intelligence’ with requiring a prompt? What is ‘artificial’ (or not ‘natural’) about AI, its capabilities, and our relationship to it?

Many more questions have guided our plans for this event, as well as our selection of panellists. These include Helen Hastie, who is an expert on robot-human interaction and focusses ‘on trust, transparency, and cognitive load’. How and why does this technology shape our expectations of what machines can do? To what extent does our engagement with AI lead us to adopt new ways to think about trust, authority, and our own sense of autonomy? Matthew Jones has written extensively on the history of AI and on associated technologies, alive to the ways in which ‘data’ seems to anticipate certain kinds of thinking, which frequently entail users adopting rather conventional values—including values users ordinarily reject. Anna-Maria Sichani will help us think about the ethical considerations that should govern our adoption of AI.

This roundtable will focus on the meaning of Generative AI for historical teaching and learning; further events may consider AI and historical research. At present, very few UK universities provide guidance to instructors on the implications of AI for teaching and assessment, apart from urging students not to use it and explaining the grounds on which misconduct officers may penalise students who do. My own institution hasn’t updated its guidance to staff or students on how to assess student work that uses GenAI in nearly 18 months; the Quality Assurance Agency (QAA) hasn’t done so in over a year.[1] My students are advised that ‘AI offers a number of benefits’, but the guidance lists only its ‘dangers’, concluding that ‘all work submitted for assessment must be your own work’.

What do these statements mean for students who use the technology? New ways of working require new ways of thinking. Digital media has changed the way we read; it has also changed the way we relate to its outputs (what we used to call ‘texts’). We have been using the phrase ‘born digital’ since 1998: are new ways of reading really new, or are the regulations simply out of date?

Our regulatory language suggests that originality lies in a concept and in its expression, hence its detectability. But since students contribute inputs, sometimes without their knowledge, let alone their consent, the outputs they ‘use’ for assessment may include work of their own making, a key criterion of originality.[2] So where does this leave us on the matter of teaching students about ‘transparency’ when using AI, which the QAA asks us to do?[3]

The algorithms that define AI’s calculations seek to ‘optimise’ their outputs on the basis of human inputs, drawn from data across the digital platforms that support machine/human interactions. These inputs may include physiological data, literary tastes, the amount of time we spend on webpages, and what we are willing to spend on certain media.[4] We may not always know when we are co-creating the digital material that we consume. Regulating, let alone defining, academic misconduct requires new ways to think about digital engagement. We may need to think the once unthinkable: should we reconsider ‘originality’ as a criterion for an excellent grade? As historians we mean to study the past, not the present. But it has been a truism, ever since Ranke first invited students around a table and called it a ‘seminar’, that historians have asked one another to reflect critically on their task in the present.[5]

These are urgent concerns, because students make regular and increasing use of the technology, both for their learning and for their assessments. In February of this year, the Higher Education Policy Institute (HEPI) reported that 53% of university students in the UK use GenAI ‘to help them with assessments’; perhaps even more worryingly, 36% of all students use GenAI ‘as a private tutor to explain concepts’, despite the fact that nearly half say they ‘do not know’ whether the technology ‘produces made-up facts, statistics, or citations’.[6] We use the misleading term ‘hallucinations’ to describe the mechanism that produces erroneous outputs, but of course no human mind makes those errors; endowing machines with an imagination may express our own failure to consider this technology with moral and technical seriousness.

Historians may conclude that the best way to proceed is to go back to basics: before submission, students should examine their work to ensure that AI really has been a help and not a hindrance. But this too poses a problem: do we even have effective terms for errors made by AI rather than by a person, so that we can describe what this critical process should entail? We have many terms for the human errors we detect in written submissions (from plagiarism to citation oversights to typos) but none for machines. Moreover, when we use an online platform and its search tool to access scholarly material in order to check for errors, we find ourselves relying on one kind of AI output to check another. To what extent is such a review exercise ‘original’, given its dependence on machine learning and machine outputs?

Was it always thus? When did seeking information from a repository, from a physical or virtual bookshelf, not entail technological mediation? The great lexicographer and critic Samuel Johnson, at work on his famous Dictionary of the English Language (1755), relied on a system of storing quotations that imposed its own shape on the social ordering of language that this book intended. As for originality and the morality that word implies, even this apparently premodern project, which has always been associated with Johnson and his cultural authority, wasn’t written by the great man alone. This vast work was created by a changing team over a period of some nine years, and in a collective revision process that extended for twenty years more: most of Johnson’s collaborators and their methods remain unknown.[7] I can envision a clever student asking me: why can’t we, who shape the Large Language Models that AI draws upon to produce its outputs, submit this work as our own? Surely we can claim some originality?

I very much look forward to our event next week. In the meantime, please feel free to respond to this post with your own views.

 

Footnotes

[1] See “Guidance for students on the use of Generative AI,” University of Edinburgh, March 2023. This guidance anticipates the position outlined by the QAA in May 2023: both documents insist on the importance of detecting AI-supported misconduct, and assume a conventional understanding of “transparency.”

[2] On the importance of discriminating and generative elements of machine learning and output, see S. Vallor, The AI Mirror (Oxford: Oxford UP, 2024). For an essential discussion of the implications for historical representations of race and its consequences for digital research and social justice, see J. M. Johnson, “Markup Bodies,” Social Text 137 (Dec. 2018): 57-79.

[3] See “Maintaining Quality and Standards in the ChatGPT era,” QAA, 8 May 2023.

[4] That said, processing of this data leads to measures made to fit racial, gendered, and other ideologically inflected templates. See the classic essays in L. Gitelman, ed., “Raw Data” is an Oxymoron (Cambridge, MA: MIT P, 2013); J. Drucker, “Humanities Approaches to Graphical Display,” Digital Humanities Quarterly 5 (2011); N. Barrowman, “Why Data is Never Raw,” The New Atlantis 56 (Summer/Fall 2018): 129-35.

[5] Georg Iggers has long argued that Ranke’s innovations lay not in his adoption of critical method (taken from philology) but in his understanding of historical scholarship within a philosophical context that demanded reflection on the spiritual nature of the causal nexus. See G. Iggers, ed., Introduction to Theory and Practice of History, by L. von Ranke (London: Routledge, 2011), xii-xiv.

[6] J. Freeman, “Provide or punish? Students’ views on generative AI in higher education,” HEPI Policy Note 51, Feb. 2024.

[7] See J. Sledd and G. Kolb, Dr Johnson’s Dictionary: Essays in the Biography of a Book (Chicago: U of Chicago P, 1955); A. Reddick, The Making of Johnson’s Dictionary, 1746-73 (Cambridge: Cambridge UP, 1990).

 


 

About the author

 

Dr Adam Budd is Senior Lecturer in Cultural History and Director of Postgraduate Taught Programmes in the School of History, Classics and Archaeology at the University of Edinburgh. He is also Secretary for Education and Chair of the Education Policy Committee for the Royal Historical Society.

As Secretary for Education, Adam is responsible for the Society’s policy on higher education and support for teaching. He co-authored the RHS Report on Race, Ethnicity and Equality (2018) and has been involved in developing merit-based funding initiatives for early-career researchers, in addition to chairing RHS scholarship awards and research prizes. He is active with the Higher Education Academy and has led numerous Widening Participation initiatives.

 


 

Watch the event (held on 17 July 2024) discussed in this post

Video and audio recordings of the Royal Historical Society’s 17 July event — ‘AI, History and Historians’ — are now available.

This discussion brought together a panel of experts to consider the opportunities and challenges of new AI technology in the field of History.

 

 


 

With contributions from our panellists

  • Helen Hastie (Professor of Human-Robot Interaction and Head of the School of Informatics at the University of Edinburgh)
  • Matthew L. Jones (Smith Family Professor of History at Princeton University and co-author of How Data Happened (2023), a history of the science, politics, and power of data, statistics, and machine learning from the 1800s to the present)
  • Anna-Maria Sichani (Postdoctoral Research Associate at the Digital Humanities Research Hub, School of Advanced Study, University of London)
  • Jane Winters (Professor of Digital Humanities at the School of Advanced Study, University of London, and Vice-President, Publications, for the Royal Historical Society, chair)
