In an age where generative AI is becoming increasingly sophisticated, the potential for fraud and misinformation has reached unprecedented levels. This keynote will begin with a personal case study exploring how the speaker became the target of a generative AI scam, highlighting the convincing nature of these deceptions. Building upon this experience, the talk will delve into the broader implications of generative AI for the spread of misinformation and the erosion of trust in online spaces. As the line between truth and falsehood becomes increasingly blurred, the role of education in equipping individuals with the critical thinking skills needed to navigate this complex landscape becomes paramount. The keynote will argue that empowering postgraduate social science researchers with the tools to critically engage with generative AI is not only essential for their own work but also for the wider fight against misinformation.
7. Hey wait, I know what a DMCA notice looks like 🤔
• Their e-mail wasn't written as a legal document
• DMCA notices go to the web host, not the content producer
• The image in question was from a royalty-free image site
• They're asking me to insert a URL into my post
• The legal section of the letter is nonsense
• The firm and their
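The red flags above amount to a simple checklist. Purely for illustration, they could even be encoded as a toy heuristic screen; every phrase and category name below is a hypothetical assumption, not a real scam detector:

```python
# Toy heuristic screen for fake DMCA takedown e-mails.
# The categories mirror the red-flag checklist above; the trigger
# phrases are illustrative assumptions, not a validated detector.

RED_FLAGS = {
    "asks_to_insert_link": ["insert a link", "add the following url"],
    "sent_to_author_not_host": ["dear blogger", "dear content producer"],
    "vague_legal_language": ["as per dmca law", "legal consequences will follow"],
}

def scam_red_flags(email_text: str) -> list[str]:
    """Return the red-flag categories triggered by an e-mail's text."""
    text = email_text.lower()
    return [
        flag
        for flag, phrases in RED_FLAGS.items()
        if any(phrase in text for phrase in phrases)
    ]

email = (
    "Dear blogger, as per DMCA law you must insert a link "
    "crediting our client, or legal consequences will follow."
)
print(scam_red_flags(email))
```

Obviously no phrase list will catch every scam; the point of the sketch is that the same critical checks a human applies can be made explicit and systematic.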
14. What's going on here?
• Blackhat SEO involves using unethical and/or illegal means to increase the search engine visibility of clients.
• GAI makes it possible to produce a website and supporting materials to legitimize these speculative claims in minutes.
• The costs of undertaking this sort of activity have effectively shrunk rapidly: time, expertise, money, etc.
• This is disinformation: strategic deception intended to promote political or commercial goals.
• As opposed to misinformation: incorrect or misleading information spread without deliberate intent to deceive.
16. Hamilton, Pierce & Lee: Where Tradition Meets Innovation
Welcome to Hamilton, Pierce & Lee, a premier law firm nestled in the heart of Boston, Massachusetts. With a storied history dating back over a century, our firm has established itself as a beacon of legal excellence, tradition, and innovation. Our team of distinguished attorneys is dedicated to providing top-tier legal services across a broad spectrum of practice areas.
17. Even if the safeguards were foolproof
• There are still open-source GAI tools which can be run locally with minimal expertise
• In fact, ChatGPT can give you instructions and even write you bespoke Python code to support you in doing this
• The means of disinformation are now freely available to anyone willing to spend even a small amount of time learning to use them
22. The next phase of post-truth society
• The line between truth and falsehood is becoming increasingly blurred
• Fake evidence can be made to seem real. Real evidence can be explained away as fake.
• This leads to a breakdown of trust in representations. How do we know what to believe?
• Social media has already led to a post-truth environment in which there's no longer a consensus on factual matters
• The rapid uptake of synthetic media (text, audio, video, images) is likely to radically intensify those problems in the digital public sphere
23. Higher education has a crucial role to play
• There are forms of digital literacy which protect against these challenges: understanding generative AI, recognising red flags, evaluating sources and analysing claims
• Higher education generates expertise (through research) and experts (through teaching), making it crucial in a post-truth world (Harrison and Luckett 2019)
• This is why academics need to understand how to navigate the emerging landscape of generative AI
24. Unfortunately generative AI is already contributing to declining standards in academic publishing...
• There are existing problems with a 'publish or perish' culture and the rise of 'predatory publishers' which target anxious academics
• But when academics face sustained pressures on their time and energy, it can be tempting to cut corners with generative AI
25. The problem is fundamentally a matter of trust
• At the heart of post-truth is a breakdown of trust in expert knowledge. Experts are seen as partial and self-interested.
• It's impractical to suggest we ban generative AI use by academics (unenforceable and self-defeating), which means we have to regulate its use.
• By developing shared standards for reflexive and accountable use of generative AI we can retain trust in the knowledge we produce
• This helps fortify trust in wider society. Responsible practice in higher education can't solve the problem alone, but it is a necessary foundation.
26. What does this mean in practice?
• Academics should be trained in responsible use of generative AI:
  o Using it as a thinking tool and administrative assistant rather than a system which will do writing and analysis for you
  o Reporting on your use of generative AI in publications and research projects
  o Ensuring AI literacy so the fundamental limitations of this technology are widely understood, e.g. it will never be fully reliable, particularly for factual information
  o Developing communities of practice in which colleagues can share and reflect on their use of these systems
27. We need to be ready for continual development of the technology