Enhancing Explainability in Large Language Models Through Belief Change: A Simulation-Based Approach
Lucas Lisegow, Ethan Barnes, Ava Pennington et al.
Authorea (Authorea), Journal Year: 2024, Volume and Issue: unknown
Published: Aug. 20, 2024
Artificial intelligence systems, particularly those deployed in high-stakes environments, require a high degree of transparency and explainability to ensure that their decisions can be understood and trusted. Traditional approaches to enhancing explainability often rely on post-hoc methods that fail to fully capture the internal reasoning processes of complex models. In this research, a novel integration of Belief Change Theory was employed to address this challenge, offering a systematic framework for belief revision that directly influences the decision-making process of the model. The proposed methodology was implemented in a Llama model, which was modified to incorporate mechanisms capable of handling contradictory information and generating coherent explanations. Through a series of simulations, the model demonstrated significant improvements in consistency, accuracy, and overall explainability, outperforming traditional models that lack integrated belief management systems. The findings highlight the potential of this approach not only to enhance the explainability of AI systems but also to provide a foundation for more dynamic and interactive forms of interpretability. This research opens new avenues for the development of AI systems that are both powerful and accountable, paving the way for their adoption in critical decision-making contexts.
Language: English
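The paper's implementation is not published; as a rough, hypothetical sketch of the belief-revision idea the abstract describes, the Python below follows the classic AGM pattern (contract beliefs that contradict new evidence, then expand with it) while logging each step as an explanation trace. The `BeliefBase` class, the (atom, value) belief encoding, and the trace format are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only; not the paper's implementation.
# AGM-style revision: to revise by a new proposition, first contract any
# belief that directly contradicts it, then expand with the new belief.
# Beliefs are simplified to (atom, truth_value) pairs, and every step is
# logged so the base can "explain" why it changed its mind.

from dataclasses import dataclass, field


@dataclass
class BeliefBase:
    beliefs: dict[str, bool] = field(default_factory=dict)
    trace: list[str] = field(default_factory=list)  # human-readable explanation log

    def revise(self, atom: str, value: bool) -> None:
        """Revise the base by (atom, value), logging each step."""
        if atom in self.beliefs and self.beliefs[atom] != value:
            # Contraction: retract the directly contradictory belief first.
            self.trace.append(
                f"retracted {atom}={self.beliefs[atom]} (contradicts new evidence)"
            )
            del self.beliefs[atom]
        # Expansion: adopt the new belief.
        self.beliefs[atom] = value
        self.trace.append(f"adopted {atom}={value}")

    def explain(self) -> str:
        return "; ".join(self.trace)


if __name__ == "__main__":
    base = BeliefBase()
    base.revise("road_is_wet", True)
    base.revise("road_is_wet", False)  # contradictory evidence arrives
    print(base.explain())
    # -> retracted road_is_wet=True (contradicts new evidence); adopted road_is_wet=False
```

Logging every retraction and adoption is what would let such a system answer "why did you change your answer?", which is the kind of dynamic, interactive interpretability the abstract gestures at.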
Automated Early Detection of Misinformation on Social Media: A Large Language Model Approach with High-Volume Facebook Data
Noel Ashbourne, James R. Abernathy, Alexander Beauchamp et al.
Published: Aug. 13, 2024
Social media platforms have become a primary conduit for the rapid dissemination of information, where the unchecked spread of misinformation poses a significant threat to public discourse and societal well-being. Introducing an innovative approach that leverages the advanced capabilities of a fine-tuned ChatGPT model, this research addresses the urgent need for scalable and accurate methods to detect misinformation in real time across vast digital landscapes. The model was meticulously evaluated through a series of experiments that demonstrated its superior performance in identifying misleading content, particularly when compared to traditional machine learning classifiers and earlier versions of language models. The integration of comprehensive preprocessing techniques, alongside refined confidence thresholds and post-processing rules, enhanced the model's ability to process complex and diverse datasets, resulting in highly reliable predictions. The findings underscore the potential of large language models to significantly mitigate misinformation, offering a solution capable of operating effectively in the fast-paced environment of social media. By advancing the field of misinformation detection, this study provides critical insights and tools that can be applied in both research and the practical domain of content moderation, contributing to a more informed and resilient society.
Language: English
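The pipeline itself is not public; the Python sketch below only illustrates the generic pattern the abstract names: preprocessing, a classifier score, a confidence threshold, and post-processing rules. The threshold value, the allow-list rule, and the `score_misinformation` keyword stub (standing in for the fine-tuned ChatGPT classifier) are all assumptions for illustration.

```python
# Illustrative sketch only; not the paper's pipeline.
# Pattern: preprocess a post, score it, then gate the prediction with a
# confidence threshold and simple post-processing rules before labeling.

import re

CONFIDENCE_THRESHOLD = 0.85  # assumed value; tuned on validation data in practice
TRUSTED_DOMAINS = ("who.int", "cdc.gov")  # assumed allow-list rule


def preprocess(text: str) -> str:
    """Normalize whitespace and replace URLs before scoring."""
    text = re.sub(r"https?://\S+", "<URL>", text)
    return re.sub(r"\s+", " ", text).strip()


def score_misinformation(text: str) -> float:
    """Stand-in for the fine-tuned LLM classifier; returns P(misinformation).
    A real system would call the model; this keyword heuristic just keeps
    the example runnable."""
    suspicious = ("miracle cure", "they don't want you to know", "100% proven")
    hits = sum(phrase in text.lower() for phrase in suspicious)
    return min(1.0, 0.5 * hits)


def label_post(raw_text: str, source_domain: str | None = None) -> str:
    text = preprocess(raw_text)
    # Post-processing rule: never flag content from an allow-listed source.
    if source_domain in TRUSTED_DOMAINS:
        return "not_flagged"
    p = score_misinformation(text)
    # Confidence threshold: only high-confidence predictions are auto-flagged;
    # mid-confidence scores are routed to human review instead.
    if p >= CONFIDENCE_THRESHOLD:
        return "flagged"
    return "needs_review" if p >= 0.5 else "not_flagged"


if __name__ == "__main__":
    post = "Miracle cure they don't want you to know about! https://example.com"
    print(label_post(post))  # flagged
```

Routing mid-confidence scores to human review rather than auto-flagging is a common way to trade throughput for precision in content moderation, which fits the reliability emphasis in the abstract.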