Explainable AI Chatbots Towards XAI ChatGPT: A Review
Heliyon,
Journal year: 2025, Issue 11(2), pp. e42077 - e42077
Published: Jan. 1, 2025
Advances in artificial intelligence (AI) have had a major impact on natural language processing (NLP), even more so with the emergence of large-scale models like ChatGPT. This paper aims to provide a critical review of explainable AI (XAI) methodologies for chatbots, with a particular focus on ChatGPT. Its main objectives are to investigate applied methods that improve explainability, identify challenges and limitations within them, and explore future research directions. Such goals emphasize the need for transparency and interpretability in these systems to build trust with users and allow for accountability. While interdisciplinary methods, such as hybrid approaches combining knowledge graphs with ChatGPT, enhance explainability, they also highlight industry needs for user-centred design. This is followed by a discussion of the balance between explainability and performance, then the role of human judgement, and finally verifiable AI. These are avenues through which the insights can be used to guide the development of transparent, reliable and efficient chatbots.
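The hybrid approach this review mentions, pairing a language model with a knowledge graph, can be illustrated with a minimal sketch. Nothing below is from the paper; the toy graph, the `explainable_answer` function, and its template are illustrative assumptions showing only the core idea: an answer is returned together with the triples that ground it, so the response carries an inspectable provenance trail.

```python
# A toy knowledge graph as (subject, relation, object) triples.
KG = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("ibuprofen", "treats", "inflammation"),
]

def explainable_answer(question: str) -> dict:
    """Retrieve triples whose subject appears in the question and
    return them as 'evidence' alongside a templated answer."""
    mentioned = [t for t in KG if t[0] in question.lower()]
    if not mentioned:
        return {"answer": "No grounded answer available.", "evidence": []}
    facts = "; ".join(f"{s} {r.replace('_', ' ')} {o}" for s, r, o in mentioned)
    return {
        "answer": f"Based on the knowledge graph: {facts}.",
        # Surfaced so users can audit which facts the answer rests on.
        "evidence": mentioned,
    }

result = explainable_answer("What should I know about aspirin?")
print(result["answer"])
```

In a real system the templated answer would come from the LLM, but the design choice is the same: the evidence list travels with the answer rather than being discarded after generation.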
Language: English
Explainable artificial intelligence models in intrusion detection systems
Engineering Applications of Artificial Intelligence,
Journal year: 2025, Issue 144, pp. 110145 - 110145
Published: Jan. 31, 2025
Designing High-Impact Experiments for Human–Autonomy / AI Teaming
Journal of Cognitive Engineering and Decision Making,
Journal year: 2025, Issue: unknown
Published: March 18, 2025
The potential to create autonomous teammates that work alongside humans has increased with continued advancements in AI and technology. Research on human–AI teams and human–autonomy teams (HATs) has seen an influx of new and diverse researchers from human factors, computing, and teamwork, yielding one of the most interdisciplinary domains of modern research. However, the HAT domain's interdisciplinary nature can make the design of research, especially experiments, more complex, and researchers may not fully grasp the numerous decisions required to perform high-impact experiments. To aid in designing such experiments, this article itemizes four initial decision points needed to form a high-impact experiment: deciding on the research question, team composition, the environment, and data collection. For each decision point, the article discusses these decisions in practice, providing related works to guide researchers toward the different options available to them. These are then synthesized through actionable recommendations for future researchers. This contribution will increase the impact and knowledge of HAT experiments.
Language: English
Integrating Large Language Models into Medication Management in Remote Healthcare: Current Applications, Challenges, and Future Prospects
Systems,
Journal year: 2025, Issue 13(4), pp. 281 - 281
Published: April 10, 2025
The integration of large language models (LLMs) into remote healthcare has the potential to revolutionize medication management by enhancing communication, improving adherence, and supporting clinical decision-making. This study aims to explore the role of LLMs in medication management, focusing on their impact. The paper comprehensively reviews existing literature, medical LLM use cases, and commercial applications in healthcare. It also addresses technical, ethical, and regulatory challenges related to the use of artificial intelligence (AI) in this context. The review methodology includes analyzing studies and applications, comparing their impact, and identifying gaps for future research and development. The review reveals that LLMs have shown significant promise in improving communication between patients and providers, adherence monitoring, and decision-making in medication management. Compared to traditional reminder systems, AI-based systems achieved a 14% higher adherence rate in pilot studies. However, there are notable challenges, including data privacy concerns, system integration issues, and ethical dilemmas in AI-driven decisions, such as bias and transparency. Overall, the study offers a comprehensive analysis of both the transformative potential and the key challenges to be addressed. It provides insights for policymakers and researchers optimizing the use of LLMs in remote healthcare.
Language: English
Beyond principlism: practical strategies for ethical AI use in research practices
AI and Ethics,
Journal year: 2024, Issue: unknown
Published: Oct. 8, 2024
Language: English
Friends or Foes? Exploring the Framing of Artificial Intelligence Innovations in Africa-Focused Journalism
Journalism and Media,
Journal year: 2024, Issue 5(4), pp. 1749 - 1770
Published: Nov. 18, 2024
The rise and widespread use of generative AI technologies, including ChatGPT, Claude, Synthesia, DALL-E, Gemini, Meta AI, and others, have raised fresh concerns in journalism practice. While the development represents a source of hope and optimism for some practitioners, journalists, and editors, others express a cautious outlook given the possibilities of its misuse. By leveraging the Google News aggregator service, this research conducts a content and thematic analysis of Africa-focused journalistic articles that touch on the impacts of artificial intelligence technology on journalism. Findings indicate that, while coverage is predominantly positive, the tone reflects a news industry cautiously navigating the integration of AI. Ethical concerns regarding AI were frequently highlighted, which indicates significant apprehension on the part of news outlets. A close assessment of the views presented in a smaller portion of the reviewed articles revealed a sense of unease around the conversation about power in the hands of tech giants. The impact of AI on the financial stability of media outlets was framed as minimal at present, suggesting a neutral, wait-and-see position. In our sample, quoted sources, professionals, and experts emerge as the most vocal voices shaping the narrative around AI's practical applications and technical capabilities on the continent.
Language: English
Responsible integration of AI in academic research: detection, attribution, and documentation
SSRN Electronic Journal,
Journal year: 2023, Issue: unknown
Published: Jan. 1, 2023
The advent of advanced generative AI marks a pivotal moment in psychological science and academia at large. This commentary advocates for leading organizations, such as the American Psychological Association (APA) and the Association for Psychological Science (APS), to spearhead comprehensive ethical guidelines for AI use in research and publishing. We argue that AI should be permitted, and indeed encouraged, to augment human knowledge generation and dissemination, serving as a scholarly aid. Properly regulated, AI can enhance productivity, creativity, and discovery without compromising rigor or integrity. However, key issues of attribution, transparency, reproducibility, and preventing misuse necessitate clear standards and oversight. We examine appropriate attribution of AI contributions in authorship, effective documentation practices to ensure transparency, and safeguards against potential misuse. This call for nuanced guidelines, not a blanket prohibition, aims to responsibly integrate AI into research, and puts forth specific recommendations for transparency and reproducibility.
Language: English
Ethical Risks and Future Direction in Building Trust for Large Language Models Application under the EU AI Act
Published: Dec. 2, 2024
LLMs are being used in an increasing number of AI applications, raising important ethical considerations for which comprehensive guidelines must be framed. LLMs carry power and risks in equal measure: bias, privacy breaches, lack of transparency, and the risk of model collapse, all eroding the trustworthiness of LLM applications. This research examined pertinent provisions for building trust in LLM applications under the EU AI Act, which came into force on August 2, 2024. Thus, by concentrating on compliance aspects related to the Act, it encompassed all relevant integration of ethical values with legal norms. The research also offered practical future directions for LLM application in organisations, emphasising the need to perform audits, engage stakeholders, monitor continuously, and enhance AI literacy. The goal is to ensure that LLMs are deployed responsibly, transparently, and in a manner that upholds public trust, thus contributing to enhancing the overall trustworthiness of LLM applications.
Language: English