Dialogs generated by chatbots may contain unethical and offensive language that can negatively affect users, the service, and society. Existing methods for automatically detecting offensive language are not effective on chat data, which is short and multi-turn and hence requires understanding the subtle context behind the language. We introduce a new dataset built from real human-chatbot conversations with context-aware annotations that identify kinds of offensiveness visible only in a certain context. We also propose a neural network model, CALIOPER (Context-Aware modeL Identifying Offensive language using Pre-trained Encoder and Retrieval), which uses a pre-trained encoder and an attention mechanism to incorporate previous messages and retrieve relevant information for detecting implicit offensiveness. Experimental results show that the model performs well on dialog data, particularly on context-dependent cases. This work contributes to making the chatbot ecosystem safer by advancing techniques to detect offensive language in chat data. (Disclaimer: this article contains profanity due to the study topic, which we replace with * marks.)
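The abstract above describes a context-aware classifier that pairs a pre-trained encoder with an attention mechanism over previous dialog turns. The following is a minimal, hypothetical sketch of that general architecture, not the authors' CALIOPER implementation; the encoder name, head count, and feature layout are assumptions.

```python
# Minimal sketch (not the authors' CALIOPER code): score a chatbot reply for
# offensiveness while attending over the previous turns of the dialog.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel  # assumed: any BERT-style encoder

class ContextAwareOffensivenessClassifier(nn.Module):
    def __init__(self, encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(encoder_name)
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Attention from the current message over encodings of previous turns.
        self.context_attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, 2)  # offensive vs. not offensive

    def _encode(self, texts):
        batch = self.tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        # Use the [CLS] vector as a sentence-level representation.
        return self.encoder(**batch).last_hidden_state[:, 0, :]

    def forward(self, message: str, previous_turns: list):
        msg_vec = self._encode([message])        # (1, hidden)
        ctx_vecs = self._encode(previous_turns)  # (n_turns, hidden)
        # Retrieve context relevant to the current message via attention.
        attended, _ = self.context_attn(
            msg_vec.unsqueeze(0), ctx_vecs.unsqueeze(0), ctx_vecs.unsqueeze(0)
        )                                        # (1, 1, hidden)
        features = torch.cat([msg_vec, attended.squeeze(0)], dim=-1)
        return self.classifier(features)         # (1, 2) logits
```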
Technological Forecasting and Social Change, Journal Year: 2023, Volume and Issue: 199, P. 123076 - 123076, Published: Dec. 14, 2023
With the continuous intervention of AI tools in the education sector, new research is required to evaluate the viability and feasibility of extant platforms and to inform various pedagogical methods of instruction. The current manuscript explores the cumulative published literature to date in order to identify the key challenges that influence the implications of adopting AI models in the Education Sector. The researchers present works both in favour of and against AI-based applications within the Academic milieu. A total of 69 articles from a 618-article population were selected from diverse academic journals published between 2018 and 2023. After a careful review of the articles, the study presents a classification structure based on five distinct dimensions: user, operational, environmental, technological, and ethical challenges. It recommends the use of ChatGPT as a complementary teaching-learning aid, including the need to afford customized and optimized versions of the tool for the teaching fraternity. The study addresses an important knowledge gap concerning how AI can enhance educational settings. For instance, it discusses, inter alia, a range of AI-related effects on learning, such as creative prompts, training datasets and genres, incorporation of human input, data confidentiality, and elimination of bias. It concludes by recommending strategic solutions to the emerging challenges identified, while summarizing ways to encourage wider adoption in other sectors. The insights presented in this work can act as a reference for policymakers, teachers, technology experts, and other stakeholders, and facilitate means to improve the education sector more generally. Moreover, it provides a foundation for future research.
Journal of Medical Internet Research, Journal Year: 2023, Volume and Issue: 25, P. e51712 - e51712, Published: Sept. 29, 2023
Artificial intelligence chatbot research has focused on technical advances in natural language processing and on validating the effectiveness of human-machine conversations in specific settings. However, real-world chat data remain proprietary and unexplored despite their growing popularity, and new analyses of chatbot uses and their effects on mitigating users' negative moods are urgently needed. In this study, we investigated whether and how artificial intelligence chatbots facilitate the expression of user emotions, specifically sadness and depression. We also examined cultural differences in the expression of depressive moods among users in Western and Eastern countries. This study used SimSimi, a global open-domain social chatbot, to analyze 152,783 conversation utterances containing the terms "depress" and "sad" in 3 Western countries (Canada, the United Kingdom, and the United States) and 5 Eastern countries (Indonesia, India, Malaysia, the Philippines, and Thailand). Study 1 reports findings on how people talk about depression, based on Linguistic Inquiry and Word Count and n-gram analyses. In Study 2, we classified the utterances into predefined topics using semisupervised classification techniques to better understand the types of depression-related talk prevalent in chats. We then identified the distinguishing features of chat-based depressive discourse and the disparity between Western and Eastern users. Our analyses revealed intriguing cultural differences. Chatbot users from one cultural group indicated stronger emotions than those from the other (positive: P<.001; negative: P=.01), for example, using more words associated with sadness (P=.01). Users in the other group were more likely to share vulnerable topics such as mental health (P<.001) and had a greater tendency to discuss sensitive topics, including swear words (P<.001) and death (P<.001). In addition, when talking to chatbots, users expressed depression differently than on other platforms. Users were more open to expressing emotional vulnerability related to depressive or sad moods with the chatbot (74,045/148,590, 49.83%) than on social media (149/1978, 7.53%). They tended not to broach topics that require support from others, such as seeking advice on daily life difficulties, unlike on social media. Users acted in anticipation that conversational agents would exhibit active listening skills and foster a safe space where they can openly share their states of depression. The findings highlight the potential of chatbot-assisted mental health support, emphasizing the importance of continued policy-wise efforts to improve chatbot interactions for those in need of assistance. Our findings indicate the possibility of chatbots providing helpful information about depressive moods, especially to users who have difficulty communicating with humans.
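The first analysis described above relies on dictionary-based (Linguistic Inquiry and Word Count) and n-gram statistics over utterances that mention "depress" or "sad". As an illustration of that general kind of keyword-filtered n-gram analysis, and not the authors' actual pipeline (LIWC itself is proprietary software), a hypothetical bigram count over matching utterances might look like this:

```python
# Minimal sketch (not the authors' pipeline): count frequent bigrams in chat
# utterances that mention "depress" or "sad", computed per regional subset.
import re
from collections import Counter

def bigram_counts(utterances, keywords=("depress", "sad")):
    counts = Counter()
    for text in utterances:
        tokens = re.findall(r"[a-z']+", text.lower())
        if not any(k in tok for k in keywords for tok in tokens):
            continue  # keep only utterances that mention the keywords
        counts.update(zip(tokens, tokens[1:]))  # adjacent word pairs (bigrams)
    return counts

# Hypothetical usage with two regional subsets of chat logs:
western_logs = ["I feel so sad today", "talking to you helps when I'm depressed"]
eastern_logs = ["why am I sad", "I feel depressed about school"]
print(bigram_counts(western_logs).most_common(5))
print(bigram_counts(eastern_logs).most_common(5))
```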
Journal of Medical Internet Research, Journal Year: 2024, Volume and Issue: 26, P. e56930 - e56930, Published: April 12, 2024
Background
Chatbots, or conversational agents, have emerged as significant tools in health care, driven by advancements in artificial intelligence and digital technology. These programs are designed to simulate human conversations, addressing various health care needs. However, no comprehensive synthesis of chatbots' roles, users, benefits, and limitations is available to inform future research and application in the field.
Objective
This review aims to describe health care chatbots' characteristics, focusing on their diverse roles in the care pathway, user groups, benefits, and limitations.
Methods
A rapid review of the published literature from 2017 to 2023 was performed with a search strategy developed in collaboration with a health sciences librarian and implemented in the MEDLINE and Embase databases. Primary studies reporting chatbot roles or benefits in health care were included. Two reviewers dual-screened the search results. Extracted data were subjected to content analysis.
Results
The roles of chatbots were categorized into 2 themes: the delivery of remote health services, including patient support, care management, education, skills building, and health behavior promotion, and the provision of administrative assistance to health care providers. User groups spanned across patients with chronic conditions as well as cancer; individuals focused on lifestyle improvements; and specific demographic groups such as women, families, and older adults. Professionals and students in health care also emerged as users, alongside people seeking mental health support, behavioral change, and educational enhancement. The benefits of health chatbots were classified as improvements in the quality, efficiency, and cost-effectiveness of health care delivery. The limitations identified encompassed ethical challenges, medicolegal and safety concerns, technical difficulties, user experience issues, and societal and economic impacts.
Conclusions
Health care chatbots offer a wide spectrum of applications, potentially impacting various aspects of health care. While they are promising for improving care quality, their integration into the health care system must be approached with careful consideration to ensure optimal, safe, and equitable use.
BioMedInformatics, Journal Year: 2024, Volume and Issue: 4(1), P. 837 - 852, Published: March 14, 2024
This review explores the transformative integration of artificial intelligence (AI) and healthcare through conversational AI leveraging Natural Language Processing (NLP). Focusing on Large Language Models (LLMs), this paper navigates through various sections, commencing with an overview of AI's significance in healthcare and the role of conversational AI. It delves into fundamental NLP techniques, emphasizing their facilitation of seamless healthcare conversations. Examining the evolution of LLMs within NLP frameworks, it discusses key models used in healthcare, exploring their advantages and implementation challenges. Practical applications of conversational AI in healthcare, from patient-centric utilities like diagnosis and treatment suggestions to provider support systems, are detailed. Ethical and legal considerations, including patient privacy, ethical implications, and regulatory compliance, are addressed. The review concludes by spotlighting current challenges, envisaging future trends, and highlighting the potential of conversational AI in reshaping healthcare interactions.
Life, Journal Year: 2023, Volume and Issue: 13(5), P. 1130 - 1130, Published: May 5, 2023
The inclusion of chatbots is potentially disruptive in society, introducing opportunities, but also important implications that need to be addressed in different domains. The aim of this study is to examine chatbots in-depth, by mapping out their technological evolution, current usage, potential applications, and emerging problems within the health domain. The study examined chatbots from three points of view. The first point of view traces the technological evolution of chatbots. The second reports the fields of application of chatbots, giving space to the expectations of use and the expected benefits from a cross-domain point of view, also affecting the health domain. The third is the main analysis of the state of the art of chatbots in the health domain, based on the scientific literature represented by systematic reviews. The overview identified the topics of greatest interest together with their opportunities. The analysis revealed the need for initiatives that simultaneously evaluate multiple domains all together in a synergistic way. Concerted efforts to achieve this are recommended.
It is also believed important to monitor both the process of osmosis between the other sectors and the health domain, as well as the psychological and behavioural problems that chatbots can create, with an impact on the health domain.
Heliyon, Journal Year: 2024, Volume and Issue: 10(4), P. e25754 - e25754, Published: Feb. 1, 2024
The impact of the coronavirus disease 2019 (COVID-19) pandemic on the everyday livelihood of people has been monumental and unparalleled. Although it vastly affected the global healthcare system, it also provided a platform to promote and develop pioneering applications based on autonomic artificial intelligence (AI) technology with therapeutic significance in combating the pandemic. Artificial intelligence has successfully demonstrated that it can reduce the probability of human-to-human infectivity of the virus through the evaluation, analysis, and triangulation of existing data on the spread of the virus. This review talks about modern robotic and automated systems that may assist in limiting the spread of the virus. In addition, this study discusses intelligent wearable devices and how they could be helpful throughout the COVID-19 pandemic.
International Journal of Information Management, Journal Year: 2024, Volume and Issue: 80, P. 102835 - 102835, Published: Aug. 30, 2024
Imagine a world where chatbots are the first responders to crises, efficiently addressing concerns and providing crucial information. ChatGPT has demonstrated the capability of GenAI (Generative Artificial Intelligence)-powered chatbots when deployed to answer crisis-related questions in a timely and cost-efficient manner, thus replacing humans in crisis communication. However, public reactions to such chatbot messages remain unknown. To address this problem, the study recruited participants (N1 = 399, N2 = 189, N3 = 121) and conducted two online vignette experiments and a qualitative survey. The results suggest that, when organizations fail to handle requests, stakeholders exhibit higher satisfaction and lower responsibility attribution when chatbots provide instructing (vs. adjusting) information, as they are perceived to be more competent.
Chatbots that satisfy requests and provide adjusting information likewise lead to higher satisfaction due to perceived competence.
The second experiment, involving an emergency scenario, reveals that, regardless of the type of information provided (instructing or adjusting), stakeholders hold greater positive attitudes toward high-competence (vs. low-competence) chatbots. The qualitative survey further confirms the experimental findings and offers insights into how to improve chatbot-based crisis communication. These findings contribute to the literature by extending situational crisis communication theory to nonhuman touchpoints and by offering a deeper understanding of using chatbots in crisis communication through the lens of machine heuristics. The study also provides practical guidance for organizations to strategically integrate chatbots and human agents in crisis management based on context.