Kosin Medical Journal,
Journal year: 2024, Issue 39(4), pp. 229-237. Published: Dec. 6, 2024
The integration of artificial intelligence (AI) technologies into medical research introduces significant ethical challenges that necessitate the strengthening of ethical frameworks. This review highlights issues of privacy, bias, accountability, informed consent, and regulatory compliance as central concerns. AI systems, particularly in research, may compromise patient data privacy, perpetuate biases if they are trained on nondiverse datasets, and obscure accountability owing to their "black box" nature. Furthermore, the complexity of AI's role may affect patients' informed consent, as they may not fully grasp the extent of AI involvement in their care. Compliance with regulations such as the Health Insurance Portability and Accountability Act and the General Data Protection Regulation is essential, as is the need to address liability in cases of errors. The review advocates a balanced approach that preserves clinician autonomy in clinical decisions, rigorous validation and ongoing monitoring, and robust governance. Engaging diverse stakeholders is crucial for aligning development with ethical norms and addressing practical needs. Ultimately, proactive management of AI's ethical implications is vital to ensure that its use in healthcare improves outcomes without compromising integrity.
International Journal of Scientific Research in Science Engineering and Technology,
Journal year: 2024, Issue 11(5), pp. 176-179. Published: Oct. 8, 2024
This research paper presents an end-to-end implementation of a chatbot system tailored for the retail industry, utilizing a large language model (LLM). The chatbot is designed to assist employees in retail stores, such as clothing outlets, by providing real-time access to critical business data, including inventory levels, sales metrics, and profit margins. The solution aims to streamline decision-making processes, enhance operational efficiency, and improve information accessibility while reducing dependency on manual data retrieval. The approach leverages advanced natural language processing to simplify the interface between data systems and employees, ensuring accurate and timely responses to queries.
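The abstract does not describe the implementation in detail; the sketch below shows one plausible pattern for such a system, in which current store data is retrieved and placed into the LLM prompt before the query is answered. The `call_llm` function and the toy data layout are assumptions for illustration, not the authors' implementation.

```python
from typing import Callable

# Toy in-memory stand-in for the store's inventory/sales backend (assumption).
STORE_DATA = {
    "blue denim jacket": {"stock": 14, "units_sold_today": 3, "margin_pct": 42},
    "linen shirt": {"stock": 0, "units_sold_today": 7, "margin_pct": 35},
}

def build_prompt(question: str) -> str:
    """Serialize the relevant business data into the prompt (simple retrieval-augmented setup)."""
    context_lines = [
        f"- {item}: stock={d['stock']}, sold_today={d['units_sold_today']}, margin={d['margin_pct']}%"
        for item, d in STORE_DATA.items()
    ]
    return (
        "You are a retail assistant. Answer using only the data below.\n"
        + "\n".join(context_lines)
        + f"\n\nEmployee question: {question}\nAnswer:"
    )

def answer(question: str, call_llm: Callable[[str], str]) -> str:
    """Route the grounded prompt to whatever LLM client the store uses."""
    return call_llm(build_prompt(question))

# Example: answer("How many linen shirts are left?", call_llm=my_llm_client)
```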
BACKGROUND
AI-powered chatbots, using large language models, may effectively answer questions from patients with hypertension, providing responses that are accurate, empathetic, and easy to read.
OBJECTIVE
This study evaluates the performance of three such chatbots in delivering quality responses.
METHODS
One hundred questions were randomly selected from the Reddit forum r/hypertension and submitted to three publicly available chatbots (ChatGPT-3.5, Microsoft Copilot, and Gemini), anonymized as Chatbots A, B, and C. Two independent medical professionals assessed the accuracy and empathy of their responses using Likert scales. Additionally, the 300 responses were analyzed with the WebFX readability tool to measure various readability indices.
RESULTS
In total, 300 responses were evaluated. Chatbot A generated the most extensive responses, with an average of 13 sentences per reply, while Chatbot B had the shortest replies. Chatbot C achieved the highest score on the Flesch Reading Ease Scale, indicating better readability, while another chatbot scored the lowest. Other metrics, including the Flesch-Kincaid Grade Level, the Gunning Fog Score, and others, also showed significant differences among the chatbots, reflecting variability in readability.
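For reference, the readability indices cited above follow standard published formulas; the sketch below shows how they can be computed, using a heuristic syllable counter (an assumption; the exact WebFX implementation is not described in the abstract).

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of vowels (adequate for a sketch)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)

    wps = n_words / sentences   # words per sentence
    spw = syllables / n_words   # syllables per word
    return {
        # Flesch Reading Ease: higher scores mean easier text
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        # Flesch-Kincaid Grade Level: approximate US school grade
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
        # Gunning Fog index: estimated years of education needed
        "gunning_fog": 0.4 * (wps + 100 * complex_words / n_words),
    }

# Example: readability("Take your medication daily. Monitor your blood pressure.")
```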
CONCLUSIONS
The study indicates that all three chatbots can produce professional responses, although their quality varies significantly. These findings underscore the potential of AI in patient education. However, they highlight the urgent need for further optimization to enhance the comprehensibility of chatbot outputs.
Journal of Engineering Research and Reports,
Journal year: 2024, Issue 26(12), pp. 24-46. Published: Nov. 27, 2024
This study investigates the efficacy of synthetic data in mitigating bias in artificial intelligence (AI) model training, focusing on demographic inclusivity and fairness.
Using Generative Adversarial Networks (GANs), synthetic datasets were generated from the UCI Adult Dataset, the COMPAS Recidivism dataset, and the MIMIC-III Clinical Database.
Logistic regression models were trained on both the original and synthetic datasets to evaluate fairness metrics and predictive accuracy.
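A minimal sketch of that evaluation step, training a logistic regression and reporting held-out AUC-ROC with scikit-learn, is shown below; the feature preprocessing and split are assumptions, not the authors' exact protocol.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def fit_and_score(X, y):
    """Train a logistic regression and report held-out AUC-ROC.
    Assumes X is already numeric/encoded; preprocessing is omitted here."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]
    return model, roc_auc_score(y_te, scores)

# Run once on the original data and once on the GAN-generated data,
# then compare AUC-ROC and the fairness metrics of the two models.
```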
Fairness was assessed through demographic parity and equality of opportunity, which measure balanced prediction rates and equitable outcomes across demographic groups.
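Both metrics have simple formulations in terms of predictions and protected-group membership; the sketch below uses a ratio form (values near 1.0 indicate parity), which is an assumption since the abstract does not state the exact definition used.

```python
import numpy as np

def demographic_parity_ratio(y_pred, group):
    """Ratio of positive-prediction rates between two groups (1.0 = parity).
    y_pred: binary predictions; group: binary protected-attribute indicator."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def equality_of_opportunity_ratio(y_true, y_pred, group):
    """Ratio of true-positive rates between two groups (1.0 = equal opportunity)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = []
    for g in (0, 1):
        pos = (group == g) & (y_true == 1)  # actual positives in this group
        tpr.append(y_pred[pos].mean())
    return min(tpr) / max(tpr)
```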
Fidelity and diversity were evaluated using statistical tests such as the Kolmogorov-Smirnov (KS) test and Kullback-Leibler (KL) divergence, along with the Inception Score, which quantifies the diversity of the generated data.
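These fidelity checks can be run with standard statistical tooling; the sketch below compares one real and one synthetic feature column using SciPy, as an illustration rather than the study's exact procedure.

```python
import numpy as np
from scipy import stats

def fidelity_report(real_col, synth_col, bins=20):
    """Compare a real and a synthetic feature column.
    A KS statistic near 0 and a small KL divergence suggest similar distributions."""
    real_col, synth_col = np.asarray(real_col), np.asarray(synth_col)

    # Two-sample Kolmogorov-Smirnov test on the raw samples.
    ks_stat, ks_pvalue = stats.ks_2samp(real_col, synth_col)

    # KL divergence between histogram estimates of the two distributions.
    edges = np.histogram_bin_edges(np.concatenate([real_col, synth_col]), bins=bins)
    p, _ = np.histogram(real_col, bins=edges, density=True)
    q, _ = np.histogram(synth_col, bins=edges, density=True)
    eps = 1e-12  # avoid division by zero in empty bins
    kl = stats.entropy(p + eps, q + eps)

    return {"ks_stat": ks_stat, "ks_pvalue": ks_pvalue, "kl_divergence": kl}
```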
The results revealed significant improvements in fairness for the synthetic datasets. For one of the datasets, demographic parity increased from 0.72 to 0.89 and equality of opportunity rose from 0.65 to 0.83, without compromising predictive accuracy (0.82 AUC-ROC compared with 0.83 for the original data).
Based on these findings, this research recommends employing GANs to generate synthetic data in bias-sensitive domains to enhance fairness and ensure more equitable AI models. Furthermore, integrating human-in-the-loop (HITL) systems is critical to monitor and address residual biases during data generation. Standardized validation frameworks, including fidelity and fairness tests, should be adopted to ensure transparency and consistency across applications. These practices can enable organizations to leverage synthetic data effectively while maintaining ethical standards in AI development and deployment.