Artificial Intelligence, Healthcare, Ethics, Responsible AI, Diagnostic Treatment Planning, Patient Care, Governance Frameworks, Machine Learning, Data Privacy, Safety, Predictive Analysis, Decision Support Systems, Future of AI in Healthcare.
MethodsX, Journal Year: 2025, Volume and Issue: 14, P. 103338 - 103338
Published: April 25, 2025
Classification and segmentation play a pivotal role in transforming decision-making processes in healthcare, IoT, and edge computing. However, existing methodologies often struggle with accuracy, precision, and specificity when applied to large, heterogeneous datasets, particularly in minimizing false positives and negatives. To address these challenges, we propose a robust hybrid framework comprising three key phases: feature extraction using a Hybrid Deep Belief Network (HDBN), dynamic prediction aggregation via a Custom Adaptive Ensemble (CAEN), and an optimization mechanism ensuring adaptability and robustness. Extensive evaluations on four diverse datasets demonstrate the framework's superior performance, achieving 93% accuracy, 87% precision, 95% specificity, and 91% recall. Advanced metrics, including a Matthews Correlation Coefficient of 0.8932, validate its reliability. The proposed framework establishes a new benchmark for scalable, high-performance classification and segmentation, offering solutions for real-world applications and paving the way for future integration of explainable AI in real-time systems.
• Designed a novel framework integrating HDBN and CAEN for adaptive prediction.
• Proposed strategies for enhancing robustness across diverse data scenarios.
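As a side note on the advanced metric cited above, the Matthews Correlation Coefficient can be computed directly from the four binary confusion-matrix counts. The counts in this sketch are hypothetical and not taken from the paper:

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient from binary confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical counts for a well-balanced classifier.
print(round(matthews_corrcoef(tp=90, tn=90, fp=10, fn=10), 4))  # → 0.8
```

Unlike plain accuracy, MCC only approaches 1.0 when all four cells of the confusion matrix are good, which is why it is often reported alongside precision and recall on imbalanced data.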
IGI Global eBooks, Journal Year: 2025, Volume and Issue: unknown, P. 59 - 80
Published: April 30, 2025
Coronary Heart Disease (CHD) continues to affect close to 145 million males and 110 million females across the globe, taking nine million lives each year. A new evolving framework incorporating IoT-edge computing, Explainable AI, and blockchain technology is in the process of development in order to create a secure, privacy-preserving, interpretable, and automated machine learning environment to predict chronic diseases. The IoT-based system utilizes medical sensors and assistive devices for real-time monitoring of the patient. Patient information is transmitted securely using a blockchain architecture so that various health practitioners have access to it with a guarantee of data integrity, confidentiality, and access control. The use of Explainable AI (XAI) enables predictions to be interpreted by clinicians and fosters confidence and transparency. Physicians are able to make evidence-based decisions beyond traditional methods since XAI provides reasons for its predictions.
Artificial intelligence models encounter significant challenges due to their black-box nature, particularly in safety-critical domains such as healthcare, finance, autonomous vehicles, and justice. Explainable Artificial Intelligence (XAI) addresses these challenges by providing explanations for how models make decisions and predictions, ensuring transparency, accountability, and fairness. Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques. However, there remains a gap in the literature: there are no comprehensive reviews that delve into detailed mathematical representations, design methodologies of XAI models, and other associated aspects. This paper provides a review encompassing common terminologies and definitions, the need for XAI and its beneficiaries, a taxonomy of XAI methods, and the application of XAI methods in different areas. The survey is aimed at researchers, practitioners, and AI model developers who are interested in enhancing the trustworthiness and fairness of AI models.
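As one concrete illustration of the model-agnostic methods such a survey covers, permutation importance estimates how much a model relies on a feature by measuring the accuracy drop when that feature's column is shuffled. The toy model and data below are hypothetical stand-ins, not from the paper:

```python
import random

def permutation_importance(model, X, y, feature_idx, metric, n_repeats=10, seed=0):
    """Mean drop in a metric when one feature column is shuffled:
    a model-agnostic estimate of that feature's importance."""
    rng = random.Random(seed)
    baseline = metric(model(X), y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, col)]
        drops.append(baseline - metric(model(X_perm), y))
    return sum(drops) / n_repeats

def accuracy(preds, y):
    return sum(p == t for p, t in zip(preds, y)) / len(y)

# Toy "black box" that only looks at feature 0; feature 1 is ignored,
# so its permutation importance comes out as exactly zero.
model = lambda X: [1 if row[0] > 0.5 else 0 for row in X]
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
```

The technique treats the model as an opaque function, which is exactly what makes it applicable to any of the black-box model families the survey discusses.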
Advances in medical technologies and clinical practice book series, Journal Year: 2024, Volume and Issue: unknown, P. 147 - 159
Published: June 28, 2024
Artificial intelligence (AI) and machine learning (ML) have received significant attention in Alzheimer's research due to their capability to enhance prognosis and treatment. But a comprehensive understanding of these technologies and their applications remains lacking. This review aims to cover the essentials of AI and ML in Alzheimer's studies, highlighting their potential impact on disease progression and management. The results outline the current state of their use in research and the challenges of implementation, providing a foundation for additional improvements in this subject. The field has been greatly impacted by the fast improvement of artificial intelligence and machine learning techniques. With the growing quantity of records being generated in the discipline and the need for more accurate predictions and remedies, AI and ML have come to be crucial tools for unraveling the complexities of the disease.
BioMedInformatics, Journal Year: 2024, Volume and Issue: 4(4), P. 2338 - 2373
Published: Dec. 13, 2024
Background: Breast cancer is one of the leading causes of death in women, making early detection through mammography crucial for improving survival rates. However, human interpretation of mammograms is often prone to diagnostic errors. This study addresses the challenge of improving diagnostic accuracy in breast cancer detection by leveraging advanced machine learning techniques. Methods: We propose an extended ensemble deep learning model that integrates three state-of-the-art convolutional neural network (CNN) architectures: VGG16, DenseNet121, and InceptionV3. The model utilizes multi-scale feature extraction to enhance the detection of both benign and malignant masses in mammograms. The approach was evaluated on two benchmark datasets: INbreast and CBIS-DDSM. Results: The proposed model achieved significant performance improvements. On the INbreast dataset, it attained an accuracy of 90.1%, recall of 88.3%, and an F1-score of 89.1%. For CBIS-DDSM, it reached 89.5% accuracy and 90.2% specificity. The method outperformed each individual CNN model, reducing false positives and negatives and thereby providing more reliable results. Conclusions: The model demonstrated strong potential as a decision support tool for radiologists, offering accurate and earlier detection of breast cancer. By combining the complementary strengths of multiple architectures, this approach can improve clinical accessibility to high-quality screening.
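The abstract does not specify how the three backbones' outputs are fused; one common scheme for combining probability outputs from several CNNs is soft voting, sketched below with hypothetical benign/malignant probabilities standing in for the real model outputs:

```python
def soft_vote(prob_lists, weights=None):
    """Weighted average of per-class probabilities across models (soft voting),
    a standard way to fuse predictions from several CNN backbones."""
    n_models = len(prob_lists)
    weights = weights or [1.0 / n_models] * n_models
    n_classes = len(prob_lists[0])
    return [sum(w * p[c] for w, p in zip(weights, prob_lists))
            for c in range(n_classes)]

# Hypothetical [benign, malignant] probabilities from three backbones.
vgg, dense, incep = [0.30, 0.70], [0.20, 0.80], [0.40, 0.60]
fused = soft_vote([vgg, dense, incep])  # ≈ [0.3, 0.7]
```

Averaging probabilities rather than hard labels lets a confident model outvote two uncertain ones, which is one reason ensembles of this kind can reduce both false positives and false negatives.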
Advances in systems analysis, software engineering, and high performance computing book series, Journal Year: 2024, Volume and Issue: unknown, P. 145 - 156
Published: April 29, 2024
In this chapter, the authors embark on a journey to unveil the complexities of machine learning by focusing on the crucial aspect of interpretability. As algorithms become increasingly sophisticated and pervasive across industries, understanding how these models make decisions is essential for trust, accountability, and ethical considerations. They delve into various techniques and methodologies aimed at unraveling the black box of machine learning, shedding light on how models arrive at their predictions and classifications. From explainable AI approaches to model-agnostic techniques, they explore practical strategies for interpreting and explaining models. Through real-world examples and case studies, they illustrate the importance of interpretability in ensuring transparency, fairness, and compliance in decision-making processes. Whether you're a data scientist, researcher, or business leader, this chapter serves as a guide to navigating the complex landscape of machine learning and unlocking the true potential of these technologies.
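One simple interpretability technique of the model-agnostic kind discussed above is a global surrogate: fitting a small, human-readable model to mimic a black box's predictions. The one-feature threshold rule and toy black box below are a minimal hypothetical sketch, not code from the chapter:

```python
def fit_stump_surrogate(black_box, X):
    """Find the one-feature threshold rule that best mimics a black-box
    model's predictions: a tiny 'global surrogate' explanation."""
    preds = black_box(X)
    best = None  # (agreement, feature index, threshold)
    for f in range(len(X[0])):
        for row in X:
            t = row[f]
            rule = [1 if r[f] >= t else 0 for r in X]
            agree = sum(a == b for a, b in zip(rule, preds)) / len(X)
            if best is None or agree > best[0]:
                best = (agree, f, t)
    agree, f, t = best
    return f, t, agree

# Toy black box dominated by feature 0; the surrogate recovers a readable
# rule of the form "predict 1 when feature f >= t".
black_box = lambda X: [1 if 2 * r[0] + 0.1 * r[1] > 1.0 else 0 for r in X]
X = [[0.2, 0.5], [0.8, 0.1], [0.6, 0.9], [0.3, 0.2]]
f, t, agree = fit_stump_surrogate(black_box, X)
```

The surrogate's agreement score makes its fidelity explicit: a rule that matches the black box on every sample is a trustworthy summary, while low agreement warns that the simple explanation is hiding real model behavior.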