Biomedicines, Journal Year: 2024, Volume and Issue: 12(3), P. 492 - 492, Published: Feb. 22, 2024
Background: Type 1 diabetes (T1D) is a devastating autoimmune disease, and its rising prevalence in the United States and around the world presents a critical public health problem. While some treatment options exist for patients already diagnosed, individuals considered at risk of developing T1D, who are still in the early stages of disease pathogenesis and without symptoms, have no preventive intervention available. This is due to the uncertainty in determining their level of risk and in predicting with high confidence who will, or will not, progress to a clinical diagnosis. Biomarkers that assess one's risk with certainty could address this gap and inform decisions on intervention, especially in children, where the burden of justifying treatment is high. Single-omics approaches (e.g., genomics, proteomics, metabolomics, etc.) have been applied to identify biomarkers based on specific disturbances in association with the disease. However, reliable biomarkers have remained elusive to date. To overcome this, we previously showed that parallel multi-omics provides a more comprehensive picture of disease-associated disturbances and facilitates the identification of candidate biomarkers.
Methods: This paper evaluated the use of machine learning (ML), combining data augmentation and supervised ML methods, for the purpose of improving the detection of salient patterns and the ultimate extraction of novel biomarker candidates from multi-omics datasets integrated from a limited number of samples. We also examined different integration schemes (early, intermediate, late) to determine at which stage parametric models can learn under conditions of high dimensionality and variation in feature counts across omics. In the late integration scheme, we employed a multi-view ensemble comprising individual models trained over single omics to address the computational challenges posed by such small yet high-dimensional datasets.
Results: Data augmentation improves case vs. control prediction and finds the most success in flagging a larger, consistent set of disease-associated features when compared to chance models, which may eventually be used downstream in identifying a composite signature of T1D risk.
Conclusions: The current work demonstrates the utility of exploring ML in the ongoing quest for T1D biomarkers, reinforcing the hope of identifying risk signatures via multi-omics and ultimately informing preventive intervention in the face of the escalating global incidence of this debilitating disease.
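The late-integration scheme described in the Methods above can be illustrated with a minimal sketch: one classifier is trained per omics block, and their predicted probabilities are averaged into an ensemble score. This is not the authors' pipeline; the synthetic data and the choice of scikit-learn random forests as per-view learners are illustrative assumptions.

```python
# Minimal sketch of a late-integration ("multi-view") ensemble:
# one classifier per omics view, predictions averaged per sample.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, size=n)                        # case (1) vs. control (0)
views = {                                             # synthetic stand-ins for omics blocks
    "proteomics":   rng.normal(size=(n, 300)) + 0.3 * y[:, None],
    "metabolomics": rng.normal(size=(n, 120)) + 0.2 * y[:, None],
}

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3,
                                       stratify=y, random_state=0)

probas = []
for name, X in views.items():
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[idx_train], y[idx_train])               # one model per single-omics view
    probas.append(clf.predict_proba(X[idx_test])[:, 1])

ensemble_score = np.mean(probas, axis=0)              # late integration: average the views
print("ensemble AUROC:", roc_auc_score(y[idx_test], ensemble_score))
```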
Journal of Medicine Surgery and Public Health, Journal Year: 2024, Volume and Issue: 3, P. 100109 - 100109, Published: April 23, 2024
The use of artificial intelligence (AI) technologies in cardiology has witnessed rapid advancements across various domains, fostering innovation and reshaping clinical practices. This study aims to provide a comprehensive overview of these AI-driven advancements and their implications for enhancing cardiovascular healthcare. A systematic approach was adopted to conduct an extensive review of scholarly articles and peer-reviewed literature focusing on the application of AI in cardiology. Databases including PubMed/MEDLINE, ScienceDirect, IEEE Xplore, and Web of Science were systematically searched, and articles were screened following defined selection criteria. The synthesis of these articles highlighted AI's diverse applications in cardiology, including but not limited to diagnostic innovations, precision medicine, remote monitoring technologies, drug discovery, and clinical decision support systems. The review shows AI's significant role in cardiovascular medicine, revolutionising diagnostics, treatment strategies, and patient care. The applications showcased in this review reflect the transformative potential of AI technologies. However, challenges such as algorithm accuracy, interoperability, and integration into clinical workflows persist. Continued strategic adoption of AI promises to deliver more personalised, efficient, and effective cardiovascular care, ultimately improving patient outcomes and shaping the future of clinical practice.
npj Digital Medicine, Journal Year: 2024, Volume and Issue: 7(1), Published: Feb. 15, 2024
The utilization of artificial intelligence (AI) in diabetes care has focused on early intervention and treatment management. Notably, its usage has expanded to predicting an individual's risk of developing type 2 diabetes. A scoping review of 40 studies by Mohsen et al. shows that while most studies used unimodal AI models, multimodal approaches were superior because they integrate multiple types of data. However, creating multimodal models and determining model performance are challenging tasks given the multi-factored nature of the disease. For both model types, there are also concerns of bias, with a lack of external validations and underrepresentation of race, age, and gender in training data. Barriers around data quality and evaluation standardization are ripe areas for new technologies, especially for entrepreneurs and innovators. Collaboration amongst providers, entrepreneurs, and researchers must be prioritized to ensure that AI is providing equitable patient care.
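To make the unimodal-versus-multimodal contrast concrete, a minimal sketch (not drawn from the review itself) is shown below: the same classifier is evaluated on one data type alone and on the concatenation of two modalities (simple early fusion). The clinical and laboratory feature blocks are synthetic stand-ins.

```python
# Illustrative unimodal vs. multimodal (early-fusion) comparison on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 500
y = rng.integers(0, 2, size=n)
clinical = rng.normal(size=(n, 10)) + 0.2 * y[:, None]   # e.g., demographics, vitals
labs     = rng.normal(size=(n, 15)) + 0.2 * y[:, None]   # e.g., laboratory values

unimodal = cross_val_score(LogisticRegression(max_iter=1000), clinical, y,
                           cv=5, scoring="roc_auc").mean()
multimodal = cross_val_score(LogisticRegression(max_iter=1000),
                             np.hstack([clinical, labs]), y,     # early fusion
                             cv=5, scoring="roc_auc").mean()
print(f"unimodal AUROC={unimodal:.3f}, multimodal AUROC={multimodal:.3f}")
```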
Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1), Published: Feb. 24, 2024
Abstract
Accurate deep learning (DL) models to predict type 2 diabetes (T2D) are concerned not only with targeting the discrimination task but also with learning useful feature representations. However, existing DL tools are far from perfect and do not provide appropriate interpretation as a guideline to explain and promote superior performance in the target task. Therefore, we present an interpretable approach for our deep transfer learning (DTL) models to overcome such drawbacks, working as follows. We utilize several pre-trained models, including SEResNet152 and SEResNeXT101. Then, we transfer knowledge by keeping the weights in the convolutional base (i.e., the feature extraction part) unchanged while modifying the classification part, using the Adam optimizer, to classify healthy controls versus T2D based on single-cell gene regulatory network (SCGRN) images. Another DTL model works in a similar manner, but keeps just the bottom layers unaltered while updating the consecutive layers through training from scratch. Experimental results on the whole set of 224 SCGRN images using five-fold cross-validation show that our model (TFeSEResNeXT101) achieves the highest average balanced accuracy (BAC) of 0.97, thereby significantly outperforming the baseline, which resulted in a BAC of 0.86. Moreover, a simulation study demonstrated that this superiority is attributed to the distributional conformance of the weight parameters obtained when the images are coupled with the pre-trained model.
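A minimal sketch of the frozen-base transfer-learning setup described above, written in Keras. ResNet50 is used here only as a stand-in for SEResNet152/SEResNeXT101, which are not bundled with Keras; the input size, head architecture, and learning rate are illustrative assumptions rather than the paper's exact configuration.

```python
# Frozen convolutional base + new classification head, trained with Adam.
import tensorflow as tf

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                       # keep the feature-extraction part unchanged

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # healthy control vs. T2D
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.summary()
# model.fit(train_images, train_labels, validation_data=..., epochs=...)  # on SCGRN images
```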
PLoS ONE, Journal Year: 2025, Volume and Issue: 20(1), P. e0310218 - e0310218, Published: Jan. 24, 2025
Diabetes, a chronic condition affecting millions worldwide, necessitates early intervention to prevent severe complications. While accurately predicting diabetes onset or progression remains challenging due to complex and imbalanced datasets, recent advancements in machine learning offer potential solutions. Traditional prediction models, often limited by default parameters, have been superseded by more sophisticated approaches. Leveraging Bayesian optimization to fine-tune XGBoost, researchers can harness the power of data analysis to improve predictive accuracy. By identifying key factors influencing diabetes risk, personalized prevention strategies can be developed, ultimately enhancing patient outcomes. Successful implementation requires meticulous data management, stringent ethical considerations, and seamless integration into healthcare systems. This study focused on optimizing the hyperparameters of an XGBoost ensemble model using Bayesian optimization. Compared to grid search (accuracy: 97.24%, F1-score: 95.72%, MCC: 81.02%), the model tuned with Bayesian optimization achieved slightly improved performance (accuracy: 97.26%, MCC: 81.18%). Although the improvements observed in this study are modest, the optimized model represents a promising step towards revolutionizing diabetes prevention and treatment. This approach holds significant promise for improving outcomes for individuals at risk of developing diabetes.
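A minimal sketch of Bayesian hyperparameter optimization for an XGBoost classifier, here using Optuna's default TPE sampler. The paper does not prescribe a specific library, and the search space, synthetic dataset, and F1 objective below are illustrative assumptions.

```python
# Bayesian hyperparameter search for XGBoost via Optuna (TPE sampler by default).
import optuna
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Imbalanced synthetic stand-in for a tabular diabetes dataset.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)

def objective(trial):
    params = {
        "n_estimators":     trial.suggest_int("n_estimators", 100, 600),
        "max_depth":        trial.suggest_int("max_depth", 3, 10),
        "learning_rate":    trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "subsample":        trial.suggest_float("subsample", 0.5, 1.0),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
    }
    clf = xgb.XGBClassifier(**params, eval_metric="logloss")
    return cross_val_score(clf, X, y, cv=5, scoring="f1").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print("best F1:", study.best_value, "best params:", study.best_params)
```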
Cardiovascular Diabetology, Journal Year: 2025, Volume and Issue: 24(1), Published: Feb. 18, 2025
Abstract
Background
Digitalization and big health system data open new avenues for targeted prevention and treatment strategies. We aimed to develop and validate prediction models for stroke and myocardial infarction (MI) in patients with type 2 diabetes based on routinely collected high-dimensional health insurance claims data, and compared the predictive performance of traditional regression methods and state-of-the-art machine learning approaches, including deep learning methods.
Methods
We used German health insurance claims data from 2014 to 2019 and 287 potentially relevant, literature-derived variables to predict the 3-year risk of MI and stroke. Following a train-test split approach, we compared logistic regression methods with and without forward selection and LASSO-regularization, random forests (RF), gradient boosting (GB), multi-layer perceptrons (MLP), and feature-tokenizer transformers (FTT). We assessed discrimination (Areas Under the Precision-Recall and Receiver-Operator Curves, AUPRC and AUROC) and calibration.
Results
Among n = 371,006 patients (mean age: 67.2 years), 3.5% (n = 13,030) had MIs and 3.4% (n = 12,701) had strokes. AUPRCs were 0.035 (MI) and 0.034 (stroke) for the null model, and ranged between 0.082 and 0.092 (GB) for MI and between 0.061 and 0.073 for stroke. AUROCs were 0.5 for the null models, 0.70 (RF, MLP, FTT) to 0.71 (all other models) for MI, and 0.66 to 0.69 for stroke. All models were well calibrated.
Conclusions
Discrimination of the claims-based models reached a ceiling at around 0.09 AUPRC and 0.7 AUROC. While the AUROC in this study was comparable to that of existing epidemiological models incorporating clinical information, comparison on other, more informative metrics, such as AUPRC, sensitivity, and Positive Predictive Value, is hampered by a lack of reporting in the literature. The fact that machine learning did not outperform regression approaches may suggest that the feature richness and complexity of the data were already exploited before the choice of algorithm could become critical to maximizing performance. Future research might focus on the impact of different data derivation strategies on such performance ceilings. In the absence of powerful screening alternatives, applying transparent regression-based models to routine claims data, though certainly imperfect, remains a promising, scalable, and low-cost approach to population-based cardiovascular risk stratification.
Graphical abstract
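The style of comparison reported above can be sketched minimally as follows: an L1-regularized logistic regression and a gradient-boosting classifier are fit on an imbalanced dataset with roughly 3.5% positives and compared on AUPRC and AUROC, with the null-model AUPRC equal to the outcome prevalence. The synthetic features are stand-ins for the claims-derived variables; nothing here reproduces the study's data or tuning.

```python
# Regression vs. gradient boosting on an imbalanced outcome, scored by AUPRC and AUROC.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# ~3.5% positives, mimicking the MI prevalence; 50 synthetic features as stand-ins.
X, y = make_classification(n_samples=20000, n_features=50, n_informative=15,
                           weights=[0.965, 0.035], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

models = {
    "L1 logistic regression": LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
    "gradient boosting":      GradientBoostingClassifier(random_state=0),
}

print(f"null-model AUPRC (= prevalence): {y_te.mean():.3f}")
for name, model in models.items():
    model.fit(X_tr, y_tr)
    p = model.predict_proba(X_te)[:, 1]
    print(f"{name}: AUPRC={average_precision_score(y_te, p):.3f}, "
          f"AUROC={roc_auc_score(y_te, p):.3f}")
```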
Journal of Medical Internet Research, Journal Year: 2025, Volume and Issue: 27, P. e56774 - e56774, Published: Feb. 25, 2025
Background
The surge in artificial intelligence (AI) interventions in primary care trials lacks a study on reporting quality.
Objective
This study aimed to systematically evaluate the reporting quality of both published randomized controlled trials (RCTs) and protocols for RCTs that investigated AI interventions in primary care.
Methods
The PubMed, Embase, Cochrane Library, MEDLINE, Web of Science, and CINAHL databases were searched until November 2024. Eligible studies were published RCTs or full protocols for RCTs exploring AI interventions in primary care. Reporting quality was assessed using the CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence) and SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence) checklists, focusing on AI intervention–related items.
Results
A total of 11,711 records were identified. In total, 19 published RCTs and 21 RCT protocols (35 studies overall) were included. The overall proportion of adequately reported items was 65% (172/266; 95% CI 59%-70%) for RCTs and 68% (214/315; 95% CI 62%-73%) for protocols, respectively. The percentage of adequate reporting for a specific item ranged from 11% (2/19) to 100% (19/19) for RCTs and from 10% (2/21) to 100% (21/21) for protocols, and both exhibited similar characteristics and trends. They lacked transparency and completeness, which can be summarized in three aspects: not providing adequate information regarding the input data, not mentioning the methods for identifying and analyzing performance errors, and not stating whether and how the AI intervention and its code can be accessed.
Conclusions
Reporting quality could be improved in both published RCTs and protocols. This work helps promote the transparent and complete reporting of trials with AI interventions in primary care.
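As a small illustration of how the reported overall proportions and 95% CIs are obtained, the sketch below computes binomial confidence intervals for the two aggregate figures. The interval method used in the paper is not stated, so the Wilson interval here is an assumption.

```python
# Proportion of adequately reported items with 95% binomial (Wilson) confidence intervals.
from statsmodels.stats.proportion import proportion_confint

for label, adequate, total in [("RCTs", 172, 266), ("protocols", 214, 315)]:
    low, high = proportion_confint(adequate, total, alpha=0.05, method="wilson")
    print(f"{label}: {adequate}/{total} = {adequate / total:.0%} "
          f"(95% CI {low:.0%}-{high:.0%})")
```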