TUN-GCA: A Novel Approach for Organ Segmentation in Nasopharyngeal Carcinoma CT Images
W.Q. Che, Ziyuan Ye, Rongting Huang, et al.
Communications in Computer and Information Science, Journal Year: 2025, Volume and Issue: unknown, P. 368–381
Published: Jan. 1, 2025
Language: English
The impact of different task contexts on emergency responders’ trust and usage intention of artificial intelligence
Ergonomics, Journal Year: 2025, Volume and Issue: unknown, P. 1–15
Published: May 15, 2025
Proper use of artificial intelligence (AI) can significantly enhance emergency responders' performance. However, they do not always trust or appropriately use AI. This study examined trust in AI and usage intention under different rescue pressures and uncertainty, from the perspective of perceived capability. The study was conducted in two phases: first, questionnaire data were collected from 99 firefighters; second, semi-structured interviews were conducted with 12 participants. Results revealed that rescue pressure affected perceived capability, whereas uncertainty influenced perceived self-capability; perceived capability subsequently impacted trust and, ultimately, usage intention. These findings explain the process through which task context impacts responders' willingness to use AI, explore the underlying psychological mechanisms, and provide valuable recommendations for designers to develop AI systems suitable for emergency responders.
Language: English
Large language models (LLMs) as research subjects: Status, opportunities and challenges
Chenguang Zhao, Meirewuti Habule, Wei Zhang, et al.
New Ideas in Psychology, Journal Year: 2025, Volume and Issue: 79, P. 101167
Published: May 24, 2025
Language: English
Bias Mitigation in Primary Healthcare Artificial Intelligence Models: A Scoping Review (Preprint)
Journal of Medical Internet Research, Journal Year: 2024, Volume and Issue: 27, P. e60269
Published: Nov. 7, 2024
Background
Artificial intelligence (AI) predictive models in primary health care have the potential to enhance population health by rapidly and accurately identifying individuals who should receive services. However, these models also carry the risk of perpetuating or amplifying existing biases toward diverse groups. We identified a gap in the current understanding of the strategies used to assess and mitigate bias in algorithms related to individuals' personal and protected attributes.
Objective
This study aimed to describe the attempts, strategies, and methods used to mitigate bias in AI models within primary health care, to identify the groups and attributes considered, and to evaluate the results of these approaches on both bias reduction and model performance.
Methods
We conducted a scoping review following Joanna Briggs Institute (JBI) guidelines, searching the Medline (Ovid), CINAHL (EBSCO), PsycINFO, and Web of Science databases for studies published between January 1, 2017, and November 15, 2022. Pairs of reviewers independently screened titles and abstracts, applied the selection criteria, and performed full-text screening. Discrepancies regarding inclusion were resolved by consensus. Following reporting standards, we extracted data on study objectives, model features, targeted groups, mitigation strategies used, and results. Using a mixed methods appraisal tool, we appraised the quality of the included studies.
Results
After removing 585 duplicates, we screened 1018 titles and abstracts. From the remaining 189 articles, we included 17 studies. The most frequently investigated attribute was race (or ethnicity), examined in 12 studies, followed by sex (often reported as gender), typically classified as "male versus female," in 10 studies. Mitigation strategies were categorized into four clusters: (1) modifying datasets, (2) sourcing data from electronic health records, (3) developing tools with a "human-in-the-loop" approach, and (4) applying ethical principles to informed decision-making. Algorithmic preprocessing methods, such as relabeling and reweighing data, along with natural language processing techniques that extract data from unstructured notes, showed the greatest potential for bias mitigation. Other approaches aimed at enhancing fairness, such as group recalibration and application of the equalized odds metric, sometimes exacerbated prediction errors across groups or led to overall miscalibrations.
Conclusions
The findings suggest that biases are more easily mitigated when data are open-sourced, multiple stakeholders are engaged, and mitigation is applied during the algorithm's preprocessing stage. Further empirical studies that include a broader range of groups, such as Indigenous peoples in Canada, are needed to validate and expand upon these findings.
Trial Registration
OSF Registry osf.io/9ngz5/; https://osf.io/9ngz5/
International Registered Report Identifier (IRRID)
RR2-10.2196/46684
Language: English
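To make the mitigation approaches named in the abstract above more concrete, here is a minimal Python sketch (not taken from any of the 17 reviewed studies; the toy data and variable names are illustrative assumptions) of two of the ideas it highlights: Kamiran-Calders-style reweighing of training examples and a simple equalized-odds gap check across groups.

```python
# Illustrative sketch only: reweighing + equalized-odds gap, under assumed toy data.
import numpy as np

def reweighing_weights(group, label):
    """Weight each sample by P(group)*P(label) / P(group, label),
    so every (group, label) cell contributes as if group and label were independent."""
    group, label = np.asarray(group), np.asarray(label)
    w = np.empty(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            mask = (group == g) & (label == y)
            if mask.any():
                expected = (group == g).mean() * (label == y).mean()
                w[mask] = expected / mask.mean()
    return w

def equalized_odds_gaps(y_true, y_pred, group):
    """Return the largest TPR gap and FPR gap across groups (0 means equalized odds holds)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs, fprs = [], []
    for g in np.unique(group):
        m = group == g
        tprs.append(y_pred[m & (y_true == 1)].mean())
        fprs.append(y_pred[m & (y_true == 0)].mean())
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Toy usage with a deliberately biased base rate between two groups.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)                              # e.g., a binary sex attribute
label = (rng.random(1000) < 0.3 + 0.2 * group).astype(int)    # biased outcome prevalence
weights = reweighing_weights(group, label)                    # could be passed as sample_weight
pred = (rng.random(1000) < 0.4).astype(int)                   # stand-in for model predictions
print(equalized_odds_gaps(label, pred, group))
```

In practice the computed weights would be passed as sample weights to a downstream classifier, and the two gap values shrink toward zero as the equalized odds criterion is satisfied.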
Bias Mitigation in Primary Healthcare Artificial Intelligence Models: Scoping Review (Preprint)
Published: May 6, 2024
BACKGROUND
Artificial intelligence (AI) predictive models in primary healthcare can potentially lead to benefits for population health. Algorithms can identify more rapidly and accurately who should receive care and health services, but they could also perpetuate or exacerbate existing biases toward diverse groups. We noticed a gap in actual knowledge about which strategies are deployed to assess and mitigate bias toward these groups, based on their personal and protected attributes, in such algorithms.
OBJECTIVE
To describe the attempts, strategies, and methods used to mitigate bias in artificial intelligence models in primary healthcare, to identify which groups and attributes have been considered, and to evaluate the results of these methods in terms of bias attenuation and AI performance.
METHODS
We conducted a scoping review informed by Joanna Briggs Institute (JBI) recommendations. An experienced librarian developed the search strategy for four databases (Medline (OVID), CINAHL (EBSCO), PsycInfo, and Web of Science), covering sources published between 2017-01-01 and 2022-11-15. We imported the data into Covidence, and pairs of reviewers independently screened titles and abstracts, applied the selection criteria, and performed full-text screening. Any discrepancies regarding the inclusion of studies were resolved through consensus. Based on reporting standards for primary care, we performed data extraction covering study objectives, models' main features, groups concerned, mitigation strategies deployed, and results. Using the Mixed-Methods Appraisal Tool (MMAT), we appraised the quality of the included studies.
RESULTS
After removing 585 duplicates, we screened 1018 titles and abstracts. From the 189 articles remaining after exclusion, we excluded 172 full texts and included 17 studies. The most investigated attributes were Race (or Ethnicity) (12/17) and Sex (mostly identified as Gender in the studies), typically using a binary "male vs female" classification (10/17). We grouped the mitigation attempts into the following categories: 1) modifying datasets, 2) sourcing data such as Electronic Health Records, 3) developing tools with a "human-in-the-loop" approach, and 4) identifying ethical principles for informed decision-making. Mathematical and algorithmic preprocessing methods, such as changing the labeling and reweighing, along with natural language processing methods extracting data from unstructured notes, showed the greatest potential. Other attempts to enhance model fairness included group recalibration and application of the equalized odds metric, which either exacerbated prediction errors or resulted in overall miscalibrations.
CONCLUSIONS
Results suggest that biases can be more easily mitigated when data are open-sourced, multiple stakeholders are involved, and mitigation is applied during the algorithm's preprocessing stage. Further empirical studies, considering nonbinary gender identities and Indigenous peoples in Canada, are needed to confirm and expand this knowledge.
CLINICALTRIAL
OSF Registries qbph8; https://osf.io/qbph8
INTERNATIONAL REGISTERED REPORT
RR2-10.2196/46684
Language: English
ChatSOS: Vector Database Augmented Generative Question Answering Assistant in Safety Engineering
Haiyang Tang, Dongping Chen, Qingzhao Chu, et al.
Published: Jan. 1, 2024
Language: English
Human-like object concept representations emerge naturally in multimodal large language models
Research Square (Research Square), Journal Year: 2024, Volume and Issue: unknown
Published: Aug. 13, 2024
Abstract
The conceptualization and categorization of natural objects in the human mind have long intrigued cognitive scientists and neuroscientists, offering crucial insights into perception and cognition. Recently, the rapid development of Large Language Models (LLMs) has raised the attractive question of whether these models can also develop human-like object representations through exposure to vast amounts of linguistic and multimodal data. In this study, we combined behavioral and neuroimaging analysis methods to uncover how the object concept representations in LLMs correlate with those of humans. By collecting large-scale datasets of 4.7 million triplet judgments from an LLM and a Multimodal LLM (MLLM), we were able to derive low-dimensional embeddings that capture the underlying similarity structure of 1,854 objects. The resulting 66-dimensional embeddings were found to be highly stable and predictive, and exhibited semantic clustering akin to human mental representations. Interestingly, the interpretability of the dimensions suggests that the MLLM has developed human-like conceptual representations. We further demonstrated strong alignment between the identified model dimensions and neural activity patterns in many functionally defined brain ROIs (e.g., EBA, PPA, RSC, and FFA). This provides compelling evidence that the object representations in LLMs, while not identical to human ones, share fundamental commonalities that reflect key schemas of human conceptual knowledge. This study advances our understanding of machine intelligence and informs the development of more human-like artificial systems.
Language: English
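As a rough illustration of how a low-dimensional object embedding can be recovered from triplet odd-one-out judgments of the kind described in the abstract above, here is a minimal PyTorch sketch. It is not the authors' pipeline: the random triplets, the 100-object/20-dimension sizes, the L1 penalty, and the training budget are placeholder assumptions (the study itself reports 1,854 objects, 66 dimensions, and 4.7 million judgments collected from the models).

```python
# Illustrative SPoSE-style sketch: learn object embeddings from odd-one-out triplets.
import torch

n_objects, n_dims = 100, 20                      # placeholders; the study uses 1,854 objects / 66 dims
X = torch.nn.Parameter(torch.rand(n_objects, n_dims) * 0.1)
opt = torch.optim.Adam([X], lr=0.01)

def odd_one_out_loss(triplets, emb, l1=1e-3):
    """triplets[:, 2] is the odd one out, so (i, j) should be the most similar pair."""
    i, j, k = triplets[:, 0], triplets[:, 1], triplets[:, 2]
    s_ij = (emb[i] * emb[j]).sum(-1)             # dot-product similarities
    s_ik = (emb[i] * emb[k]).sum(-1)
    s_jk = (emb[j] * emb[k]).sum(-1)
    logits = torch.stack([s_ij, s_ik, s_jk], dim=-1)
    nll = -torch.log_softmax(logits, dim=-1)[:, 0].mean()   # probability of choosing pair (i, j)
    return nll + l1 * emb.abs().mean()                      # sparsity keeps dimensions interpretable

# Placeholder triplets; real judgments would come from the LLM/MLLM odd-one-out task.
triplets = torch.randint(0, n_objects, (5000, 3))
for step in range(200):
    opt.zero_grad()
    loss = odd_one_out_loss(triplets, X.clamp(min=0))       # non-negative embedding, SPoSE-style
    loss.backward()
    opt.step()
```

In this kind of setup, the learned non-negative dimensions are what get inspected for interpretable semantic clusters and compared against behavioral and neural data.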
Learning Chain of Counterfactual Thought for Bias-Robust Vision-Language Reasoning
Lecture Notes in Computer Science, Journal Year: 2024, Volume and Issue: unknown, P. 334–351
Published: Oct. 28, 2024
Language: English