Communications Psychology, Journal Year: 2024, Volume and Issue: 2(1), Published: June 3, 2024
In the present study, we investigate and compare reasoning in large language models (LLMs) and humans, using a selection of cognitive psychology tools traditionally dedicated to the study of (bounded) rationality. We presented to human participants and to an array of pretrained LLMs new variants of classical experiments, and cross-compared their performances. Our results showed that most of the included models presented errors akin to those frequently ascribed to error-prone, heuristic-based human reasoning. Notwithstanding this superficial similarity, an in-depth comparison between humans and LLMs indicated important differences from human-like reasoning, with the models' limitations disappearing almost entirely in more recent LLMs' releases. Moreover, we show that while it is possible to devise strategies to induce better performance, humans and machines are not equally responsive to the same prompting schemes. We conclude by discussing the epistemological implications and challenges of comparing human and machine behavior for both artificial intelligence and psychology.
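
As a concrete illustration of the kind of task and prompting manipulation this study describes, here is a minimal Python sketch: a classic cognitive-reflection item is posed under two prompting schemes and the reply is scored as correct versus the well-known intuitive error. The `ask` stub and the scoring heuristic are assumptions for illustration, not the authors' protocol.

# Administer one cognitive-reflection item under two prompting schemes
# and classify the model's reply. `ask` is a placeholder for any LLM call.

ITEM = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
        "more than the ball. How much does the ball cost (in dollars)?")

SCHEMES = {
    "direct": "{q}\nAnswer with a number only.",
    "step_by_step": "{q}\nThink through the problem step by step, "
                    "then give the final number on the last line.",
}

def ask(prompt: str) -> str:
    """Placeholder for a real LLM call; returns the classic intuitive
    error here so the scoring logic can be demonstrated offline."""
    return "The ball costs $0.10"

def score(reply: str, correct: float = 0.05, intuitive: float = 0.10) -> str:
    """Classify a reply as correct, the classic intuitive error, or other."""
    nums = [float(t.strip("$.,")) for t in reply.split()
            if t.strip("$.,").replace(".", "", 1).isdigit()]
    if not nums:
        return "other"
    if abs(nums[-1] - correct) < 1e-9:
        return "correct"
    if abs(nums[-1] - intuitive) < 1e-9:
        return "intuitive_error"
    return "other"

for name, template in SCHEMES.items():
    print(name, "->", score(ask(template.format(q=ITEM))))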
Digital Health, Journal Year: 2023, Volume and Issue: 9, Published: Jan. 1, 2023
The utilization of artificial intelligence (AI) in clinical practice has increased and is evidently contributing to improved diagnostic accuracy, optimized treatment planning, and better patient outcomes. The rapid evolution of AI, especially generative AI and large language models (LLMs), has reignited the discussions about their potential impact on the healthcare industry, particularly regarding the role of healthcare providers. Concerning questions, such as “can AI replace doctors?” and “will doctors who are using AI replace those not using it?”, have been echoed. To shed light on this debate, this article focuses on emphasizing the augmentative role of AI in healthcare, underlining that AI is aimed to complement, rather than replace, healthcare providers. A fundamental solution emerges with human–AI collaboration, which combines the cognitive strengths of healthcare providers with the analytical capabilities of AI. A human-in-the-loop (HITL) approach ensures that AI systems are guided, communicated, and supervised by human expertise, thereby maintaining the safety and quality of healthcare services. Finally, AI adoption can be forged further through organizational processes informed by the HITL approach to improve multidisciplinary teams in the loop. This can create a paradigm shift, complementing and enhancing the skills of healthcare providers, and ultimately leading to better service quality, patient outcomes, and a more efficient healthcare system.
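
A minimal sketch of the HITL gating pattern this abstract describes, assuming a toy `Suggestion` record and a `clinician_review` hook standing in for a real review workflow; it illustrates the idea, not any clinical system's API.

# Human-in-the-loop gate: an AI suggestion is never applied until a
# clinician reviews it; low-confidence suggestions are flagged for triage.

from dataclasses import dataclass

@dataclass
class Suggestion:
    patient_id: str
    text: str          # e.g., a draft treatment-plan note
    confidence: float  # model's self-reported confidence in [0, 1]

def clinician_review(s: Suggestion) -> bool:
    """Placeholder for the human step: a real system would surface a
    review UI; here nothing is auto-approved, to stay conservative."""
    print(f"[REVIEW NEEDED] {s.patient_id}: {s.text} (conf={s.confidence:.2f})")
    return False

def apply_with_hitl(s: Suggestion) -> str:
    flag = " LOW-CONFIDENCE" if s.confidence < 0.8 else ""
    print(f"Routing suggestion{flag} to clinician...")
    return "applied" if clinician_review(s) else "held for human decision"

print(apply_with_hitl(Suggestion("pt-001", "Start drug X, 10 mg daily", 0.62)))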
Proceedings of the National Academy of Sciences, Journal Year: 2023, Volume and Issue: 120(51), Published: Dec. 12, 2023
As large language models (LLMs) like GPT become increasingly prevalent, it is essential that we assess their capabilities beyond language processing. This paper examines the economic rationality of GPT by instructing it to make budgetary decisions in four domains: risk, time, social, and food preferences. We measure economic rationality by assessing the consistency of GPT’s decisions with utility maximization in classic revealed preference theory. We find that GPT’s decisions are largely rational in each domain and demonstrate a higher rationality score than those of human subjects in a parallel experiment and in the literature. Moreover, the estimated preference parameters are slightly different from those of human subjects and exhibit a lower degree of heterogeneity. We also find that the rationality scores are robust to the degree of randomness and to demographic settings such as age and gender, but are sensitive to contexts based on the frames of the choice situations. These results suggest the potential of LLMs to make good decisions and the need to further understand their capabilities, limitations, and underlying mechanisms.
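
The consistency test referenced here (utility maximization in revealed preference theory) can be made concrete with a small sketch: checking observed budget choices against the Generalized Axiom of Revealed Preference (GARP). The toy prices and bundles below are invented for illustration.

# GARP check: x_t is weakly revealed preferred to x_s if x_s was
# affordable when x_t was chosen; after taking the transitive closure,
# no bundle may be strictly revealed preferred to one ranked above it.

import numpy as np

def garp_consistent(prices, bundles) -> bool:
    P = np.asarray(prices, float)   # (T, n) price vectors
    X = np.asarray(bundles, float)  # (T, n) chosen bundles
    cost = P @ X.T                  # cost[t, s] = p_t . x_s
    exp = np.diag(cost)             # expenditure at each observation
    R = exp[:, None] >= cost        # direct weak revealed preference
    for k in range(len(P)):         # transitive closure (Floyd-Warshall)
        R = R | (R[:, [k]] & R[[k], :])
    strict = exp[:, None] > cost    # strict direct revealed preference
    # Violation iff x_t R* x_s while x_s is strictly preferred to x_t.
    return not np.any(R & strict.T)

p = [[1, 2], [2, 1]]
x_ok  = [[2, 1], [1, 2]]   # consistent with utility maximization
x_bad = [[1, 2], [2, 1]]   # each choice strictly dominates the other
print(garp_consistent(p, x_ok), garp_consistent(p, x_bad))  # True False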
Research Square (Research Square), Journal Year: 2023, Volume and Issue: unknown, Published: Aug. 28, 2023
Abstract
The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant text. As LLMs increasingly power conversational agents, the synthetic personality embedded in these models, by virtue of training on large amounts of human data, is becoming increasingly important. Since personality is a key factor determining the effectiveness of communication, we present a comprehensive method for administering and validating personality tests on widely-used LLMs, as well as for shaping the personality in the text generated by such LLMs. Applying this method, we found: 1) personality measurements in the outputs of some LLMs under specific prompting configurations are reliable and valid; 2) evidence of the reliability and validity of LLM personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific personality profiles. We discuss the application and ethical implications of this measurement and shaping method, in particular regarding the responsible use of LLMs.
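
One ingredient of validating a psychometric test is internal-consistency reliability; the sketch below computes Cronbach's alpha over simulated item scores. The scores are fabricated purely to exercise the formula, and the paper's actual validation battery is broader.

# Cronbach's alpha over repeated administrations of a Likert-style scale:
# alpha = k/(k-1) * (1 - sum of item variances / variance of scale total).

import numpy as np

def cronbach_alpha(scores) -> float:
    """scores: (n_administrations, n_items) matrix of item responses."""
    s = np.asarray(scores, float)
    k = s.shape[1]
    item_var = s.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = s.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1 - item_var / total_var)

# e.g., five simulated administrations of a 4-item subscale (1-5 scale)
responses = np.array([
    [4, 5, 4, 4],
    [5, 5, 4, 5],
    [4, 4, 3, 4],
    [5, 5, 5, 5],
    [3, 4, 3, 3],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # high -> internally consistent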
Proceedings of the National Academy of Sciences, Journal Year: 2024, Volume and Issue: 121(21), Published: May 9, 2024
Generative AI that can produce realistic text, images, and other human-like outputs is currently transforming many different industries. Yet it is not yet known how such tools might influence social science research. I argue that generative AI has the potential to improve survey research, online experiments, automated content analyses, agent-based models, and other techniques commonly used to study human behavior. In the second section of this article, I discuss the limitations of generative AI. I examine how bias in the data used to train these tools can negatively impact social science research, as well as a range of challenges related to ethics, replication, environmental impact, and the proliferation of low-quality research. I conclude by arguing that social scientists can address these challenges by creating open-source infrastructure for research on human behavior. Such infrastructure is not only necessary to ensure broad access to high-quality research tools, I argue, but also because the progress of AI will require a deeper understanding of the social forces that guide human behavior.
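
As one concrete version of the agent-based-models use case mentioned above, here is a toy Python loop in which each agent's next utterance would come from a generative model conditioned on a persona; `generate` is a hypothetical stand-in that returns canned text so the loop runs offline.

# Toy LLM-persona agent-based simulation: agents take turns responding
# to a shared conversation history, each conditioned on its persona.

PERSONAS = [
    "a 34-year-old nurse who is skeptical of new technology",
    "a 21-year-old student who is enthusiastic about AI",
]

def generate(persona: str, history: list[str]) -> str:
    """Placeholder for a generative-model call; canned reply for demo."""
    return f"({persona.split()[1]} speaker) responds to: {history[-1]}"

def simulate(topic: str, turns: int = 4) -> list[str]:
    history = [topic]
    for t in range(turns):
        speaker = PERSONAS[t % len(PERSONAS)]
        history.append(generate(speaker, history))
    return history

for line in simulate("Should AI be used to grade student essays?"):
    print(line)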
npj Digital Medicine, Journal Year: 2024, Volume and Issue: 7(1), Published: Feb. 19, 2024
Large pre-trained language models (LLMs) have been shown to have significant potential in few-shot learning across various fields, even with minimal training data. However, their ability to generalize to unseen tasks in more complex fields, such as biology, has yet to be fully evaluated. LLMs can offer a promising alternative approach for biological inference, particularly in cases where structured data and sample size are limited, by extracting prior knowledge from text corpora. Our proposed approach uses LLMs to predict the synergy of drug pairs in rare tissues that lack structured data and features. Our experiments, which involved seven different cancer types, demonstrated that the LLM-based prediction model achieved significant accuracy with very few or zero samples. Our model, CancerGPT (with ~124M parameters), was comparable to the larger fine-tuned GPT-3 model (with ~175B parameters). Our research is the first to tackle drug pair synergy prediction in rare tissues with limited data. We are also the first to utilize an LLM-based prediction model for biological reaction prediction tasks.
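
The general few-shot recipe behind this kind of LLM-based tabular prediction can be sketched as serializing a drug-pair record into text and prepending k labeled examples; the field names and template below are illustrative, not the authors' exact prompt.

# Tabular-to-text few-shot prompting: each record becomes a sentence,
# labeled shots are prepended, and the query is left open for the model.

def to_sentence(rec: dict) -> str:
    return (f"Drug pair: {rec['drug_a']} and {rec['drug_b']} "
            f"in {rec['tissue']} tissue ({rec['cancer_type']}). "
            f"Is this pair synergistic? Answer yes or no.")

def few_shot_prompt(shots: list[tuple[dict, str]], query: dict) -> str:
    lines = [to_sentence(r) + f" Answer: {y}" for r, y in shots]
    lines.append(to_sentence(query) + " Answer:")
    return "\n".join(lines)

shots = [({"drug_a": "drug1", "drug_b": "drug2",
           "tissue": "bone", "cancer_type": "Ewing sarcoma"}, "yes")]
query = {"drug_a": "drug3", "drug_b": "drug4",
         "tissue": "bone", "cancer_type": "Ewing sarcoma"}
print(few_shot_prompt(shots, query))  # zero-shot: pass an empty shots list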
Big Data and Cognitive Computing, Journal Year: 2023, Volume and Issue: 7(3), P. 124-124, Published: June 27, 2023
Large Language Models (LLMs) are becoming increasingly integrated into our lives. Hence, it is important to understand the biases present in their outputs in order to avoid perpetuating harmful stereotypes, which originate in our own flawed ways of thinking. This challenge requires developing new benchmarks and methods for quantifying affective and semantic bias, keeping in mind that LLMs act as psycho-social mirrors that reflect the views and tendencies prevalent in society. One such tendency that has harmful negative effects is the global phenomenon of anxiety toward math and STEM subjects. In this study, we introduce a novel application of network science and cognitive psychology to understand biases towards math and STEM fields in outputs from ChatGPT, GPT-3, GPT-3.5, and GPT-4. Specifically, we use behavioral forma mentis networks (BFMNs) to understand how these models frame math and STEM disciplines in relation to other concepts. We use data obtained by probing the LLMs in a language generation task that has previously been applied to humans. Our findings indicate that LLMs have negative perceptions of math and STEM fields, associating math with negative concepts in 6 cases out of 10. We observe significant differences across OpenAI’s models: newer versions (i.e., GPT-4) produce 5× semantically richer, more emotionally polarized perceptions with fewer negative associations compared to older versions and to N=159 high-school students. These findings suggest that advances in the architecture of LLMs may lead to less biased models that could even perhaps someday aid in reducing harmful stereotypes in society rather than perpetuating them.
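
A minimal sketch of the BFMN idea, assuming networkx is available: free associations form an undirected network, each concept carries a valence rating, and a target concept's framing is read off the mean valence of its neighborhood. The associations and valences below are invented for illustration, not the paper's data.

# Build a small forma mentis network from free associations and check
# whether "math" sits in a negatively valenced neighborhood.

import networkx as nx

associations = [("math", "anxiety"), ("math", "logic"),
                ("math", "failure"), ("science", "discovery")]
valence = {"anxiety": -0.8, "logic": 0.3, "failure": -0.9,
           "discovery": 0.7, "math": 0.0, "science": 0.2}

G = nx.Graph(associations)

def neighborhood_valence(G, node):
    """Mean valence of a concept's associates; < 0 suggests negative framing."""
    vals = [valence[n] for n in G.neighbors(node)]
    return sum(vals) / len(vals)

print(f"math neighborhood valence: {neighborhood_valence(G, 'math'):+.2f}")
# negative here, mirroring the "math associated with negative concepts" finding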