Journal of Transcultural Communication,
Journal Year:
2024,
Volume and Issue:
unknown
Published: Sept. 16, 2024
Abstract
This paper delves into the intricate relationship between Large Language Models (LLMs) and cultural bias. It underscores the significant impact LLMs can have on shaping a more equitable and culturally sensitive digital landscape, while also addressing the challenges that arise when integrating these powerful AI tools. The paper emphasizes the immense significance of LLMs in contemporary research and applications, as they underpin many systems and algorithms. However, their potential role in perpetuating or mitigating cultural bias remains a pressing issue warranting extensive analysis. Cultural bias stems from various intertwined factors; the following analysis categorizes them into three dimensions: data quality, algorithm design, and user interaction dynamics. Furthermore, the impacts on cultural identity and linguistic diversity are scrutinized, highlighting the interplay between technology and culture. The paper advocates responsible LLM development, outlining mitigation strategies such as ethical guidelines, diverse training data, feedback mechanisms, and transparency measures. In conclusion, cultural bias is not solely a problem but also presents an opportunity: LLMs can enhance our awareness and critical understanding of our own biases, fostering curiosity and respect for diverse perspectives.
We explore the capabilities of an augmented democracy system built on off-the-shelf LLMs fine-tuned on data summarizing individual preferences across 67 policy proposals collected during the 2022 Brazilian presidential election. We use a train-test cross-validation setup to estimate the accuracy with which the LLMs predict both: each subject's individual political choices and the aggregate preferences of the full sample of participants. At the individual level, the accuracy of out-of-sample predictions lies in the range 69%–76%, and the models are significantly better at predicting the choices of the liberal and college-educated population. At the population level, we aggregate preferences using an adaptation of the Borda score and compare the ranking of policy proposals obtained from a probabilistic sample of participants and from data augmented using LLMs. We find that the augmented data predicts the preferences of the full sample of participants better than probabilistic samples alone when these samples represent less than 30% to 40% of the total population. These results indicate that LLMs are potentially useful for the construction of systems of augmented democracy.
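To make the aggregation step concrete, here is a minimal sketch of a Borda-style aggregation and ranking comparison. The synthetic preference data, the simple rank-sum variant of the Borda score, and the use of Spearman correlation to compare rankings are all illustrative assumptions, not the authors' exact adaptation.

```python
import numpy as np
from scipy.stats import spearmanr

def to_ranks(utilities: np.ndarray) -> np.ndarray:
    """Convert per-voter utilities to ranks (0 = most preferred)."""
    return (-utilities).argsort(axis=1).argsort(axis=1)

def borda_scores(ranks: np.ndarray) -> np.ndarray:
    """Rank-sum Borda score: rank r earns (n_proposals - 1 - r) points,
    summed over voters. An illustrative variant, not the paper's exact
    adaptation."""
    n_proposals = ranks.shape[1]
    return (n_proposals - 1 - ranks).sum(axis=0)

rng = np.random.default_rng(0)
n_voters, n_proposals = 1000, 67

# Hypothetical utilities for the full population of participants.
util = rng.random((n_voters, n_proposals))
full_ranks = to_ranks(util)

# A 20% probabilistic sample versus noisy LLM-style predictions covering
# every participant (synthetic stand-ins, not real model output).
idx = rng.choice(n_voters, size=n_voters // 5, replace=False)
sample_ranks = full_ranks[idx]
augmented_ranks = to_ranks(util + rng.normal(0.0, 0.3, util.shape))

# Compare each aggregate ranking against the full-population ranking.
rho_s, _ = spearmanr(borda_scores(full_ranks), borda_scores(sample_ranks))
rho_a, _ = spearmanr(borda_scores(full_ranks), borda_scores(augmented_ranks))
print(f"sample vs full:    rho = {rho_s:.3f}")
print(f"augmented vs full: rho = {rho_a:.3f}")
```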
PLOS Digital Health,
Journal Year:
2024,
Volume and Issue:
3(11), P. e0000651 - e0000651
Published: Nov. 7, 2024
Biases in medical artificial intelligence (AI) arise and compound throughout the AI lifecycle. These biases can have significant clinical consequences, especially in applications that involve clinical decision-making. Left unaddressed, biased medical AI can lead to substandard clinical decisions and the perpetuation and exacerbation of longstanding healthcare disparities. We discuss potential biases that can arise at different stages of the AI development pipeline and how they affect AI algorithms and clinical decision-making. Bias can occur in data features and labels, model development and evaluation, deployment, and publication. Insufficient sample sizes for certain patient groups can result in suboptimal performance, algorithm underestimation, and clinically unmeaningful predictions. Missing patient findings can also produce biased model behavior, including capturable but nonrandomly missing data, such as diagnosis codes, and data that is not usually or not easily captured, such as social determinants of health. Expertly annotated labels used to train supervised learning models may reflect implicit cognitive biases or substandard care practices. Overreliance on performance metrics during model development may obscure bias and diminish a model's clinical utility. When applied to data outside the training cohort, model performance may deteriorate from previous validation and may do so differentially across subgroups. How end users interact with deployed solutions can introduce bias. Finally, where models are developed and published, and by whom, impacts the trajectories and priorities of future medical AI development. Solutions to mitigate bias must be implemented with care, and include the collection of large and diverse data sets, statistical debiasing methods, thorough model evaluation, an emphasis on model interpretability, and standardized bias reporting and transparency requirements. Prior to real-world implementation in clinical settings, rigorous validation through clinical trials is critical to demonstrate unbiased application. Addressing biases across model development stages is crucial for ensuring that all patients benefit equitably from the future of medical AI.
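As one concrete illustration of the subgroup evaluation the authors call for, the sketch below stratifies a model's discrimination metric by patient group. The group labels, the choice of AUROC, and the synthetic validation data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000

# Synthetic stand-in for a validation cohort: 'group' is a hypothetical
# patient attribute, 'y_true' the outcome, 'y_score' the model output.
df = pd.DataFrame({
    "group": rng.choice(["A", "B", "C"], size=n, p=[0.7, 0.2, 0.1]),
    "y_true": rng.integers(0, 2, size=n),
})
# Make the synthetic model deliberately noisier on the smallest group.
noise = np.where(df["group"] == "C", 0.9, 0.3)
df["y_score"] = df["y_true"] + rng.normal(0.0, noise)

# An overall AUROC can mask differential performance across subgroups.
print(f"overall AUROC: {roc_auc_score(df['y_true'], df['y_score']):.3f}")
for g, part in df.groupby("group"):
    auc = roc_auc_score(part["y_true"], part["y_score"])
    print(f"group {g}: n = {len(part):5d}, AUROC = {auc:.3f}")
```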
Short videos on social media are the dominant way young people consume content. News outlets aim to reach audiences through news reels (short videos conveying news) but struggle to translate traditional journalistic formats into short, entertaining videos. To translate news articles into reels, we support journalists in reframing the narrative. In the literature, narrative framing is a high-level structure that shapes the overall presentation of a story. We identified three narrative framings for reels that adapt social media norms but preserve news value, each with a different balance of information and entertainment. We introduce ReelFramer, a human-AI co-creative system that helps journalists translate print articles into scripts and storyboards. ReelFramer supports exploring multiple narrative framings to find one appropriate to the story. The AI suggests foundational narrative details, including characters, plot, setting, and key information. ReelFramer also supports visual framing; the AI suggests character and visual detail designs before generating a full storyboard. Our studies show that narrative framing introduces the diversity necessary to translate various articles into reels, and that establishing foundational details helps generate scripts that are more relevant and coherent. We also discuss the benefits of using narrative framing and foundational details in content retargeting.
European Journal of Physics,
Journal Year:
2023,
Volume and Issue:
45(2), P. 025701 - 025701
Published: Dec. 11, 2023
Abstract
The paper aims to fulfil three main functions: (1) to serve as an introduction for the physics education community to the functioning of large language models (LLMs), (2) to present a series of illustrative examples demonstrating how prompt-engineering techniques can impact LLMs' performance on conceptual physics tasks, and (3) to discuss the potential implications of understanding LLMs and prompt engineering for physics teaching and learning. We first summarise existing research on the performance of a popular LLM-based chatbot (ChatGPT) on physics tasks. We then give a basic account of how LLMs work, illustrate essential features of their functioning, and discuss their strengths and limitations. Equipped with this knowledge, we discuss some challenges in generating useful output with ChatGPT-4 in the context of introductory physics, paying special attention to conceptual questions and problems. We then provide a condensed overview of the relevant literature on prompt engineering and demonstrate through selected examples how it can be employed to improve ChatGPT-4's output. Qualitatively studying these examples provides additional insights into ChatGPT's functioning and its utility in physics problem-solving. Finally, we consider how the insights from this paper can inform the use of LLMs in the teaching and learning of physics.
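For readers unfamiliar with how such prompt-engineering comparisons are run in practice, the following is a minimal sketch using the OpenAI Python SDK. The model name, the specific physics question, and the persona plus step-by-step wording are assumptions for illustration, not prompts taken from the paper.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = ("A ball is thrown straight up. At the highest point of its "
            "flight, what are its velocity and acceleration?")

# Naive prompt: the bare question, no further instruction.
naive = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": question}],
)

# Engineered prompt: role assignment plus a request for explicit reasoning.
engineered = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a careful physics tutor. Reason step by step "
                    "from Newton's laws before stating your final answer."},
        {"role": "user", "content": question},
    ],
)

print("naive:\n", naive.choices[0].message.content)
print("\nengineered:\n", engineered.choices[0].message.content)
```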
Future Internet,
Journal Year:
2023,
Volume and Issue:
15(10), P. 336 - 336
Published: Oct. 13, 2023
Historically, mastery of writing was deemed essential to human progress. However, recent advances in generative AI have marked an inflection point in this narrative, including for scientific writing. This article provides a comprehensive analysis of the capabilities and limitations of six AI chatbots in scholarly writing in the humanities and archaeology. The methodology was based on tagging AI-generated content for quantitative accuracy and qualitative precision by human experts. Quantitative accuracy assessed factual correctness in a manner similar to grading students, while qualitative precision gauged the scientific contribution in a manner similar to reviewing a scientific article. In the quantitative test, ChatGPT-4 scored near the passing grade (−5), whereas ChatGPT-3.5 (−18), Bing (−21) and Bard (−31) were not far behind. Claude 2 (−75) and Aria (−80) scored much lower. All chatbots, but especially ChatGPT-4, demonstrated proficiency in recombining existing knowledge, but all failed to generate original scientific content. As a side note, our results suggest that with ChatGPT-4 the size of large language models has reached a plateau. Furthermore, the paper underscores the intricate and recursive nature of human research. The process of transforming raw data into refined knowledge is computationally irreducible, highlighting the challenges AI chatbots face in emulating human originality in scientific writing. Our results apply to the state of affairs in the third quarter of 2023. In conclusion, while AI chatbots have revolutionised content generation, their ability to produce original scientific contributions remains limited. We expect this to change in the future as current model-based chatbots evolve into model-powered software.
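To illustrate how a tag-and-grade tally of this kind could be computed, here is a minimal sketch; the tag set and the penalty weights are hypothetical stand-ins, not the authors' rubric.

```python
# Hypothetical tags assigned by a human expert to each statement in a
# chatbot-generated essay, with a penalty weight per error type. Both
# the tag names and the weights are illustrative, not the paper's rubric.
PENALTIES = {"correct": 0, "imprecise": -1, "wrong": -3, "fabricated": -5}

def grade(tagged_statements: list[str]) -> int:
    """Sum the penalties over all tagged statements, analogous to
    grading a student essay for factual correctness."""
    return sum(PENALTIES[tag] for tag in tagged_statements)

essay_tags = ["correct", "correct", "imprecise", "wrong", "correct", "fabricated"]
print(f"quantitative accuracy score: {grade(essay_tags)}")  # -> -9
```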
2022 ACM Conference on Fairness, Accountability, and Transparency,
Journal Year:
2024,
Volume and Issue:
67, P. 2454 - 2469
Published: June 3, 2024
Large language models (LLMs) are increasingly capable of providing users with advice in a wide range of professional domains, including legal advice. However, relying on LLMs for legal queries raises concerns due to the significant expertise required and the potential real-world consequences of the advice. To explore when and why LLMs should or should not provide legal advice to users, we conducted workshops with 20 legal experts using methods inspired by case-based reasoning. The provided realistic queries ("cases") allowed experts to examine granular, situation-specific concerns and overarching technical constraints, producing a concrete set of contextual considerations for LLM developers. By synthesizing the factors that impacted the appropriateness of LLM responses, we present a 4-dimension framework: (1) User attributes and behaviors, (2) Nature of queries, (3) AI capabilities, and (4) Social impacts. We share the experts' recommendations for LLM response strategies, which center around helping users identify the 'right questions to ask' and relevant information rather than providing definitive judgments. Our findings reveal novel legal considerations, such as the unauthorized practice of law, confidentiality, and liability for inaccurate advice, that have been overlooked in the literature. The case-based deliberation method enabled us to elicit fine-grained, practice-informed insights that surpass those from de-contextualized surveys or speculative principles. These findings underscore the applicability of our method for translating domain-specific professional knowledge and practices into policies that can guide LLM behavior in a more responsible direction.
Journal of Cancer Research and Clinical Oncology,
Journal Year:
2024,
Volume and Issue:
150(3)
Published: March 19, 2024
Despite advanced technologies in breast cancer management, challenges remain in efficiently interpreting vast clinical data for patient-specific insights. We reviewed the literature on how large language models (LLMs) such as ChatGPT might offer solutions in this field.