Journal of China Computer-Assisted Language Learning, Journal Year: 2024, Volume and Issue: unknown, Published: Oct. 31, 2024
Abstract
The rapid proliferation of ChatGPT has incited debates regarding its impact on human writing. Amid concerns about declining writing standards, this study investigates the role of ChatGPT in facilitating writing, especially among language learners. Using a case study approach, it examines the experiences of Kailing, a doctoral student who integrates ChatGPT throughout her writing process. The study employs activity theory as a lens for understanding writing with generative AI tools, and the data analyzed include semi-structured interviews, writing samples, and GPT logs. Results indicate that Kailing effectively collaborates with ChatGPT across various writing stages while preserving her distinct authorial voice and agency. This underscores the potential of such tools to enhance language learners' writing without overshadowing individual authenticity. The study offers a critical exploration of how ChatGPT is utilized in the writing process and of how a student's authentic voice is preserved when engaging with the tool.
Scalable and low-cost AI assistance has the potential to improve firm decision-making and economic performance. However, running a business involves a myriad of open-ended problems, making it difficult to know whether recent AI advances can help business owners make better decisions in real-world markets. In a field experiment with Kenyan entrepreneurs, we assessed the impact of AI advice on small business revenues and profits by randomizing access to a GPT-4-powered assistant via WhatsApp. While we are unable to reject the null hypothesis that there is no average treatment effect, we find the treatment effect for entrepreneurs who were high performing at baseline to be 0.27 standard deviations greater than for low performers. Sub-sample analyses show that high performers benefited by just over 15% from the assistant, whereas low performers did about 8% worse. This increase in performance inequality does not stem from differences in the questions posed to the AI or the advice received from it, but from how entrepreneurs selected and implemented the advice they received. More broadly, our findings demonstrate that generative AI is already capable of impacting real, open-ended, unstructured business decisions, though in uneven and unexpected ways.
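To make the reported heterogeneity concrete, the sketch below shows one common way such a gap could be estimated: an OLS regression with a treatment-by-baseline interaction. This is an illustrative reconstruction, not the authors' analysis; all variable names, the simulated data, and the chosen numbers are hypothetical and only mirror the stated 0.27 SD gap.

```python
# Illustrative sketch (not the authors' code): heterogeneous treatment effects via
# an interaction term in OLS, using statsmodels. All names and numbers are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
treated = rng.integers(0, 2, n)          # randomized access to the AI assistant
high_baseline = rng.integers(0, 2, n)    # above-median performance at baseline

# Simulated standardized outcome: the effect for high performers is 0.27 SD
# larger than for low performers (values chosen only to mirror the stated gap).
tau_low = -0.08
tau_high = tau_low + 0.27
profit_z = (tau_high * treated * high_baseline
            + tau_low * treated * (1 - high_baseline)
            + rng.normal(0, 1, n))

X = pd.DataFrame({
    "treated": treated,
    "high_baseline": high_baseline,
    "treated_x_high": treated * high_baseline,
})
X = sm.add_constant(X)
fit = sm.OLS(profit_z, X).fit(cov_type="HC1")   # robust standard errors
# The coefficient on 'treated_x_high' estimates the extra effect for high performers.
print(fit.params["treated_x_high"])
```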
SSRN Electronic Journal, Journal Year: 2023, Volume and Issue: unknown, Published: Jan. 1, 2023
Large language models (LLMs) such as OpenAI's GPT series have shown remarkable capabilities in generating fluent and coherent text across various domains. We compare the ideation performance of ChatGPT-4, a chatbot based on a state-of-the-art LLM, with that of students at an elite university. ChatGPT-4 can generate ideas much faster and more cheaply than the students; the ideas are on average of higher quality (as measured by purchase-intent surveys) and exhibit higher variance in quality. More importantly, the vast majority of the best ideas in the pooled sample were generated by ChatGPT, not by the students. Providing ChatGPT with a few examples of highly rated ideas further increases its performance. We discuss the implications of these findings for the management of innovation.
International Journal of Production Research, Journal Year: 2024, Volume and Issue: 62(17), P. 6120 - 6145, Published: Jan. 31, 2024
This research examines the transformative potential of artificial intelligence (AI) in general, and Generative AI (GAI) in particular, for supply chain and operations management (SCOM). Through the lens of the resource-based view and based on key capabilities such as learning, perception, prediction, interaction, adaptation, and reasoning, we explore how GAI can impact 13 distinct SCOM decision-making areas. These areas include, but are not limited to, demand forecasting, inventory management, supply chain design, and risk management. With its outcomes, this study provides a comprehensive understanding of GAI's functionality and applications in the SCOM context, offering a practical framework for both practitioners and researchers. The proposed framework systematically identifies where GAI can be applied in SCOM, focussing on decision enhancement, process optimisation, investment prioritisation, and skills development. Managers can use it as guidance to evaluate their operational processes and identify where GAI can deliver improved efficiency, accuracy, resilience, and overall effectiveness. The study underscores that GAI, with its multifaceted applications, can open revolutionary opportunities with substantial implications for future SCOM practices, innovations, and research.
Computers and Education: Artificial Intelligence, Journal Year: 2024, Volume and Issue: 6, P. 100225 - 100225, Published: April 18, 2024
Artificial intelligence technologies are rapidly advancing. As part of this development, large language models (LLMs) are increasingly being used when humans interact with systems based on artificial intelligence (AI), posing both new opportunities and challenges. When interacting with an LLM-based AI system in a goal-directed manner, prompt engineering has evolved as the skill of formulating precise and well-structured instructions to elicit desired responses or information from the LLM, optimizing the effectiveness of the interaction. However, research on the perspectives of non-experts using LLMs through prompting, and on how AI literacy affects their prompting behavior, is lacking. This aspect is particularly important considering the implications of LLMs in the context of higher education. In the present study, we address this issue, introduce a skill-based approach to prompt engineering, and explicitly consider the role of non-experts' (students') AI literacy in their prompting skills. We also provide qualitative insights into students' intuitive prompting behaviors towards LLM-based systems. The results show that higher-quality prompting skills predict the quality of LLM output, suggesting that such skills are indeed required for the effective use of generative AI tools. In addition, certain aspects of AI literacy can play a role in the targeted adaptation of prompts within the interaction. We therefore argue for the integration of corresponding educational content into current curricula to enable a hybrid intelligent society in which students can effectively use generative AI tools such as ChatGPT.
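To illustrate what a "precise, well-structured instruction" in the sense above can look like in practice, the sketch below assembles a prompt from explicit components (role, task, context, constraints, output format). The template, field names, and example content are illustrative assumptions, not material from the study.

```python
# Minimal illustrative sketch of structured prompt construction; all names and
# example text are hypothetical and not taken from the study.

def build_prompt(role, task, context, constraints, output_format):
    """Assemble a structured prompt from explicit components."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Constraints:\n{constraint_lines}\n"
        f"Answer format: {output_format}"
    )

prompt = build_prompt(
    role="a tutor for first-year statistics students",
    task="explain the difference between standard deviation and standard error",
    context="the student knows the formulas but not the intuition",
    constraints=["use at most 150 words", "include one small numeric example"],
    output_format="a short paragraph followed by the example",
)
print(prompt)  # this text would then be sent to an LLM such as ChatGPT
```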
The Journal of Legal Analysis, Journal Year: 2024, Volume and Issue: 16(1), P. 64 - 93, Published: Jan. 1, 2024
Abstract
Do large language models (LLMs) know the law? LLMs are increasingly being used to augment legal practice, education, and research, yet their revolutionary potential is threatened by the presence of "hallucinations": textual output that is not consistent with the facts. We present the first systematic evidence of these hallucinations in public-facing LLMs, documenting trends across jurisdictions, courts, time periods, and cases. Using OpenAI's ChatGPT 4 and other public models, we show that LLMs hallucinate at least 58% of the time, struggle to predict their own hallucinations, and often uncritically accept users' incorrect assumptions. We conclude by cautioning against the rapid and unsupervised integration of popular LLMs into legal tasks, and we develop a typology to guide future research in this area.
In our era of rapid technological advancement, the research landscape for writing assistants has become increasingly fragmented across various communities. We seek to address this challenge by proposing a design space as a structured way to examine and explore the multidimensional space of intelligent and interactive writing assistants. Through a large community collaboration, we explore five aspects of writing assistants: task, user, technology, interaction, and ecosystem. Within each aspect, we define dimensions (i.e., fundamental components of an aspect) and codes (i.e., potential options for each dimension) by systematically reviewing 115 papers. Our design space aims to offer researchers and designers a practical tool to navigate, comprehend, and compare the possibilities of writing assistants, and to aid in envisioning new writing assistants.
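As a rough illustration of the aspect, dimension, and code hierarchy described above, the sketch below encodes it as a simple data structure. The specific dimensions and codes shown are illustrative placeholders, not the paper's actual design space.

```python
# Hypothetical encoding of an aspect -> dimension -> code hierarchy; the entries
# are illustrative only and do not reproduce the paper's design space.
from dataclasses import dataclass, field

@dataclass
class Dimension:
    name: str                                     # a fundamental component of an aspect
    codes: list = field(default_factory=list)     # potential options for the dimension

design_space = {
    "task":        [Dimension("writing stage", ["planning", "drafting", "revision"])],
    "user":        [Dimension("expertise", ["novice writer", "professional writer"])],
    "technology":  [Dimension("model type", ["rule-based", "large language model"])],
    "interaction": [Dimension("initiative", ["user-initiated", "system-initiated"])],
    "ecosystem":   [Dimension("deployment", ["standalone tool", "editor plugin"])],
}

for aspect, dims in design_space.items():
    for d in dims:
        print(f"{aspect}: {d.name} -> {', '.join(d.codes)}")
```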
Clinical Chemistry and Laboratory Medicine (CCLM), Journal Year: 2023, Volume and Issue: 62(5), P. 835 - 843, Published: Nov. 29, 2023
In the rapidly evolving landscape of artificial intelligence (AI), scientific publishing is experiencing significant transformations. AI tools, while offering unparalleled efficiencies in paper drafting and peer review, also introduce notable ethical concerns.
AI and Ethics, Journal Year: 2024, Volume and Issue: 4(3), P. 791 - 804, Published: Feb. 23, 2024
Abstract
This paper examines the ethical obligations companies have when implementing generative Artificial Intelligence (AI). We point to the potential cyber security risks companies are exposed to when rushing to adopt generative AI solutions or buying into the "AI hype". While the benefits of generative AI for business have been widely touted, the inherent risks associated with it are less well publicised. There are growing concerns that the race to integrate generative AI is not being accompanied by adequate safety measures. The rush to buy into the hype and not fall behind the competition is potentially exposing companies to broad and possibly catastrophic cyber-attacks and breaches. In this paper, we outline the significant threats generative AI models pose, including 'backdoors' that could compromise user data and the risk of 'poisoned' models producing false results. In light of these concerns, we discuss companies' moral obligations by considering the principles of beneficence, non-maleficence, autonomy, justice, and explicability. We identify two examples of concern, overreliance on and over-trust in AI, both of which can negatively influence business decisions, leaving companies vulnerable to threats. The paper concludes by recommending a set of checklists for implementing generative AI in the business environment so as to minimise risks, based on the responsibilities and examples of concern discussed.
Journal of Creativity, Journal Year: 2024, Volume and Issue: 34(2), P. 100079 - 100079, Published: Feb. 5, 2024
The release of ChatGPT has sparked quite a bit of interest about creativity in the context of artificial intelligence (AI), with theorizing and empirical research asking questions about the nature of creativity (both human and artificially produced) and the valuing of work produced by humans versus by artificial means. In this article, we discuss one specific scenario identified by the community: co-creation, or the use of AI as a tool that could augment human creativity. We present emerging research relevant to how AI can be used as a co-creator on a continuum of four levels of creativity, from mini-c/creativity in learning and little-c/everyday creativity to Pro-C/professional and Big-C/eminent creativity. In this discussion, AI is defined broadly, to include not only large language models (e.g., ChatGPT), which might approach general AI, but also other computer programs that perform tasks typically understood as requiring intelligence. We conclude by considering future directions for co-creation across the four c's.
Generative AI is expected to have transformative effects in multiple knowledge industries. To better understand how knowledge workers expect generative AI may affect their industries in the future, we conducted participatory research workshops for seven different industries, with a total of 54 participants across three US cities. We describe participants' expectations of generative AI's impact, including a dominant narrative that cut across the groups' discourse: participants largely envision generative AI as a tool to perform menial work under human review. Participants do not generally anticipate the disruptive changes currently projected in common media and academic narratives. They do, however, expect generative AI to amplify four social forces shaping their industries: deskilling, dehumanization, disconnection, and disinformation. We describe these forces, then provide additional detail regarding attitudes in specific industries. We conclude with a discussion of implications and challenges for the HCI community.