Stroke, Journal Year: 2024, Volume and Issue: 55(10), P. 2573 - 2578, Published: Sept. 3, 2024
Artificial intelligence (AI) large language models (LLMs) now produce human-like general text and images. LLMs' ability to generate persuasive scientific essays that undergo evaluation under traditional peer review has not been systematically studied. To measure perceptions of quality and the nature of authorship, we conducted a competitive essay contest in 2024 with both human and AI participants. Human authors and 4 distinct LLMs generated essays on controversial topics in stroke care and outcomes research. A panel
PLoS ONE, Journal Year: 2023, Volume and Issue: 18(10), P. e0292216 - e0292216, Published: Oct. 5, 2023
Objective
ChatGPT is the first large language model (LLM) to reach a large, mainstream audience. Its rapid adoption and exploration by the population at large has sparked a wide range of discussions regarding its acceptable and optimal integration in different areas. In a hybrid (virtual and in-person) panel discussion event, we examined various perspectives on its use in education, research, and healthcare.
Materials and methods
We surveyed in-person and online attendees using an audience interaction platform (Slido). We quantitatively analyzed the received responses to questions about its use in these contexts. We compared pairwise categorical groups with Fisher's Exact test. Furthermore, we used qualitative methods to analyze and code the discussions.
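The pairwise comparison described above is a standard 2×2 contingency-table analysis. As a minimal illustrative sketch only (not the authors' code, and with hypothetical placeholder counts rather than the study's data), Fisher's exact test can be run in Python with SciPy:

    # Fisher's exact test on a 2x2 contingency table.
    # Counts below are hypothetical placeholders, NOT the study's data.
    from scipy.stats import fisher_exact

    # Rows: trainees vs. faculty; columns: tried ChatGPT vs. not tried.
    table = [[60, 90],    # trainees (hypothetical)
             [30, 110]]   # faculty  (hypothetical)

    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")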
Results
We received 420 responses from an estimated 844 participants (response rate 49.7%). Only 40% had tried ChatGPT, more trainees than faculty. Those who had tried it were more interested in using it in wider contexts going forwards. Of the three discussed contexts, the greatest uncertainty was shown about its use in education. Pros and cons of this technology were raised during the discussion for each context.
Discussion
There was interest in its uses in healthcare, but still much uncertainty about the acceptability of those uses. Perceptions differed between respondents' roles (trainee vs faculty and staff). More work is needed to explore perceptions of LLMs such as ChatGPT in vital sectors such as healthcare and research. Given the involved risks and unforeseen challenges, taking a thoughtful and measured approach would reduce the likelihood of harm.
Information, Journal Year: 2024, Volume and Issue: 15(2), P. 99 - 99, Published: Feb. 8, 2024
GPT (Generative Pre-trained Transformer) represents advanced language models that have significantly reshaped the academic writing landscape. These sophisticated models offer invaluable support throughout all phases of research work, facilitating idea generation, enhancing drafting processes, and overcoming challenges like writer’s block. Their capabilities extend beyond conventional applications, contributing to critical analysis, data augmentation, and research design, thereby elevating the efficiency and quality of scholarly endeavors. Strategically narrowing its focus, this review explores alternative dimensions of LLM use, specifically data augmentation and the generation of synthetic data for research. Employing a meticulous examination of 412 works, it distills a selection of 77 contributions addressing three questions: (1) GPT on Generating Research Data, (2) GPT on Data Analysis, and (3) GPT on Research Design. The systematic literature review adeptly highlights the central focus on generating research data, encapsulating 48 pertinent contributions, and extends to GPT's proactive role in data analysis and in shaping research design. Pioneering a comprehensive classification framework for “GPT’s use on Research Data”, the study classifies existing contributions into six categories and 14 sub-categories, providing profound insights into its multifaceted applications on research data. The review also meticulously compares 54 pieces of literature, evaluating their domains, methodologies, and advantages and disadvantages, providing scholars with crucial insights for the seamless integration of these tools across diverse domains in their research pursuits.
AI and Ethics, Journal Year: 2024, Volume and Issue: unknown, Published: May 27, 2024
Using artificial intelligence (AI) in research offers many important benefits for science and society but also creates novel and complex ethical issues. While these issues do not necessitate changing established norms of science, they require the scientific community to develop new guidance for the appropriate use of AI. In this article, we briefly introduce AI and explain how it can be used in research, examine some of the ethical issues raised when using it, and offer nine recommendations for its responsible use, including: (1) Researchers are responsible for identifying, describing, reducing, or controlling AI-related biases and random errors; (2) Researchers should disclose and describe their use of AI, including its limitations, in language that can be understood by non-experts; (3) Researchers should engage with impacted communities, populations, and other stakeholders concerning the use of AI to obtain their advice and assistance and to address their interests and concerns, such as those related to bias; (4) Researchers who use synthetic data should (a) indicate which parts of the data are synthetic; (b) clearly label the synthetic data; (c) describe how the data were generated; and (d) explain why the data were used; (5) AI systems should not be named as authors, inventors, or copyright holders, but their contributions should be disclosed and described; (6) Education and mentoring in responsible conduct should include discussion
JCPP Advances, Journal Year: 2024, Volume and Issue: 4(2), Published: April 23, 2024
Systematic reviews are a cornerstone for synthesizing the available evidence on a given topic. They simultaneously allow gaps in the literature to be identified and provide direction for future research. However, due to the ever-increasing volume and complexity of the literature, traditional methods of conducting systematic reviews are becoming less efficient and more time-consuming. Numerous artificial intelligence (AI) tools are being released with the potential to optimize efficiency in academic writing and assist in various stages of the review process, including developing and refining search strategies, screening titles and abstracts against inclusion or exclusion criteria, extracting essential data from studies, and summarizing findings.
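To make the screening stage concrete, the sketch below shows one plausible shape of an LLM-assisted title/abstract screen that also logs every decision for later reporting; it is an assumption-laden illustration rather than a tool from the article, and llm_screen is a placeholder for whichever model or service is actually used:

    # Illustrative sketch: LLM-assisted title/abstract screening with an audit log,
    # so the AI-assisted step can be reported and reproduced in the methods.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class Record:
        record_id: str
        title: str
        abstract: str

    def llm_screen(record: Record, criteria: str) -> dict:
        """Placeholder for the call to whichever LLM or screening tool is used.
        A real implementation would send the title, abstract, and criteria to
        the model and parse its include/exclude judgement."""
        return {"include": "language model" in record.abstract.lower(),
                "reason": "keyword stub"}

    criteria = "Include studies evaluating LLM assistance in evidence synthesis."
    records = [Record("r1", "Example title", "A large language model pipeline for ...")]

    audit_log = []
    for rec in records:
        decision = llm_screen(rec, criteria)
        audit_log.append({**asdict(rec), **decision, "criteria": criteria})

    # The audit log documents each AI-assisted decision for transparent reporting.
    print(json.dumps(audit_log, indent=2))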
Therefore, in this article we provide an overview of the AI tools currently available and how they can be incorporated into the systematic review process to improve the quality of research synthesis. We emphasize that authors must report all AI tools that have been used at each stage to ensure replicability as part of reporting their methods.
Applied Sciences, Journal Year: 2025, Volume and Issue: 15(3), P. 1666 - 1666, Published: Feb. 6, 2025
Maritime operations play a critical role in global trade but face persistent safety challenges due to human error, environmental factors, and operational complexities. This review explores the transformative potential of Large Language Models (LLMs) in enhancing maritime safety through improved communication, decision-making, and compliance. Specific applications include multilingual communication for international crews, automated reporting, interactive training, and real-time risk assessment. While LLMs offer innovative solutions, challenges such as data privacy, integration, and ethical considerations must be addressed. The review concludes with actionable recommendations and insights for leveraging LLMs to build safer and more resilient maritime systems.
Journal of Medical Internet Research, Journal Year: 2023, Volume and Issue: 25, P. e51584 - e51584, Published: Aug. 31, 2023
The ethics of generative artificial intelligence (AI) use in scientific manuscript content creation has become a serious matter of concern in the publishing community. Generative AI is computationally capable of elaborating research questions; refining programming code; generating text in scientific language; and generating images, graphics, or figures. However, this technology should be used with caution. In this editorial, we outline the current state of editorial policies on chatbot authorship, peer review, and the processing of scholarly manuscripts. Additionally, we provide JMIR Publications' policies on these issues. We further detail our approach to applications of generative AI in the review process for manuscripts submitted to a JMIR Publications journal.
Information, Journal Year: 2024, Volume and Issue: 15(6), P. 325 - 325, Published: June 2, 2024
In the digital age, the intersection of artificial intelligence (AI) and higher education (HE) poses novel ethical considerations, necessitating a comprehensive exploration of this multifaceted relationship. This study aims to quantify and characterize current research trends and critically assess the discourse on AI applications within HE. Employing a mixed-methods design, we integrated quantitative data from the Web of Science, Scopus, and Lens databases with qualitative insights from selected studies to perform scientometric and content analyses, yielding a nuanced landscape of AI utilization in HE. Our results identified vital research areas through citation bursts, keyword co-occurrence, and thematic clusters. We provided a conceptual model for AI integration in HE, encapsulating dichotomous perspectives on AI’s role in education. Three clusters were identified: frameworks and policy development, academic integrity and content creation, and student interaction with AI. The study concludes that, while AI offers substantial benefits for educational advancement, it also brings challenges that necessitate vigilant governance to uphold academic standards. The implications extend to policymakers, educators, and developers, highlighting the need for ethical guidelines, AI literacy, and human-centered AI tools.
The use of Large Language Models (LLMs) for writing has sparked controversy both among readers and writers. On one hand, writers are concerned that LLMs will deprive them of agency and ownership, and readers are concerned about spending their time on text generated by soulless machines. On the other hand, AI-assistance can improve writing as long as writers conform to publisher policies and readers can be assured that a text has been verified by a human. We argue that a system that captures the provenance of the writer's interaction with an LLM can help writers retain agency and communicate their AI use to publishers transparently. Thus we propose HaLLMark, a tool for visualizing the writer's interaction with the LLM. We evaluated HaLLMark with 13 creative writers and found that it helped them retain a sense of control and ownership of the text.
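As a rough, hypothetical illustration (the abstract does not describe HaLLMark's actual data model, so every field name below is an assumption), provenance capture of this kind amounts to recording each writer-LLM interaction alongside the text it affected:

    # Hypothetical provenance record for AI-assisted writing;
    # field names are illustrative, not HaLLMark's schema.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class ProvenanceEvent:
        timestamp: str   # when the interaction happened
        prompt: str      # what the writer asked the LLM
        response: str    # what the LLM returned
        accepted: bool   # whether the writer kept the suggestion
        span: tuple      # (start, end) character range of the affected text

    event = ProvenanceEvent(
        timestamp=datetime.now(timezone.utc).isoformat(),
        prompt="Suggest a tighter opening sentence.",
        response="Readers deserve to know when a machine helped write this.",
        accepted=True,
        span=(0, 57),
    )
    print(event)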
Advances in Generative Artificial Intelligence (AI) are resulting in AI-generated media output that is (nearly) indistinguishable from human-created content. This can drastically impact users and the media sector, especially given the global risks of misinformation. While the currently discussed European AI Act aims at addressing these risks through Article 52's transparency obligations, its interpretation and implications remain unclear. In this early work, we adopt a participatory approach to derive key questions based on these disclosure obligations. We ran two workshops with researchers, designers, and engineers across disciplines (N=16), where participants deconstructed the relevant clauses using the 5W1H framework. We contribute a set of 149 questions clustered into five themes and 18 sub-themes. We believe these questions will not only help inform future legal developments and interpretations of Article 52, but also provide a starting point for Human-Computer Interaction research to (re-)examine such disclosure obligations through a human-centered lens.