Media Education, Journal Year: 2024, Volume and Issue: 15(2), P. 7 - 20, Published: Dec. 30, 2024
The study examines the transformative potential and impact of Generative AI (GAI) on society, media, and media education, focusing on the challenges and opportunities these advancements bring. GAI technologies, particularly large language models (LLMs) like GPT-4, are revolutionizing content creation, platforms, and interaction within the media landscape. This radical shift is generating both innovative educational methodologies and challenges in maintaining academic integrity and quality of learning. The study aims to provide a comprehensive understanding of how GAI impacts education by reshaping traditional practices in media-related higher education. The research delves into three main questions: the nature of GAI as an innovation, its effect on knowledge acquisition, and its implications for media education. It introduces critical concepts such as uncertainty, which refers to the unpredictable outcomes of GAI, making forecasting and planning challenging. The paper utilizes McLuhan’s tetrad to analyze GAI’s role by questioning what it enhances, what it renders obsolete, what it retrieves, and what it reverses when pushed to extremes. This theoretical approach helps capture GAI’s multifaceted influence on media education. Overall, the study underscores the dual-edged nature of GAI, which presents significant enhancements to learning and content creation while simultaneously posing risks related to misinformation, academic integrity, and the dilution of human-centered practices. It calls for a balanced approach to integrating GAI, advocating preparedness against its drawbacks while leveraging its capabilities to revolutionize educational paradigms.
Electronics, Journal Year: 2024, Volume and Issue: 13(12), P. 2255 - 2255, Published: June 8, 2024
The rapid evolution of large language models, in particular OpenAI’s GPT-3.5-turbo and GPT-4, indicates a growing interest in advanced computational methodologies. This paper proposes a novel approach to synthetic data generation and knowledge distillation through prompt engineering. The potential of large language models (LLMs) is used to address the problem of unbalanced training datasets for other machine learning models. This is not only a common issue but also a crucial determinant of final model quality and performance. Three prompting strategies have been considered: basic, composite, and similarity prompts. Although the initial results do not match the performance of comprehensive datasets, the similarity prompts method exhibits considerable promise, thus outperforming the other methods. The investigation of our rebalancing methods opens pathways for future research on leveraging continuously developed LLMs to generate enhanced, high-quality synthetic data. This could have an impact on many large-scale engineering applications.
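
To make the rebalancing idea concrete, the sketch below shows one plausible reading of a "similarity" prompting strategy: show a few real minority-class samples to an LLM and ask for similar new ones until the class counts match. The prompt wording, helper names, and the use of the OpenAI Python SDK are illustrative assumptions, not the paper's exact setup.

```python
# A minimal sketch, assuming a "similarity" strategy: show real minority-class
# samples and ask the LLM for new, similar samples of the same class.
import random
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x)

client = OpenAI()

def similarity_prompt(label: str, seed_examples: list[str], n: int) -> str:
    # Show up to three real samples of the minority class as the "similarity" anchor.
    shown = "\n".join(f"- {s}" for s in random.sample(seed_examples, k=min(3, len(seed_examples))))
    return (f"Here are examples of the class '{label}':\n{shown}\n"
            f"Write {n} new, distinct examples of the same class, one per line.")

def generate_synthetic(label: str, seed_examples: list[str], n: int) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": similarity_prompt(label, seed_examples, n)}],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [line.lstrip("- ").strip() for line in lines if line.strip()]

# Usage: top up the minority class until it matches the majority class size.
# synthetic = generate_synthetic("rare_label", minority_texts, n=len(majority_texts) - len(minority_texts))
```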
International Journal of Ethics and Systems, Journal Year: 2024, Volume and Issue: unknown, Published: Sept. 3, 2024
Purpose
The purpose of this study is to comprehensively examine the ethical implications surrounding generative artificial intelligence (AI).
Design/methodology/approach
Leveraging a novel methodological approach, the study curates a corpus of 364 documents from Scopus spanning 2022 to 2024. Using term frequency-inverse document frequency (TF-IDF) and structural topic modeling (STM), it quantitatively dissects the thematic essence of the discourse on generative AI across diverse domains, including education, healthcare, businesses, and scientific research.
Findings
The results reveal a range of ethical concerns across the various sectors impacted by generative AI. In academia, the primary focus is on issues of authenticity and intellectual property, highlighting the challenges AI-generated content poses for maintaining academic integrity. In the healthcare sector, the emphasis shifts to medical decision-making and patient privacy, reflecting concerns about the reliability and security of AI-generated advice. The study also uncovers significant discussions in educational and financial settings, demonstrating the broad impact of generative AI on societal and professional practices.
Research limitations/implications
This study provides a foundation for crafting targeted guidelines and regulations for generative AI, informed by a systematic analysis using STM. It highlights the need for dynamic governance and continual monitoring of AI’s evolving landscape, offering a model for future research and policymaking in these fields.
Originality/value
The study introduces a unique combination of TF-IDF and STM to analyze a large corpus, offering new insights into the ethics of generative AI across multiple domains.
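
As a rough illustration of the quantitative side of this design, the sketch below extracts the top TF-IDF terms from a tiny abstract corpus with scikit-learn; it is a simplified stand-in for the paper's pipeline (STM itself is typically run with R's stm package), and the example documents are placeholders rather than the 364 Scopus abstracts.

```python
# A minimal sketch, assuming placeholder documents; the study pairs TF-IDF
# with structural topic modeling (STM) over 364 Scopus abstracts (2022-2024).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "AI-generated content raises authenticity and intellectual property concerns in academia.",
    "Generative AI in healthcare raises questions about decision-making and patient privacy.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)
terms = np.array(vectorizer.get_feature_names_out())

# Print the top-weighted terms per document: a rough view of each document's thematic essence.
for i, row in enumerate(tfidf.toarray()):
    top = terms[np.argsort(row)[::-1][:5]]
    print(f"doc {i}: {', '.join(top)}")
```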
Applied Sciences, Journal Year: 2025, Volume and Issue: 15(2), P. 631 - 631, Published: Jan. 10, 2025
Qualitative data analysis (QDA) tools are essential for extracting insights from complex datasets. This study investigates researchers’ perceptions of the usability, user experience (UX), mental workload, trust, task complexity, and emotional impact of three tools: Taguette 1.4.1 (a traditional QDA tool), ChatGPT (GPT-4, December 2023 version), and Gemini (formerly Google Bard, December 2023 version). Participants (N = 85), Master’s students at a Faculty of Electrical Engineering and Computer Science with prior experience in UX evaluations and familiarity with AI-based chatbots, performed sentiment annotation tasks using these tools, enabling a comparative evaluation. The results show that the AI tools were associated with lower cognitive effort and more positive emotional responses compared to Taguette, which caused higher frustration, especially during cognitively demanding tasks. Among the tools, ChatGPT achieved the highest usability score (SUS = 79.03) and was rated positively for engagement. Trust levels varied, with ChatGPT preferred for accuracy and confidence. Despite these differences, all tools performed consistently in identifying qualitative patterns. These findings suggest that AI-driven tools can enhance researchers’ experiences while emphasizing the need to align tool selection with specific research needs and user preferences.
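
For readers unfamiliar with the usability metric behind the reported 79.03, the sketch below computes a standard System Usability Scale (SUS) score from one respondent's ten 1-5 Likert answers; the example responses are invented for illustration.

```python
# A minimal sketch of the standard SUS scoring rule (0-100 scale).
def sus_score(responses: list[int]) -> float:
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    # Odd-numbered items are positively worded (contribute r - 1),
    # even-numbered items are negatively worded (contribute 5 - r).
    total = sum((r - 1) if i % 2 == 0 else (5 - r) for i, r in enumerate(responses))
    return total * 2.5  # scale the 0-40 raw sum to 0-100

# Example: a fairly positive (invented) respondent.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2]))  # 82.5
```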
Electronics, Journal Year: 2024, Volume and Issue: 13(2), P. 261 - 261, Published: Jan. 5, 2024
Concepts empower cognitive intelligence. Extracting flat, nested, and discontinuous named entities and concept mentions from natural language texts is significant for downstream tasks such as knowledge graphs. Among the algorithms that uniformly detect these types of concepts, Li et al. proposed a novel architecture, named W2NER, that models unified mention recognition as classification of word–word relations and achieved state-of-the-art (SOTA) results in 2022. However, there is still room for improvement. This paper presents three improvements based on W2NER. We enhanced the grid-tagging network with demonstration learning and tag attention feature extraction, and name our modified model DTaE. Firstly, to address the issue of insufficient semantic information in short texts and the lack of annotated data, and inspired by GPT-3, demonstrations are searched during the training phase according to a certain strategy to enhance the input features and improve the model’s ability for few-shot learning. Secondly, to tackle the problem of W2NER’s subpar tagging accuracy, a multi-head attention mechanism is employed to capture tag scores at different positions in the grid tagging. Then, the tagging scores are embedded into the model. Finally, to retain information about sequence position, rotary position embedding is introduced to ensure robustness. We selected an authoritative Chinese dictionary and adopted a five-person annotation method to annotate multiple concepts and their definitions. To validate the effectiveness of our model, experiments were conducted on the public CADEC dataset and on our annotated dataset: on CADEC, with a slight decrease in the recall rate, precision improved by 2.78% and the comprehensive metric F1 increased by 0.89%; on our dataset, precision improved by 2.97%, the recall rate by 2.35%, and F1 by 2.66%.
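
Of the three additions, rotary position embedding is the most self-contained; the sketch below shows a minimal RoPE variant (half-split rotation) applied to token features before relation scoring. Dimensions and the exact placement in DTaE are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of rotary position embedding (RoPE), half-split variant;
# dimensions and placement are illustrative, not the authors' exact design.
import torch

def rotary_embed(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply RoPE to x of shape (seq_len, dim); dim must be even."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)              # (half,)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * freqs[None, :]  # (seq_len, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    # Rotate each (x1, x2) feature pair by its position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# Usage: rotate token features before computing word-word relation scores.
# feats = rotary_embed(torch.randn(12, 64))
```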
Risk Analysis, Journal Year: 2024, Volume and Issue: unknown, Published: Dec. 15, 2024
Abstract
Recent developments in risk and crisis communication (RCC) research combine social science theory and data science tools to construct effective messages efficiently. However, current systematic literature reviews (SLRs) on RCC primarily focus on computationally assessing message efficacy as opposed to efficiency. We conduct an SLR to highlight any computational methods that improve the efficiency of message construction. We found that most research focuses on using theoretical frameworks to analyze or classify message elements for efficacy. For improving efficiency, manual methods are only used for classification. Research specifying computational methods for efficient message construction is sparse. We recommend that future research apply computational methods toward improving the efficiency of message construction. By constructing messaging more efficiently, communicators would more quickly warn and better inform affected communities impacted by hazards. Such an approach has the potential to save as many lives as possible.
La Palabra, Journal Year: 2024, Volume and Issue: 48, P. 1 - 18, Published: Dec. 2, 2024
With the frenetic advance of artificial intelligence (AI), the many functions it can serve in different sectors have become evident, including the production of children's literature. This study aims to analyze how AI promotes values and gender representations in narratives created for children. Using an exploratory, qualitative methodology, narratives generated by two AI-based applications available on the Product Hunter platform are contrasted with those produced by four Large Language Models from the same prompt. The results show that AI is emerging as a powerful tool for promoting non-sexist and inclusive values by generating children's stories that challenge stereotypes and promote diverse gender representations. Nevertheless, the study concludes that collaboration among developers, specialists in children's literature, and scholars is necessary in order to form a generation that is more aware of and tolerant toward diversity.
Mathematics, Journal Year: 2024, Volume and Issue: 12(4), P. 521 - 521, Published: Feb. 7, 2024
Fine-tuning a pre-trained sequence-to-sequence-based language model has significantly advanced the field of abstractive summarization. However, early models for abstractive summarization were limited by the gap between training and inference, and they did not fully utilize the potential of the model. Recent studies have introduced a two-stage framework that allows a second-stage model to re-rank the candidate summaries generated by the first-stage model to resolve these limitations. In this study, we point out that the supervision method performed in existing re-ranking models cannot learn detailed and complex information from the data. In addition, we present the problem of positional bias in encoder–decoder-based models. To address these two limitations, this study proposes a hierarchical supervision framework that jointly performs summary-level and sentence-level supervision. For sentence-level supervision, we designed two loss functions: intra- and inter-intra-sentence ranking losses. Compared with existing re-ranking methods, the proposed framework exhibited a performance improvement on both the CNN/DM and XSum datasets. The proposed framework also outperformed the baseline under a few-shot setting.
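
To give a flavor of ranking-style supervision in two-stage summarization, the sketch below implements a generic pairwise margin ranking loss over candidate-summary scores; the paper's intra- and inter-intra-sentence losses are finer-grained, so this is only an assumed, simplified analogue.

```python
# A minimal sketch of a candidate-ranking margin loss for a second-stage re-ranker.
import torch

def ranking_loss(scores: torch.Tensor, margin: float = 0.01) -> torch.Tensor:
    """scores: re-ranker scores for candidate summaries, already sorted best-to-worst
    by a reference metric (e.g., ROUGE). Penalize pairs ranked in the wrong order."""
    loss = scores.new_zeros(())
    n = scores.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            # The better candidate (i) should outscore the worse one (j)
            # by a margin that grows with their rank distance.
            loss = loss + torch.relu(scores[j] - scores[i] + margin * (j - i))
    return loss / (n * (n - 1) / 2)

# Usage: scores = reranker(candidates); loss = ranking_loss(scores)
```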
ACM Transactions on Software Engineering and Methodology, Journal Year: 2024, Volume and Issue: unknown, Published: June 12, 2024
Data flow graphs (DFGs) capture definitions (defs) and uses across program blocks, which is a fundamental representation for program analysis, testing, and maintenance. However, dynamically-typed programming languages like Python present implicit data flow issues that make it challenging to determine def-use information at compile time. Static analysis methods like Soot and WALA are inadequate for handling these issues, and manually enumerating comprehensive heuristic rules is impractical. Large pre-trained language models (LLMs) offer a potential solution, as they have powerful code understanding and pattern matching abilities, allowing them to predict def-use information by analyzing the code context and the relationships between variables, functions, and statements in the code. We propose leveraging LLMs’ in-context learning ability to learn data flow patterns from contextual examples and solve implicit data flow problems. To further enhance the accuracy of LLMs, we design a five-step Chain of Thought (CoT) and break it down into an AI chain, with each step corresponding to a separate functional unit, to generate accurate DFGs. Our approach’s performance is thoroughly assessed, demonstrating the effectiveness of the CoT-based AI chain. Compared with static analysis methods, our method achieves 82% higher def coverage and 58% higher use coverage in DFG generation on implicit data flow. We also demonstrate the indispensability of each step in the AI chain. Overall, our approach offers a promising direction for building software engineering tools by utilizing foundation models, eliminating significant maintenance effort and instead focusing on identifying the problems to solve.
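
The AI-chain idea can be sketched as a sequence of prompt steps whose outputs feed the next step's context. The step wording below is illustrative (the paper's exact five CoT steps are not reproduced here), and the OpenAI Python SDK is assumed only as an example client.

```python
# A minimal sketch of an AI chain for def-use extraction; step wording is
# illustrative, not the paper's exact prompts.
from openai import OpenAI

client = OpenAI()

STEPS = [
    "Summarize what the code does and which variables it manipulates.",
    "List every variable definition (assignment), with its line number.",
    "List every variable use (read), with its line number.",
    "Match each use to the definition(s) that may reach it, including implicit "
    "flows (dynamic attributes, *args/**kwargs, duck typing).",
    "Emit the data flow graph as edges: (def_line, use_line, variable).",
]

def run_chain(code: str) -> str:
    context = f"Code under analysis:\n{code}"
    for step in STEPS:
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": f"{context}\n\nTask: {step}"}],
        )
        # Each step's answer is appended so the next unit builds on it (the AI-chain idea).
        context += f"\n\n{step}\n{resp.choices[0].message.content}"
    return context  # the final section contains the DFG edges
```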
Frontiers in Medicine, Journal Year: 2024, Volume and Issue: 11, Published: Oct. 16, 2024
Background
The large-scale language model GPT-4-1106-preview supports text of up to 128 k characters, which has enhanced its capability for processing vast quantities of text. This model can perform efficient and accurate data mining without the need for retraining, aided by prompt engineering.
Method
The research approach includes prompt engineering and vectorization processing. In this study, prompt engineering is applied to assist ChatGPT in data mining. Subsequently, the mined results are vectorized and incorporated into a local knowledge base. After cleansing 306 medical papers, extraction was performed using ChatGPT. Following a validation and filtering process, 241 case entries were obtained, leading to the construction of the local knowledge base. Additionally, drawing upon the Langchain framework and utilizing the knowledge base in conjunction with ChatGPT, we successfully developed a fast and reliable chatbot. The chatbot is capable of providing recommended diagnostic and treatment information for various diseases.
Results
The performance of the chatbot designed from the local knowledge base exceeded that of the original ChatGPT by 7.90% on a set of evaluation questions.
Conclusion
Assisted by prompt engineering, ChatGPT demonstrates effective data mining capabilities for large volumes of medical texts. In the future, we plan to incorporate a richer array of data and expand the scale of the knowledge base to further enhance ChatGPT’s capabilities in the medical field.
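
As a minimal illustration of the "vectorize, retrieve, then answer" design, the sketch below embeds mined case entries, retrieves the most similar ones for a question, and grounds the chat model's answer in them. It assumes the OpenAI Python SDK and invented placeholder case texts; the paper's actual pipeline is built on the Langchain framework.

```python
# A minimal retrieval-augmented sketch; case texts are invented placeholders,
# and the OpenAI Python SDK is assumed only as an example client.
import numpy as np
from openai import OpenAI

client = OpenAI()
cases = [
    "Case: 45-year-old with chest pain; recommended workup and treatment notes.",
    "Case: pediatric fever of unknown origin; diagnostic pathway and therapy.",
]  # stand-ins for the 241 mined case entries

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

case_vecs = embed(cases)  # the vectorized "local knowledge base"

def answer(question: str, k: int = 2) -> str:
    q = embed([question])[0]
    sims = case_vecs @ q / (np.linalg.norm(case_vecs, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(cases[i] for i in np.argsort(sims)[::-1][:k])
    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[{"role": "user",
                   "content": f"Answer using only these case entries:\n{context}\n\nQuestion: {question}"}],
    )
    return resp.choices[0].message.content
```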