Optimizing Energy-Efficient Task Offloading in Edge Computing: A Hybrid AI-Based Approach
Anwar Ahamed Shaikh, Ignacio Carol, Meenakshi et al.
International Journal of Computational and Experimental Science and Engineering, Journal Year: 2025, Volume and Issue: 11(2)
Published: March 23, 2025
Edge computing has emerged as a pivotal technology for managing computational workloads in latency-sensitive applications by offloading tasks from resource-constrained Internet of Things (IoT) devices to nearby edge servers. However, optimizing task offloading while ensuring energy efficiency remains a significant challenge. This paper proposes a Hybrid AI-Based Task Offloading (HATO) model, integrating Reinforcement Learning (RL) with Deep Neural Networks (DNNs) to dynamically allocate resources while minimizing energy consumption. The HATO framework formulates a multi-objective optimization problem, considering factors such as device workload, network latency, server availability, and energy constraints.
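The abstract does not state the optimization problem explicitly; as a hedged illustration, a weighted multi-objective offloading formulation of this kind is often written as below, where the decision variables, cost terms and weights are assumptions rather than the paper's own notation:

\[
\min_{\mathbf{x}} \; \sum_{i=1}^{N} \bigl( \alpha\, E_i(x_i) + \beta\, T_i(x_i) \bigr)
\quad \text{s.t.} \quad x_i \in \{0,1\}, \qquad \sum_{i:\, x_i = 1} w_i \le C_{\mathrm{server}},
\]

with \(x_i = 1\) offloading task \(i\) to the edge server, \(E_i\) and \(T_i\) the energy and completion-time costs of task \(i\) under that decision, \(w_i\) its workload, \(C_{\mathrm{server}}\) the server capacity, and \(\alpha, \beta \ge 0\) trading off energy against latency.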
Experimental evaluations demonstrate that the proposed model achieves a 27.3% reduction in energy consumption, a 19.6% improvement in task completion time, and a 31.2% enhancement in overall resource utilization compared to conventional heuristic-based methods. The reinforcement learning module adapts offloading strategies in real time, maintaining optimal load balancing and low latency. The approach outperforms baseline models across diverse scenarios, making it a scalable and efficient solution for next-generation IoT applications.
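The paper's RL module is not specified beyond combining RL with DNNs; the following is a minimal tabular Q-learning sketch of the offload-or-not decision, in which the state discretization, reward shape and all numeric constants are illustrative assumptions, not the authors' design.

import random
import numpy as np

# Toy setting: each step presents a task with a discretized
# (workload, latency, server-availability) state; the agent chooses to run it
# locally (0) or offload it to the edge server (1). All constants are assumed.
N_WORKLOAD, N_LATENCY, N_AVAIL = 3, 3, 2
ACTIONS = (0, 1)  # 0 = execute locally, 1 = offload to edge

def random_state():
    return (random.randrange(N_WORKLOAD),
            random.randrange(N_LATENCY),
            random.randrange(N_AVAIL))

def reward(state, action):
    workload, latency, avail = state
    if action == 0:                      # local execution: energy grows with workload
        energy, time = 1.0 + workload, 1.0 + workload
    else:                                # offload: pay network latency, save energy
        energy = 0.3
        time = 0.5 + latency + (2.0 if avail == 0 else 0.0)
    return -(0.6 * energy + 0.4 * time)  # weighted energy/delay cost (assumed weights)

# Tabular Q-learning over the discretized state space.
Q = np.zeros((N_WORKLOAD, N_LATENCY, N_AVAIL, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(20000):
    s = random_state()
    a = random.choice(ACTIONS) if random.random() < eps else int(np.argmax(Q[s]))
    r = reward(s, a)
    s_next = random_state()              # tasks arrive independently in this toy model
    Q[s][a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s][a])

# Inspect the learned policy: offload when the server is free and latency is low.
for s in [(2, 0, 1), (2, 2, 0), (0, 0, 1)]:
    print(s, "->", "offload" if np.argmax(Q[s]) == 1 else "local")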
Language: English
Enhancing Cross Language for English-Telugu pairs through the Modified Transformer Model based Neural Machine Translation
Vaishnavi Sadula, D. Ramesh
International Journal of Computational and Experimental Science and Engineering, Journal Year: 2025, Volume and Issue: 11(2)
Published: April 16, 2025
Cross-Language Translation (CLT) refers to conventional automated systems that generate translations between natural languages without human involvement. As most of the resources are available mostly in English, multi-lingual translation is badly required for the essence of education to penetrate the deep roots of society. Neural machine translation (NMT) is one such intelligent technique, which is usually deployed as an efficient translation process from a source language to another language. But these NMT techniques substantially require a large corpus of data to achieve an improved translation process. This bottleneck makes them hard to apply to mid-resource languages compared to their dominant English counterparts. Although some languages benefit from established systems, creating them for low-resource languages remains a challenge due to their intricate morphology and lack of non-parallel data. To overcome this aforementioned problem, this research article proposes a modified transformer architecture to improve the efficiency of NMT. The proposed framework consists of an Encoder-Decoder, an enhanced version with multiple fast feed forward networks and multi-headed soft attention networks.
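The abstract names the components but not their wiring; below is a minimal PyTorch sketch of one encoder layer, under the assumption that "multiple fast feed forward networks" means stacking more than one position-wise FFN sub-layer after the multi-headed (softmax) attention block. The layer sizes and the two-FFN default are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class ModifiedEncoderLayer(nn.Module):
    """One encoder layer: multi-head softmax attention followed by
    several position-wise feed-forward sub-layers (count is an assumption)."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, n_ffn=2, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads,
                                          dropout=dropout, batch_first=True)
        self.attn_norm = nn.LayerNorm(d_model)
        self.ffns = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Dropout(dropout), nn.Linear(d_ff, d_model))
            for _ in range(n_ffn))
        self.ffn_norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(n_ffn))

    def forward(self, x, pad_mask=None):
        # Self-attention sub-layer with residual connection and layer norm.
        attn_out, _ = self.attn(x, x, x, key_padding_mask=pad_mask)
        x = self.attn_norm(x + attn_out)
        # Stacked feed-forward sub-layers, each with its own residual.
        for ffn, norm in zip(self.ffns, self.ffn_norms):
            x = norm(x + ffn(x))
        return x

# Smoke test on a batch of 4 source sentences of length 20.
layer = ModifiedEncoderLayer()
src = torch.randn(4, 20, 512)
print(layer(src).shape)  # torch.Size([4, 20, 512])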
The designed model extracts word patterns from parallel data during training, forming an English–Telugu vocabulary via Kaggle, and its effectiveness is evaluated using measures like Bilingual Evaluation Understudy (BLEU), character-level F-score (chrF) and Word Error Rate (WER). To prove the excellence of the model, an extensive comparison with existing architectures on performance metrics is analysed. Outcomes depict that the proposed model has shown improved performance by achieving a BLEU score of 0.89 and a low WER when compared to existing models. These experimental results promise a strong hold for further experimentation.
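The abstract does not say which implementations of BLEU, chrF and WER were used; a minimal sketch of computing all three with the sacrebleu and jiwer libraries follows, where both the library choice and the example sentences are assumptions.

# pip install sacrebleu jiwer
import sacrebleu
from jiwer import wer

# Hypothetical system outputs and references for an English-Telugu test set;
# a real evaluation would use the held-out split of the parallel corpus.
hypotheses = ["idi oka paritsha vakyam", "nenu intiki veltunnanu"]
references = [["idi oka pariksha vakyam", "nenu intiki veltunnanu"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)   # corpus-level BLEU
chrf = sacrebleu.corpus_chrf(hypotheses, references)   # character-level F-score
word_err = wer(references[0], hypotheses)              # word error rate

print(f"BLEU: {bleu.score:.2f}")   # note: sacrebleu reports BLEU on a 0-100 scale
print(f"chrF: {chrf.score:.2f}")
print(f"WER:  {word_err:.3f}")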
Language: English