IEEE Internet of Things Journal,
Journal year: 2020, Issue 7(8), pp. 7457-7469
Published: April 1, 2020
Along with the rapid developments in communication technologies and the surging use of mobile devices, a brand-new computation paradigm, Edge Computing, is surging in popularity. Meanwhile, Artificial Intelligence (AI) applications are thriving with the breakthroughs in deep learning and the many improvements in hardware architectures. Billions of data bytes, generated at the network edge, put massive demands on data processing and structural optimization. Thus, there exists a strong demand to integrate Edge Computing and AI, which gives birth to Edge Intelligence. In this paper, we divide Edge Intelligence into AI for edge (Intelligence-enabled Edge Computing) and AI on edge (Artificial Intelligence on Edge). The former focuses on providing more optimal solutions to key problems in Edge Computing with the help of popular and effective AI technologies, while the latter studies how to carry out the entire process of building AI models, i.e., model training and inference, on the edge. This paper provides insights into this new interdisciplinary field from a broader perspective. It discusses the core concepts and the research road map, which should provide the necessary background for potential future research initiatives in Edge Intelligence.
IEEE Transactions on Wireless Communications,
Journal year: 2020, Issue 20(3), pp. 1935-1949
Published: Nov. 20, 2020
In this paper, the problem of energy-efficient transmission and computation resource allocation for federated learning (FL) over wireless communication networks is investigated. In the considered model, each user exploits limited local computational resources to train a local FL model with its collected data and, then, sends the trained FL model to a base station (BS), which aggregates the local FL models and broadcasts the result back to all users. Since FL involves an exchange of a learning model between the users and the BS, both computation and communication latencies are determined by the learning accuracy level. Meanwhile, due to the limited energy budget of the wireless users, both computation and transmission energy must be considered during the FL process. This joint learning and communication problem is formulated as an optimization problem whose goal is to minimize the total energy consumption of the system under a latency constraint. To solve this problem, an iterative algorithm is proposed where, at every step, closed-form solutions for time allocation, bandwidth allocation, power control, computation frequency, and learning accuracy are derived. Since the iterative algorithm requires an initial feasible solution, we construct a completion-time minimization problem and propose a bisection-based algorithm to obtain its optimal solution, which serves as a feasible solution to the original problem. Numerical results show that the proposed algorithms can reduce energy consumption by up to 59.5% compared to the conventional FL method.
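The training loop described above (local updates at each user, aggregation at the BS, broadcast back to all users) can be sketched as a minimal federated-averaging round. The linear-regression model, synthetic data, learning rate, and epoch counts below are illustrative placeholders, not the paper's actual formulation or resource-allocation scheme.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One user's local training: a few gradient steps on linear regression."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of the MSE loss
        w -= lr * grad
    return w

def fl_round(w_global, users):
    """BS-side step: collect locally trained models and average them."""
    local_models = [local_update(w_global, X, y) for X, y in users]
    return np.mean(local_models, axis=0)  # broadcast this back to all users

# Synthetic setup: 4 users, each holding 50 samples of the same linear task.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
users = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    users.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(30):          # 30 communication rounds
    w = fl_round(w, users)
```

In the paper, each round additionally incurs computation energy (local gradient steps) and transmission energy (uplink to the BS), which is what the proposed resource allocation trades off; this sketch only shows the learning skeleton.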
IEEE Transactions on Wireless Communications,
Journal year: 2019, Issue 19(1), pp. 491-506
Published: Oct. 15, 2019
To leverage rich data distributed at the network edge, a new machine-learning paradigm, called edge learning, has emerged, where learning algorithms are deployed at the network edge for providing intelligent services to mobile users. While computing speeds are advancing rapidly, communication latency is becoming the bottleneck of fast edge learning. To address this issue, this work is focused on designing a low-latency multi-access scheme for edge learning. To this end, we consider a popular privacy-preserving framework, federated edge learning (FEEL), where a global AI model at an edge server is updated by aggregating (averaging) local models trained at edge devices. It is proposed that the updates simultaneously transmitted by devices over broadband channels should be analog aggregated "over-the-air" by exploiting the waveform-superposition property of a multi-access channel. Such broadband analog aggregation (BAA) results in dramatic communication-latency reduction compared with conventional orthogonal access (i.e., OFDMA). In this work, the effects of BAA on learning performance are quantified, targeting a single-cell random network. First, we derive two tradeoffs between communication-and-learning metrics, which are useful for network planning and optimization. The power control ("truncated channel inversion") required for BAA gives rise to a tradeoff between the update reliability [as measured by the receive signal-to-noise ratio (SNR)] and the expected update-truncation ratio. Scheduling only cell-interior devices to constrain path loss gives rise to the other tradeoff, between the receive SNR and the fraction of data exploited in learning. Next, the latency-reduction ratio of BAA with respect to traditional OFDMA is proved to scale almost linearly with the device population. Experiments based on a neural network and a real dataset are conducted, corroborating the theoretical results.
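The core over-the-air idea — simultaneous analog transmissions whose waveforms add up in the multi-access channel, so the sum needed for model averaging is computed by the channel itself — can be illustrated with a toy noiseless simulation. The truncation threshold, device count, and real-valued channel model below are simplified stand-ins for the paper's scheme, not its actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 20, 8                # devices, model-update dimension
threshold = 0.3             # truncation threshold for channel inversion

updates = rng.normal(size=(K, d))      # local model updates, one row per device
gains = np.abs(rng.normal(size=K))     # channel magnitudes |h_k|

# Truncated channel inversion: devices in deep fade (|h_k| below the
# threshold) stay silent; the rest pre-scale by 1/|h_k| so that their
# signals arrive at the edge server with aligned amplitudes.
active = gains >= threshold
tx = updates[active] / gains[active, None]

# The multi-access channel sums the simultaneously transmitted waveforms
# "over the air" -- this single sum replaces K orthogonal transmissions.
received = (gains[active, None] * tx).sum(axis=0)

# The edge server only needs to scale by the number of active devices
# to obtain the average of the surviving local updates.
aggregated = received / active.sum()
```

The reliability/truncation tradeoff in the paper is visible even here: raising `threshold` improves the effective SNR of each contribution in a noisy channel but silences more devices, discarding a larger fraction of the data.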
IEEE Transactions on Signal Processing,
Journal year: 2020, Issue 68, pp. 2155-2169
Published: Jan. 1, 2020
We study federated machine learning (ML) at the wireless edge, where power- and bandwidth-limited wireless devices with local datasets carry out distributed stochastic gradient descent (DSGD) with the help of a parameter server (PS). Standard approaches assume separate computation and communication, where local gradient estimates are compressed and transmitted to the PS over orthogonal links. Following this digital approach, we introduce D-DSGD, in which the devices employ gradient quantization and error accumulation, and transmit their gradient estimates to the PS over a multiple access channel (MAC). We then introduce a novel analog scheme, called A-DSGD, which exploits the additive nature of the wireless MAC for over-the-air gradient computation, and we provide a convergence analysis for this approach. In A-DSGD, the devices first sparsify their gradient estimates and then project them to a lower-dimensional space imposed by the available channel bandwidth. These projections are sent directly over the MAC without employing any digital code. Numerical results show that A-DSGD converges faster than D-DSGD thanks to its more efficient use of the limited bandwidth and the natural alignment of the gradient estimates over the channel. The improvement is particularly compelling in low-power and low-bandwidth regimes. We also illustrate, for a classification problem, that A-DSGD is robust to bias in the data distribution across devices, while D-DSGD significantly outperforms other digital schemes in the literature. We observe that both schemes perform better with an increasing number of devices, showing their ability to harness the computation power of edge devices.
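The two device-side preprocessing steps of the analog scheme — sparsifying the gradient estimate, then projecting it into the lower-dimensional space allowed by the channel bandwidth — can be sketched as follows. The top-k sparsification rule and the Gaussian projection matrix are illustrative choices made here for concreteness; the paper's exact construction and the server-side sparse recovery step are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
d, s, m = 1000, 50, 200      # gradient dimension, sparsity level, channel uses

def sparsify(g, s):
    """Keep the s largest-magnitude entries of g, zero out the rest."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-s:]
    out[idx] = g[idx]
    return out

# Random projection known to every device and to the parameter server:
# it compresses a length-d vector into the m symbols the channel can carry.
A = rng.normal(size=(m, d)) / np.sqrt(m)

g = rng.normal(size=d)       # one device's local gradient estimate
g_sparse = sparsify(g, s)    # step 1: sparsification
tx = A @ g_sparse            # step 2: projection; m analog symbols for the MAC
```

All devices transmit their `tx` vectors simultaneously; the MAC adds them, and the server recovers the (sparse) aggregate from `m < d` noisy linear measurements, which is why the sparsification step matters.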
IEEE Internet of Things Journal,
Journal year: 2021, Issue 9(1), pp. 1-24
Published: July 6, 2021
Federated learning (FL) is a distributed machine learning strategy that generates a global model by learning from multiple decentralized edge clients. FL enables on-device training, keeping the client's local data private, and further updates the global model based on the local model updates. While FL methods offer several advantages, including scalability and data privacy, they assume that computational resources are available at each edge device/client. However, Internet-of-Things (IoT)-enabled devices, e.g., robots, drone swarms, and low-cost computing devices (e.g., Raspberry Pi), may have limited processing ability, low bandwidth and power, or limited storage capacity. In this survey article, we propose to answer the question: how to train distributed machine learning models for resource-constrained IoT devices? To this end, we first explore the existing studies on FL and their relative assumptions for distributed implementation using IoT devices, and we examine their drawbacks. We then discuss the implementation challenges and issues that arise when applying FL in an IoT environment. We highlight an overview of FL and provide comprehensive problem statements and emerging challenges, particularly during FL training within heterogeneous IoT environments. Finally, we point out future research directions for scientists and researchers who are interested in working at the intersection of FL and resource-constrained IoT environments.
IEEE Journal on Selected Areas in Communications,
Journal year: 2020, Issue 38(11), pp. 2666-2682
Published: July 3, 2020
Computation off-loading in mobile edge computing (MEC) systems constitutes an efficient paradigm for supporting resource-intensive applications on mobile devices. However, the benefit of MEC cannot be fully exploited when the communications link used for off-loading computational tasks is hostile. Fortunately, the propagation-induced impairments may be mitigated by intelligent reflecting surfaces (IRS), which are capable of enhancing both spectral and energy efficiency. Specifically, an IRS comprises a controller and a large number of passive reflecting elements, each of which may impose a phase shift on the incident signal, thus collaboratively improving the propagation environment. In this paper, the beneficial role of IRSs is investigated in MEC systems, where single-antenna devices may opt to off-load a fraction of their computational tasks to an edge computing node via a multi-antenna access point, with the aid of an IRS. Pertinent latency-minimization problems are formulated for both single-device and multi-device scenarios, subject to practical constraints imposed by the edge computing capability and the IRS phase-shift design. To solve these problems, the block coordinate descent (BCD) technique is invoked to decouple the original problem into two subproblems, and then the computing and communications settings are alternately optimized using low-complexity iterative algorithms. It is demonstrated that our IRS-aided MEC system is capable of significantly outperforming a conventional MEC system operating without IRSs. Quantitatively, about 20% computational latency reduction is achieved over a single cell of 300 m radius with 5 active devices, relying on a 5-antenna access point.
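The BCD decoupling used above — alternately solving each block's subproblem to optimality while holding the other block fixed — can be illustrated on a small two-block problem. The quadratic objective and its closed-form block minimizers are just placeholders standing in for the paper's latency objective over the computing and phase-shift settings.

```python
def bcd(argmin_x, argmin_y, x, y, iters=50):
    """Block coordinate descent: alternate exact minimization over two blocks."""
    for _ in range(iters):
        x = argmin_x(y)   # e.g. optimize the off-loading/computing settings
        y = argmin_y(x)   # e.g. optimize the IRS phase-shift settings
    return x, y

# Toy coupled objective: f(x, y) = (x - 1)^2 + (y + 2)^2 + 0.5 * x * y.
# Each block subproblem is solved in closed form by setting the partial
# derivative to zero, mirroring the low-complexity per-block solutions.
argmin_x = lambda y: 1.0 - 0.25 * y    # from d f/d x = 2(x - 1) + 0.5 y = 0
argmin_y = lambda x: -2.0 - 0.25 * x   # from d f/d y = 2(y + 2) + 0.5 x = 0

x, y = bcd(argmin_x, argmin_y, 0.0, 0.0)
```

Because the coupling term is weak relative to the per-block curvature, the alternation contracts toward the joint stationary point (x, y) = (1.6, -2.4); in the paper, the analogous alternation converges to a locally optimal computing/phase-shift configuration.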
IEEE Journal on Selected Areas in Communications,
Journal year: 2021, Issue 40(1), pp. 5-36
Published: Nov. 8, 2021
The thriving of artificial intelligence (AI) applications is driving the further evolution of wireless networks. It has been envisioned that 6G will be transformative and will revolutionize wireless networks from "connected things" to "connected intelligence". However, state-of-the-art deep learning and big data analytics based AI systems require tremendous computation and communication resources, causing significant latency, energy consumption, network congestion, and privacy leakage in both the training and inference processes. By embedding model training and inference capabilities into the network edge, edge AI stands out as a disruptive technology for 6G to seamlessly integrate sensing, communication, computation, and intelligence, thereby improving the efficiency, effectiveness, privacy, and security of 6G networks. In this paper, we shall provide our vision for scalable and trustworthy edge AI systems with an integrated design of wireless communication strategies and decentralized machine learning models. New design principles for wireless networks, service-driven resource allocation optimization methods, as well as a holistic end-to-end system architecture to support edge AI will be described. Standardization, software and hardware platforms, and application scenarios are also discussed to facilitate the industrialization and commercialization of edge AI systems.
IEEE Communications Surveys & Tutorials,
Journal year: 2021, Issue 23(2), pp. 1342-1397
Published: Jan. 1, 2021
The communication and networking field is hungry for machine learning decision-making solutions to replace the traditional model-driven approaches, which have proved to be not rich enough to seize the ever-growing complexity and heterogeneity of modern systems in the field. Traditional machine learning solutions assume the existence of (cloud-based) central entities that are in charge of processing the data. Nonetheless, the difficulty of accessing private data, together with the high cost of transmitting raw data to the central entity, gave rise to a decentralized machine learning approach called Federated Learning. The main idea of federated learning is to perform on-device collaborative training of a single model without having to share the raw data with any third-party entity. Although a few survey articles on federated learning already exist in the literature, the motivation of this survey stems from three essential observations. The first one is the lack of a fine-grained multi-level classification of the federated learning literature, where existing surveys base their classifications on only one criterion or aspect. The second observation is that existing surveys focus on some common challenges but disregard other essential aspects such as reliable client selection, resource management, and training service pricing. The third observation is the lack of explicit and straightforward directives for researchers to help them design future federated learning solutions that overcome state-of-the-art research gaps. To address these points, we first provide a comprehensive tutorial on federated learning and its associated concepts, technologies, and learning approaches. We then highlight the applications and future directions of federated learning in the domain of communication and networking. Thereafter, we propose a three-level classification scheme that first categorizes the federated learning literature based on the high-level challenge that it tackles. Then, we classify each high-level challenge into a set of specific low-level challenges to foster a better understanding of the topic. Finally, we provide, within each low-level challenge, a fine-grained classification based on the technique used to address this particular challenge. For each category of high-level challenges, we provide a set of desirable criteria and future research directions aimed at helping the research community design innovative and efficient solutions. To the best of our knowledge, this survey is the most comprehensive in terms of the challenges and techniques it covers and the classification scheme it presents.
IEEE Sensors Journal,
Journal year: 2021, Issue 21(14), pp. 16301-16314
Published: April 30, 2021
With the increase of COVID-19 cases worldwide, an effective way is required to diagnose COVID-19 patients. The primary problem in diagnosing COVID-19 patients is the shortage and reliability of testing kits; due to the quick spread of the virus, medical practitioners are facing difficulty in identifying positive cases. The second real-world problem is how to share patient data among hospitals globally while keeping in view the privacy concerns of the organizations. Building a collaborative model while preserving privacy are the major concerns for training a global deep learning model. This paper proposes a framework that collects a small amount of data from different sources (various hospitals) and trains a global deep learning model using blockchain-based federated learning. Blockchain technology authenticates the data, and federated learning trains the model globally while preserving the privacy of each organization. First, we propose a data normalization technique that deals with the heterogeneity of data, as the data is gathered from hospitals having different kinds of CT scanners. Second, we use Capsule Network-based segmentation and classification to detect COVID-19 patients. Third, we design a method that can collaboratively train a global model using blockchain technology with federated learning while preserving privacy. Additionally, we collected real-life COVID-19 patient data, which is open to the research community. The proposed framework can utilize up-to-date data, which improves the recognition of computed tomography (CT) images. Finally, our results demonstrate better performance in detecting COVID-19 patients.
IEEE Journal on Selected Areas in Communications,
Journal year: 2021, Issue 39(12), pp. 3579-3605
Published: Oct. 6, 2021
The next generation of wireless networks will enable many machine learning (ML) tools and applications to efficiently analyze various types of data collected by edge devices for inference, autonomy, and decision-making purposes. However, due to resource constraints, delay limitations, and privacy challenges, edge devices cannot offload their entire collected datasets to a cloud server for centrally training their ML models or for inference purposes. To overcome these challenges, distributed learning techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges, thus reducing the communication overhead and latency as well as improving data privacy. However, deploying distributed learning over wireless networks faces several challenges, including the uncertain wireless environment (e.g., dynamic channel and interference), limited wireless resources (e.g., transmit power and radio spectrum), and hardware resources (e.g., computational power). This paper provides a comprehensive study of how distributed learning can be effectively deployed over wireless edge networks. We present a detailed overview of several emerging distributed learning paradigms, including federated learning, federated distillation, and multi-agent reinforcement learning. For each learning framework, we first introduce the motivation for deploying it over wireless networks. Then, we present a detailed literature review on the use of communication techniques for its efficient deployment. We then provide an illustrative example to show how to optimize wireless networks to improve its performance. Finally, we introduce future research opportunities. In a nutshell, this paper provides a holistic set of guidelines on how to deploy a broad range of distributed learning frameworks over real-world wireless networks.