DASNeRF: depth consistency optimization, adaptive sampling, and hierarchical structural fusion for sparse view neural radiance fields
Yongshuo Zhang, Guangyuan Zhang, Kefeng Li et al.
PLoS ONE, Journal Year: 2025, Volume and Issue: 20(5), P. e0321878
Published: May 12, 2025
To address the challenges of significant detail loss in Neural Radiance Fields (NeRF) under sparse-view input conditions, this paper proposes the DASNeRF framework, which aims to generate high-detail novel views from a limited number of viewpoints. To overcome the limitations of few-shot NeRF, including insufficient depth information and detail loss, DASNeRF introduces accurate depth priors and employs a depth-constraint strategy combining relative depth-ordering fidelity regularization with structural consistency regularization. These methods ensure reconstruction accuracy even with sparse views. The depth priors provide high-quality supervision data through a more accurate monocular depth estimation model, enhancing the learning capability and stability of the model. The relative depth-ordering regularization guides the network to learn depth relationships using local ranking priors, reducing the blurring caused by inaccurate depth estimation. Depth structural consistency regularization maintains global structure by enforcing depth continuity across neighboring pixels. These strategies enhance DASNeRF's performance in complex scenes, making the 3D reconstruction more natural. In addition, we utilize a three-layer optimal sampling strategy, consisting of coarse sampling, optimized sampling, and fine sampling, during the training process to better capture details in key regions. In this phase, the sampling-point density in key regions is adaptively increased while it is reduced in low-priority regions, improving accuracy. To alleviate overfitting, we propose an MLP structure with per-layer fusion. This design preserves the model's perception ability and effectively avoids overfitting. Specifically, each layer's input includes the output features of the previous layer and incorporates the processed five-dimensional input information, further improving reconstruction. Experimental results show that DASNeRF outperforms state-of-the-art methods on the LLFF and DTU datasets, achieving better metrics such as PSNR, SSIM, and LPIPS. The reconstructed detail and visual quality are significantly improved, demonstrating the method's potential under sparse-view conditions.
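The abstract gives no implementation details; as a hedged illustration of the relative depth-ordering regularization it describes, the following minimal PyTorch sketch penalizes rendered NeRF depths that contradict the ordering of a monocular prior over randomly sampled pixel pairs (the function name and the hinge form are assumptions, not the paper's code):

    import torch

    def depth_ranking_loss(rendered_depth, prior_depth, margin=1e-4, num_pairs=4096):
        # rendered_depth, prior_depth: (N,) depths for N sampled rays.
        n = rendered_depth.shape[0]
        i = torch.randint(0, n, (num_pairs,))
        j = torch.randint(0, n, (num_pairs,))
        # Ordering sign taken from the monocular prior for each sampled pair.
        sign = torch.sign(prior_depth[i] - prior_depth[j])
        # Hinge: penalize pairs whose rendered ordering contradicts the prior.
        return torch.relu(-sign * (rendered_depth[i] - rendered_depth[j]) + margin).mean()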
Rapid depth estimation based on a key electrowetting liquid-lens with electrically adjusted imaging focus
Chunyu Zhang, Yuxiang Xu, Xinyu Zhang et al.
Review of Scientific Instruments, Journal Year: 2025, Volume and Issue: 96(5)
Published: May 1, 2025
An effective method for rapidly performing depth estimation using a type of electrowetting liquid-lens is proposed. The lens is architected by directly coupling a cylindrical copper sidewall electrode with a top ITO electrode, leading to a dual-mode lens that is adjusted electrically, including a beam-diverging mode and a converging mode, as well as an intermediate phase-retard state. By increasing the applied signal voltage from 0 to 120 V, the imaging focus presents a wide dynamic range of (−∞, −128.6 mm) ∪ (45.6 mm, +∞). The key performances of the liquid-lens, such as the electrically tunable focus and an element response duration of less than 5 ms, are evaluated experimentally. By sweeping the focus over this range, with the lens coupled to an arrayed CMOS sensor to form an imaging setup, a sequence of focal-stack images is acquired. Considering the depth-of-field character of the equipment, which is mainly based on the adjustable focus, the measurable range can be remarkably extended further by utilizing the imaging effect in the transition region between positive and negative focus. A rapid depth-estimation algorithm based on aligning and then eliminating scene parallax is achieved.
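The parallax-elimination algorithm itself is not spelled out in the abstract; as a hedged sketch of the generic depth-from-focus step such a focal stack enables (not the paper's method), the following numpy code picks, per pixel, the stack slice with the highest local contrast and maps it to a calibrated depth; focus_depths is a hypothetical index-to-depth calibration:

    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def depth_from_focal_stack(stack, focus_depths):
        # stack: (K, H, W) grayscale focal stack; focus_depths: (K,) depth at
        # which each slice is in focus. Returns an (H, W) depth map.
        # Squared Laplacian as a simple per-pixel focus measure, locally smoothed.
        fm = np.stack([uniform_filter(laplace(img.astype(np.float64)) ** 2, size=9)
                       for img in stack])
        best = np.argmax(fm, axis=0)            # index of sharpest slice per pixel
        return np.asarray(focus_depths)[best]   # map index -> calibrated depth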
Fully Self-Supervised Depth Estimation from Defocus Clue
Haozhe Si, Bin Zhao, Dong Wang et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Journal Year: 2023, Volume and Issue: unknown, P. 9140 - 9149
Published: June 1, 2023
Depth-from-defocus (DFD), modeling the relationship between depth and the defocus pattern in images, has demonstrated promising performance in depth estimation. Recently, several self-supervised works try to overcome the difficulties of acquiring accurate depth ground-truth. However, they depend on the all-in-focus (AIF) image, which cannot be captured in real-world scenarios. Such a limitation discourages the applications of DFD methods. To tackle this issue, we propose a completely self-supervised framework that estimates depth purely from a sparse focal stack. We show that our framework circumvents the needs for the AIF image and depth ground-truth, yet receives superior predictions, thus closing the gap between the theoretical success of DFD works and their applications in the real world. In particular, we propose (i) a more realistic setting for DFD tasks, where no AIF image or depth ground-truth is available; (ii) a novel self-supervision framework that provides reliable predictions under this challenging setting. The proposed framework uses a neural model to predict the depth and AIF image, and utilizes an optical model to validate and refine the prediction. We verify our framework on three benchmark datasets with both rendered focal stacks and real focal stacks. Qualitative and quantitative evaluations show that our method provides a strong baseline for self-supervised DFD tasks. The source code is publicly available at https://github.com/Ehzoahis/DEReD.
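As a hedged sketch of the thin-lens relation that optical models of this kind typically build on (standard optics, not code from the DEReD repository), the circle-of-confusion diameter per pixel follows from depth, focus distance, focal length, and f-number:

    import numpy as np

    def circle_of_confusion(depth, focus_dist, focal_len, f_number):
        # Thin-lens circle-of-confusion diameter (same units as focal_len)
        # for scene depth(s) `depth` when the lens focuses at `focus_dist`.
        aperture = focal_len / f_number
        return aperture * np.abs(depth - focus_dist) / depth \
               * focal_len / (focus_dist - focal_len)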
Spatially varying defocus map estimation from a single image based on spatial aliasing sampling method
Optics Express, Journal Year: 2024, Volume and Issue: 32(6), P. 8959
Published: Feb. 8, 2024
In current optical systems, defocus blur is inevitable due to the constrained depth of field. However, it is difficult to accurately identify the amount of defocus blur at each pixel position, as the point spread function changes spatially. In this paper, we introduce a histogram-invariant spatial aliasing sampling method for reconstructing all-in-focus images, which addresses the challenge of insufficient pixel-level annotated samples, and subsequently introduce a high-resolution network for estimating spatially varying defocus maps from a single image. The accuracy of the proposed method is evaluated on various synthetic and real data. The experimental results demonstrate that our model outperforms state-of-the-art methods in defocus map estimation significantly.
2HDED:Net for Joint Depth Estimation and Image Deblurring from a Single Out-of-Focus Image
2022 IEEE International Conference on Image Processing (ICIP), Journal Year: 2022, Volume and Issue: unknown, P. 2006 - 2010
Published: Oct. 16, 2022
Depth estimation and all-in-focus image restoration from defocused RGB images are related problems, although most of the existing methods address them separately. The few approaches that solve both problems use a pipeline processing to derive the depth or defocus map as an intermediary product that serves as a support for deblurring, which remains the primary goal. In this paper, we propose a new Deep Neural Network (DNN) architecture that performs the two tasks in parallel by attaching the same importance to each. Our Two-Headed Depth Estimation and Deblurring network (2HDED:NET) is an encoder-decoder network for Depth from Defocus (DFD) extended with a deblurring branch, both branches sharing the same encoder. The network is tested on the NYU-Depth V2 dataset and compared with several state-of-the-art methods for depth estimation and deblurring.
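A minimal PyTorch sketch of the two-headed, shared-encoder layout the abstract describes; the layer sizes are illustrative assumptions, not the paper's architecture:

    import torch
    import torch.nn as nn

    class TwoHeadedNet(nn.Module):
        # Shared encoder feeding two decoders: one for depth, one for deblurring.
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
            def decoder(out_ch):
                return nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1))
            self.depth_head = decoder(1)   # depth map
            self.aif_head = decoder(3)     # all-in-focus RGB image

        def forward(self, x):
            z = self.encoder(x)
            return self.depth_head(z), self.aif_head(z)

Weighting the depth and deblurring losses equally at training time would match the paper's stated goal of attaching the same importance to both tasks.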
HI-Net: Boosting Self-Supervised Indoor Depth Estimation via Pose Optimization
IEEE Robotics and Automation Letters, Journal Year: 2022, Volume and Issue: 8(1), P. 224 - 231
Published: Nov. 24, 2022
Pose estimation plays a critical role in self-supervised monocular depth estimation for indoor scenes, especially those involving complex ego-motion. In this letter, we leverage two-view geometry constraints in pose estimation to boost the accuracy of the estimated pose, which ultimately improves the performance of depth estimation. Specifically, we decompose pose estimation into two steps: initial homography estimation and iterative residual pose refinement. We first introduce a Homography Estimation Module (HEM) to estimate large 3-DoF rotations. Then, we refine the 6-DoF pose with an Iterative Residual Refinement Module (IRM). Finally, the supervision signal generated from the refined pose is used for training the DepthNet. Experiments on the NYU V2 dataset show that our approach significantly improves DepthNet, and the proposed method achieves state-of-the-art results. Furthermore, experiments on ScanNet demonstrate the generalization ability of both modules.
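For a pure rotation R between two views sharing intrinsics K, the induced homography is H = K R K^(-1); as a hedged illustration of recovering a 3-DoF rotation from an estimated homography under that assumption (not the HEM implementation), in numpy:

    import numpy as np

    def rotation_from_homography(H, K):
        # Recover a rotation from a rotation-induced homography H = K R K^-1,
        # projecting onto SO(3) via SVD to absorb scale and estimation noise.
        R = np.linalg.inv(K) @ H @ K
        U, _, Vt = np.linalg.svd(R)
        R = U @ Vt
        if np.linalg.det(R) < 0:      # enforce a proper rotation (det = +1)
            R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
        return R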
Self-Supervised Spatially Variant PSF Estimation for Aberration-Aware Depth-from-Defocus
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Journal Year: 2024, Volume and Issue: unknown, P. 2560 - 2564
Published: March 18, 2024
In this paper, we address the task of aberration-aware depth-from-defocus (DfD), which takes into account the spatially variant point spread functions (PSFs) of a real camera. To effectively obtain the spatially variant PSFs of a real camera without requiring any ground-truth PSFs, we propose a novel self-supervised learning method that leverages a pair of sharp and blurred images, which can be easily captured by changing the aperture setting of the camera. In our PSF estimation, we assume rotationally symmetric PSFs and introduce a polar coordinate system to more accurately learn the PSF estimation network. We also handle the focus-breathing phenomenon that occurs in DfD situations. Experimental results on synthetic and real data demonstrate the effectiveness of our method regarding both PSF estimation and depth estimation.
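As a hedged illustration of the rotational-symmetry assumption, the following numpy sketch expands a 1-D radial profile into a normalized 2-D PSF kernel; in the paper the profile would come from the learned network, whereas here it is simply an input array:

    import numpy as np

    def radial_profile_to_psf(profile, size):
        # Expand a 1-D radial profile (profile[k] = PSF value at radius k pixels)
        # into a normalized, rotationally symmetric (size x size) kernel.
        c = size // 2
        y, x = np.mgrid[:size, :size]
        r = np.hypot(x - c, y - c)
        # Linear interpolation of the profile at each pixel's radius.
        psf = np.interp(r, np.arange(len(profile)), profile, right=0.0)
        return psf / psf.sum()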
Camera-Independent Single Image Depth Estimation from Defocus Blur
2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Journal Year: 2024, Volume and Issue: unknown, P. 3737 - 3746
Published: Jan. 3, 2024
Monocular depth estimation is an important step in many downstream tasks in machine vision. We address the topic of estimating monocular depth from defocus blur, which can yield more accurate results than semantic-based methods. The existing techniques are sensitive to the particular camera that the images are taken from. We show how several camera-related parameters affect defocus blur using optical physics equations and how they make depth estimation depend on these parameters. The simple correction procedure we propose to alleviate this problem does not require any retraining of the original model. We created a synthetic dataset that can be used to test the camera-independent performance of depth estimation models. We evaluate our model on both synthetic and real datasets (DDFF12 and NYU V2) obtained with different cameras and show that our methods are significantly more robust to changes in cameras. Code: https://github.com/sleekEagle/defocus_camind.git
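Under the thin-lens model, the circle of confusion for a fixed scene scales as f^2 / (N (s_f - f)), so a defocus map predicted under one camera's parameters can in principle be rescaled to another's; a hedged sketch of that scale factor (illustrative only, not the repository's exact correction):

    def blur_scale_factor(f1, N1, f2, N2, focus_dist):
        # Ratio by which the defocus blur of the same scene changes when the
        # lens changes from (focal length f1, f-number N1) to (f2, N2) at the
        # same focus distance, per the thin-lens model where the blur is
        # proportional to f^2 / (N * (s_f - f)).
        k1 = f1 ** 2 / (N1 * (focus_dist - f1))
        k2 = f2 ** 2 / (N2 * (focus_dist - f2))
        return k2 / k1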
Are Realistic Training Data Necessary for Depth-from-Defocus Networks?
IECON 2020 The 46th Annual Conference of the IEEE Industrial Electronics Society, Journal Year: 2022, Volume and Issue: unknown, P. 1 - 6
Published: Oct. 17, 2022
Image-based depth estimation is one of the important tasks in computer vision. Depth-from-defocus (DfD) methods estimate the scene depth from a single or multiple defocused images by exploiting depth-dependent defocus blur cues. Because of the difficulty of obtaining a real-world dataset with ground-truth depth, most deep-learning-based DfD methods rely on a synthetic training dataset, where more realistic rendering is considered desirable for accurate depth estimation. In this paper, we consider whether realistic 3D objects are really necessary for training DfD networks. To investigate this, we design a very simple and fast training data generation method using only two front-parallel texture planes and compare it with the widely-applied path-tracing rendering of a common 3D object dataset. Through experiments, we show that the 2-plane method provides comparable or even slightly better performance and can be used as an alternative for practical network training.
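As a hedged sketch of what two-plane data generation can look like (the blur kernels, compositing, and labels are assumptions, not the paper's renderer), the following numpy code composites a near and a far texture plane with per-plane defocus blur and a softened occlusion mask:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def two_plane_sample(tex_far, tex_near, near_mask, sigma_far, sigma_near):
        # tex_far, tex_near: (H, W) textures; near_mask: (H, W) bool occupancy
        # of the near plane; sigma_*: Gaussian sigmas standing in for each
        # plane's depth-dependent defocus blur.
        far = gaussian_filter(np.asarray(tex_far, float), sigma_far)
        near = gaussian_filter(np.asarray(tex_near, float), sigma_near)
        # Blur the mask with the near plane's kernel for soft occlusion edges.
        alpha = gaussian_filter(near_mask.astype(float), sigma_near)
        image = alpha * near + (1.0 - alpha) * far
        depth = np.where(near_mask, 1.0, 0.0)   # two-level depth labels
        return image, depth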
Measuring Focus Quality in Vector Valued Images for Shape from Focus
2022 26th International Conference on Pattern Recognition (ICPR), Journal Year: 2022, Volume and Issue: unknown
Published: Aug. 21, 2022
In shape from focus (SFF) methods, the focus measure (FM) operator plays a key role in determining the ultimate 3D shape of the object. Usually, vector-valued (color) images are converted into grayscale before applying the FM operator. This conversion saves computations; however, it affects the accuracy of the focus values, which deteriorates the depth map. This paper proposes an effective method to find the relative degree of focus for the pixels of a color image sequence. In the first step, the images are transformed into scalar-valued images by computing the scaled norm of the resultant of vector differences. The scaling factor is computed through various features based on vector operations, including the dot product, cross product, projections, and distances between vectors. Then differential kernels with a gap are applied to the scalar images to compute the focus values. Experiments conducted using synthetic and real image sequences reveal that the proposed method provides better-quality 3D shapes of the objects.
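As a hedged sketch of the "differential kernels with a gap" step (the preceding vector-to-scalar transform is omitted, and the kernel layout is an assumption), a modified-Laplacian-style focus measure in numpy:

    import numpy as np
    from scipy.ndimage import convolve

    def focus_measure_gap_laplacian(img, gap=2):
        # Second-difference kernel whose outer samples sit `gap` pixels
        # away from the center, applied along each image axis.
        img = np.asarray(img, dtype=np.float64)
        k = np.zeros(2 * gap + 1)
        k[0], k[gap], k[-1] = -1.0, 2.0, -1.0
        dx = convolve(img, k[np.newaxis, :])   # horizontal response
        dy = convolve(img, k[:, np.newaxis])   # vertical response
        return np.abs(dx) + np.abs(dy)         # per-pixel focus value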