Light Field Angular Super-Resolution via Spatial-Angular Correlation Extracted by Deformable Convolutional Network
D. Li, Rui Zhong, Yungang Yang et al.
Sensors,
Journal Year:
2025,
Volume and Issue:
25(4), P. 991 - 991
Published: Feb. 7, 2025
Light Field Angular Super-Resolution (LFASR) addresses the issue that Light Field (LF) images cannot simultaneously achieve both high spatial and angular resolution due to the limited resolution of optical sensors. Since Spatial-Angular Correlation (SAC) features are closely related to the structure of LF images, their accurate and complete extraction is crucial for the quality of images reconstructed by LFASR methods based on Deep Neural Networks (DNNs). In low-angular-resolution LF images, SAC must be extracted from pixels that lie at a great distance from each other yet exhibit strong correlations. However, existing DNN-based methods fail to extract SAC accurately and completely. Due to their limited receptive field, regular Convolutional Neural Networks (CNNs) are unable to capture distant pixels, leading to incomplete feature extraction. On the other hand, large convolution kernels and attention mechanisms use an excessive number of features, resulting in insufficient accuracy of the extracted features. To solve this problem, we introduce the Deformable Convolutional Network (DCN), which adaptively changes the position of each sampling point using offsets so as to reach distant but strongly correlated pixels. In addition, in order to make the offsets of the DCN more accurate and to further improve reconstruction quality, we also propose a Multi-Maximum-Offsets Fusion DCN (MMOF-DCN). MMOF-DCN reduces the exploration range for finding the desired offsets, thereby improving efficiency. Experimental results show that our proposed method has advantages on both a real-world dataset and a synthetic dataset. The PSNR values on scenes with large disparity improve by 0.45 dB compared with existing methods.
Language: English
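The deformable-convolution mechanism summarized above can be illustrated with a short sketch. This is not the authors' MMOF-DCN; it is a minimal example, assuming a PyTorch/torchvision toolchain, of how learned per-location offsets let a 3x3 kernel sample distant but correlated light-field pixels.

```python
# Minimal sketch (assumed PyTorch/torchvision toolchain, not the paper's code):
# a deformable convolution whose sampling positions are shifted by learned offsets.
import torch
from torchvision.ops import deform_conv2d

N, C_in, H, W = 1, 8, 32, 32          # one sub-aperture feature map (toy sizes)
C_out, k = 16, 3                       # 3x3 deformable kernel

x = torch.randn(N, C_in, H, W)
# One (dy, dx) offset pair per kernel tap per output location, predicted from the input.
offset_pred = torch.nn.Conv2d(C_in, 2 * k * k, kernel_size=3, padding=1)
weight = torch.randn(C_out, C_in, k, k)

offsets = offset_pred(x)               # shape: (N, 2*k*k, H, W)
y = deform_conv2d(x, offsets, weight, padding=1)
print(y.shape)                         # torch.Size([1, 16, 32, 32])
```

The paper's MMOF-DCN additionally constrains and fuses several maximum-offset hypotheses; the sketch only shows the underlying offset mechanism.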
High-Speed 3D Imaging
Zijun Ouyang, Xuge Zhang, Yutong Li et al.
Published: Jan. 1, 2025
Language: English
Dynamic Spectral fluorescence microscopy via Event-based & CMOS image-sensor fusion
Robert D. Baird, Apratim Majumder, Rajesh Menon et al.
Optics Express,
Journal Year:
2024,
Volume and Issue:
33(2), P. 2169 - 2169
Published: Dec. 23, 2024
We present a widefield fluorescence microscope that integrates an event-based image sensor (EBIS) with a CMOS image sensor (CIS) for ultra-fast microscopy with spectral distinction capabilities. The EBIS achieves a temporal resolution of ∼10 μs (∼100,000 frames/s), while the CIS provides diffraction-limited spatial resolution. A diffractive optical element encodes spectral information into a diffractogram, which is recorded by the CIS. The diffractogram is processed using a deep neural network to resolve two fluorescent beads whose emission peaks are separated by only 7 nm and exhibit 88% overlap. We validate our approach by imaging the capillary flow of fluorescent beads, demonstrating a significant advancement in ultra-fast spectrally resolved microscopy. This technique holds broad potential for elucidating foundational dynamic biological processes.
Language: English
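To make the quoted ∼10 μs / ∼100,000 frames-per-second figure concrete, the sketch below bins an event stream into 10 μs frames. It is an illustrative assumption, not the authors' pipeline; the sensor resolution and the (t, x, y, polarity) event format are placeholders.

```python
# Illustrative sketch (not the authors' pipeline): binning (t, x, y, polarity) events
# into 10-microsecond frames, i.e. 1 s / 10 us = 100,000 frames per second.
import numpy as np

H, W = 260, 346                     # assumed EBIS resolution, for illustration only
bin_us = 10                         # ~10 us temporal resolution
events = np.array([(12, 100, 50, 1), (15, 101, 50, -1), (27, 30, 40, 1)],
                  dtype=[('t', 'i8'), ('x', 'i4'), ('y', 'i4'), ('p', 'i1')])

n_bins = int(events['t'].max() // bin_us) + 1
frames = np.zeros((n_bins, H, W), dtype=np.int16)
for ev in events:
    frames[ev['t'] // bin_us, ev['y'], ev['x']] += ev['p']   # signed event count per pixel
print(frames.shape)                 # (3, 260, 346): one frame per 10-us bin
```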
Event-based Single Molecule Localization Microscopy (eventSMLM) for High Spatio-Temporal Super-resolution Imaging
Jigmi Basumatary, S Aravinth, Neeraj Pant et al.
bioRxiv (Cold Spring Harbor Laboratory),
Journal Year:
2023,
Volume and Issue:
unknown
Published: Dec. 30, 2023
Photon emission by single molecules is a random event with a well-defined distribution. This calls for event-based detection in single-molecule localization microscopy. The event detector has the advantage of providing the temporal change of the detected photons and their characteristics within the blinking period (typically ∼30 ms) of a single molecule. This information can be used to better localize molecules within a user-defined collection time (shorter than the average blinking time) of the detector. The events collected over every short interval (∼3 ms) give rise to several independent photon distributions (temporal PSFs, tPSFs). The experiment showed that single molecules intermittently emit photons, so capturing events over intervals shorter than the entire blinking period gives multiple realizations of the PSFs. Specifically, this translates into sparse active pixels per frame on the chip (image plane). Ideally, multiple tPSFs yield multiple position estimates of single molecules, leading to multiple centroids. Fitting these centroid points to a circle provides an approximate position (circle center) and a geometric precision (determined by the FWHM of the fitted Gaussian). Since the estimate (position and precision) is directly driven by the data (photons over pixels) recorded by the detector, the estimated value is purely experimental rather than theoretical (Thomson's formula). Moreover, the nature of the event camera substantially reduces noise and background, providing a low-noise environment. The method is tested on three different test samples: (1) scattered Cy3 dye on a coverslip, (2) the mitochondrial network of a cell, and (3) Dendra2HA-transfected live NIH3T3 cells (Influenza-A model). A super-resolution map is constructed and analyzed based on the recorded events (temporal change and number of photons). Experimental results show a localization precision of ∼10 nm, which is a 6-fold improvement over standard SMLM. Imaging of HA clustering in the cellular environment reveals a spatio-temporal PArticle Resolution (PAR) of 2.3 l_p × τ = 14.11 par, where 1 par = 10⁻¹¹ meter · second. However, brighter probes (such as Cy3) are capable of a PAR of 3.16 par. Cluster analysis shows > 81% colocalization with standard SMLM, indicating the consistency of the proposed eventSMLM technique. The dynamics (migration, association, dissociation) of HA clusters are recorded over the first 60 minutes. With the availability of high spatio-temporal resolution, we envision the emergence of a new kind of microscopy with particle resolution in the sub-10 nm regime.
Language: English
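The centroid-to-circle step described above can be sketched with a standard algebraic (Kasa) least-squares circle fit. The code below is an illustrative assumption, not the authors' implementation; it uses synthetic centroids to show how the circle centre serves as the position estimate.

```python
# Hedged sketch (not the authors' code): fit tPSF centroids to a circle; the centre is the
# approximate molecule position and the scatter of the points reflects the geometric precision.
import numpy as np

def fit_circle(cx, cy):
    """Algebraic (Kasa) least-squares circle fit; returns (x0, y0, r)."""
    A = np.column_stack([2 * cx, 2 * cy, np.ones_like(cx)])
    b = cx**2 + cy**2
    (x0, y0, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + x0**2 + y0**2)
    return x0, y0, r

# Synthetic centroids scattered around a true position of (50, 50) nm, for illustration.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 20)
cx = 50 + 5 * np.cos(theta) + rng.normal(0, 0.5, 20)
cy = 50 + 5 * np.sin(theta) + rng.normal(0, 0.5, 20)
print(fit_circle(cx, cy))   # centre close to (50, 50), radius close to 5
```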
Model-Based Explainable Deep Learning for Light-Field Microscopy Imaging
IEEE Transactions on Image Processing,
Journal Year:
2024,
Volume and Issue:
33, P. 3059 - 3074
Published: Jan. 1, 2024
In modern neuroscience, observing the dynamics of large populations of neurons is a critical step toward understanding how neural networks process information. Light-field microscopy (LFM) has emerged as a type of scanless, high-speed, three-dimensional (3D) imaging tool that is particularly attractive for this purpose. Imaging neuronal activity using LFM calls for the development of novel computational approaches that fully exploit the domain knowledge embedded in physics and optics models, as well as enabling high interpretability and transparency. To this end, we propose a model-based explainable deep learning approach for LFM. Different from purely data-driven methods, the proposed approach integrates wave-optics theory, sparse representation and non-linear optimization with an artificial neural network. In particular, the architecture of the network is designed following precise signal models. Moreover, the network's parameters are learned from a training dataset with a layer-wise strategy tailored for distillation. Such a design allows the network to take advantage of new features. It combines the benefits of both model-based and learning-based methods, thereby contributing to superior interpretability, transparency and performance. By evaluating it on structural and functional data obtained from scattering mammalian brain tissues, we demonstrate its capabilities to achieve fast, robust 3D localization of neuron sources and accurate identification of neuronal activity.
Language: English
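The abstract describes a network whose architecture follows precise wave-optics and sparse-coding signal models. A common way to realize such a model-based design is algorithm unrolling; the sketch below is a generic assumption rather than the paper's architecture, showing one unrolled ISTA step whose linear operator is initialized from a known forward model so that every layer keeps a physical interpretation.

```python
# Generic model-based layer (an assumption, not the paper's exact network): one unrolled
# ISTA step for y = A x with a sparsity prior, where A could come from a wave-optics
# light-field forward model.
import torch
import torch.nn as nn

class ISTALayer(nn.Module):
    def __init__(self, A: torch.Tensor, step: float = 0.1, thresh: float = 0.01):
        super().__init__()
        self.A = nn.Parameter(A.clone())             # forward model, fine-tuned by training
        self.step = nn.Parameter(torch.tensor(step))
        self.thresh = nn.Parameter(torch.tensor(thresh))

    def forward(self, x, y):
        # Gradient step on ||A x - y||^2 followed by soft-thresholding (sparsity prior).
        grad = self.A.t() @ (self.A @ x - y)
        z = x - self.step * grad
        return torch.sign(z) * torch.clamp(z.abs() - self.thresh, min=0.0)

A = torch.randn(64, 128)    # toy forward model: 128 source voxels -> 64 light-field measurements
layer = ISTALayer(A)
y = torch.randn(64, 1)      # one measured light-field vector
x = layer(torch.zeros(128, 1), y)   # one reconstruction iteration
```

Stacking several such layers and training them layer by layer is one possible reading of the layer-wise, distillation-oriented training strategy mentioned above.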
CodedEvents: Optimal Point-Spread-Function Engineering for 3D-Tracking with Event Cameras
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),
Journal Year:
2024,
Volume and Issue:
17, P. 25265 - 25275
Published: June 16, 2024
Language: English
光场表征及其分辨率提升技术:文献综述及最新进展(特邀) [Light-field representation and its resolution enhancement techniques: literature review and recent advances (Invited)]
张润南 ZHANG Runnan, 周宁 ZHOU Ning, 周子豪 ZHOU Zihao et al.
Infrared and Laser Engineering,
Journal Year:
2024,
Volume and Issue:
53(9), P. 20240347 - 20240347
Published: Jan. 1, 2024
Ultra-fast light-field microscopy with event detection
Light Science & Applications,
Journal Year:
2024,
Volume and Issue:
13(1)
Published: Nov. 7, 2024
The event detection technique has been introduced into light-field microscopy, boosting its imaging speed by orders of magnitude, with simultaneous axial resolution enhancement in scattering media.
Language: English