Data-driven, deep-learning modeling frameworks have recently been developed for forecasting time-series data. Such machine-learning models may be useful in multiple domains, including the atmospheric and oceanic ones, and, in general, the larger fluids community. The present work investigates the possible effectiveness of such deep neural operator models for reproducing and predicting classic fluid flows and simulations of realistic ocean dynamics. We first briefly evaluate their capabilities when trained on a simulated two-dimensional flow past a cylinder. We then investigate their application to the surface circulation in the Middle Atlantic Bight and Massachusetts Bay, learning from high-resolution data-assimilative simulations employed during real sea experiments. We confirm that the trained models are capable of predicting idealized periodic eddy shedding. For our preliminary study of realistic ocean surface flows, they can predict several features and show some skill, providing potential for future research and applications.
Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1), Published: Sept. 12, 2024
Learning operators with deep neural networks is an emerging paradigm for scientific computing. The Deep Operator Network (DeepONet) is a modular operator learning framework that allows flexibility in choosing the kind of neural network to be used in the trunk and/or branch of the DeepONet. This is beneficial, as it has been shown many times that different types of problems require different kinds of network architectures for effective learning. In this work, we design an efficient neural operator based on the DeepONet architecture. We introduce the U-Net enhanced DeepONet (U-DeepONet) for the solution of highly complex CO2 …
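The trunk/branch composition described in this abstract can be sketched in a few lines. The following is an illustrative toy, not the authors' implementation: both sub-networks are small random MLPs (all sizes are arbitrary), and the operator output G(u)(y) is the inner product of the branch embedding of the sampled input function u with the trunk embedding of the query coordinate y.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    # Small fully connected network as a list of (W, b) pairs.
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Branch net encodes the input function u sampled at m sensor locations;
# trunk net encodes the query coordinate y; their inner product gives G(u)(y).
m, p = 32, 16                      # hypothetical sensor count / latent width
branch = init_mlp([m, 64, p])
trunk = init_mlp([1, 64, p])

u = np.sin(np.linspace(0, np.pi, m))[None, :]   # one sampled input function
y = np.array([[0.5]])                           # one query coordinate
G_u_y = np.sum(forward(branch, u) * forward(trunk, y), axis=-1)
print(G_u_y.shape)
```

In a trained DeepONet, either sub-network could be swapped for a CNN or another architecture, which is the modularity the abstract refers to.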
Scientific Reports, Journal Year: 2024, Volume and Issue: 14(1), Published: Oct. 11, 2024
This research introduces an accelerated training approach for Vanilla Physics-Informed Neural Networks (PINNs) that addresses three factors affecting the loss function: the initial weight state of the neural network, the ratio of domain to boundary points, and the loss weighting factor. The proposed method involves two training phases. In the first phase, a unique loss function is created using a subset of the boundary conditions and partial differential equation terms. Furthermore, we introduce preprocessing procedures that aim to decrease the variance during initialization and to choose domain points according to various network initializations. The second phase resembles Vanilla-PINN training, but a portion of the random weights are substituted with the weights from the first phase. This implies that the network's structure is designed to prioritize the boundary conditions, subsequently affecting the overall convergence. The study evaluates three benchmarks: two-dimensional flow over a cylinder, an inverse problem of inlet velocity determination, and the Burger equation. Incorporating weights generated in the first phase neutralizes loss-imbalance effects. Notably, the proposed approach outperforms Vanilla-PINN in terms of speed and convergence likelihood, and it eliminates the need for hyperparameter tuning to balance the loss function.
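The two-phase idea, pretraining on a boundary-condition-only objective and then reusing those weights to initialize full training, can be sketched as follows. This is a toy stand-in, not the paper's method: a tiny NumPy network is fit by hand-written gradient descent to the hypothetical boundary values u(0) = u(1) = 0, and the resulting weights would then seed phase two (the usual PDE-plus-boundary loss), which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny one-hidden-layer network in plain NumPy (hypothetical sizes).
W1 = rng.standard_normal((1, 16)); b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.1; b2 = np.zeros(1)

def net(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Phase 1: fit only the boundary conditions u(0) = u(1) = 0 with
# hand-written gradient descent on the boundary loss.
xb = np.array([[0.0], [1.0]])
ub = np.zeros((2, 1))
lr = 0.1
for _ in range(500):
    h = np.tanh(xb @ W1 + b1)
    err = h @ W2 + b2 - ub
    dh = (err @ W2.T) * (1.0 - h**2)        # backprop through tanh
    W2 -= lr * h.T @ err / len(xb); b2 -= lr * err.mean(0)
    W1 -= lr * xb.T @ dh / len(xb); b1 -= lr * dh.mean(0)

# Phase 2 (not shown) would run the usual PINN training -- PDE residual
# plus boundary loss -- initialized from these phase-1 weights instead of
# a fresh random draw.
print(float(np.abs(net(xb)).max()))   # boundary misfit after phase 1
```

The printed boundary misfit is near zero, which is the point of the initialization: phase two starts from a network that already honors the boundary conditions.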
Physics of Fluids, Journal Year: 2024, Volume and Issue: 36(1), Published: Jan. 1, 2024
Variable-fidelity surrogate models leverage low-fidelity data, with its low cost, to assist in constructing high-precision models, thereby improving modeling efficiency. However, traditional machine learning methods require a high correlation between the low-precision and high-precision data. To address this issue, a variable-fidelity deep neural network model based on transfer learning (VDNN-TL) is proposed. VDNN-TL selects and retains the information encapsulated in data of different fidelities through shared network layers, reducing the model's demand for data correlation and enhancing its robustness. Two case studies are used to simulate scenarios with poor data correlation, and the predictive accuracy of VDNN-TL is compared with that of traditional surrogate models (e.g., Kriging and Co-Kriging). The obtained results demonstrate that, under the same modeling cost, VDNN-TL achieves higher predictive accuracy. Furthermore, in a waverider shape multidisciplinary design optimization practice, the application of VDNN-TL improves optimization efficiency by 98.9%. After optimization, the lift-to-drag ratio increases by 7.86%, and the volume increases by 26.2%. Moreover, the performance evaluation error for both the initial and optimized configurations is less than 2%, further validating the effectiveness of VDNN-TL.
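A minimal multi-fidelity illustration of the idea (not the VDNN-TL architecture itself): a cheap surrogate is fit on abundant low-fidelity samples, then reused as a feature when fitting a small correction on a handful of high-fidelity samples. The response functions, feature scales, and sample counts below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical responses: a cheap low-fidelity trend and an expensive
# high-fidelity one that differs in scale and adds a linear term.
lo = lambda t: np.sin(2 * np.pi * t)
hi = lambda t: 1.2 * np.sin(2 * np.pi * t) + 0.3 * t

# Random tanh features standing in for trained hidden layers.
W = rng.standard_normal((1, 64)) * 6.0
b = rng.standard_normal(64) * 3.0
feats = lambda t: np.tanh(t[:, None] @ W + b)

# Step 1: fit the low-fidelity surrogate on abundant cheap samples.
t_lo = np.linspace(0, 1, 200)
a_lo, *_ = np.linalg.lstsq(feats(t_lo), lo(t_lo), rcond=None)
surrogate_lo = lambda t: feats(t) @ a_lo

# Step 2: transfer -- reuse the low-fidelity surrogate as a feature and
# fit only a 3-parameter correction on a handful of expensive samples.
t_hi = np.linspace(0, 1, 8)
A = np.column_stack([surrogate_lo(t_hi), t_hi, np.ones_like(t_hi)])
c, *_ = np.linalg.lstsq(A, hi(t_hi), rcond=None)

t_test = np.linspace(0, 1, 50)
pred = np.column_stack([surrogate_lo(t_test), t_test,
                        np.ones_like(t_test)]) @ c
err = np.abs(pred - hi(t_test)).max()
print(err)
```

This linear-correction construction is closer in spirit to Co-Kriging than to a deep transfer-learning network, but it shows why correlation between the fidelities matters: the correction only has a few parameters, so most of the signal must come from the low-fidelity surrogate.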
Physical Review Fluids, Journal Year: 2024, Volume and Issue: 9(8), Published: Aug. 12, 2024
The implicit U-Net enhanced Fourier neural operator (IUFNO) combines the loop structure of the implicit FNO (IFNO) with the U-Net, leading to long-term predictive ability in large-eddy simulations (LES) of turbulent channel flow. It is found that the IUFNO outperforms the traditional dynamic Smagorinsky model (DSM) and the wall-adapted local eddy-viscosity (WALE) model at coarse LES grids. The predictions of both mean and fluctuating quantities by the IUFNO are closer to the filtered direct numerical simulation (fDNS) benchmark compared to the traditional models, while the computational cost of the IUFNO is much lower.
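The core building block of FNO-type models is the spectral convolution layer, which can be sketched as below for a single 1D channel. This is an illustrative reduction, not the IUFNO itself: the implicit loop and the U-Net path are omitted, and the mode count and weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def fourier_layer(v, weights, modes):
    """One FNO spectral layer: FFT, truncate to the lowest `modes`
    frequencies, multiply by learned complex weights per mode, inverse
    FFT (1D, single-channel sketch)."""
    v_hat = np.fft.rfft(v)
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes] = weights * v_hat[:modes]
    return np.fft.irfft(out_hat, n=len(v))

n, modes = 64, 12                                   # hypothetical sizes
w = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)
v = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))
out = fourier_layer(v, w, modes)
print(out.shape)
```

In the IUFNO described above, layers like this are iterated in an implicit loop (shared weights across iterations) with a U-Net branch recovering the small-scale content that the mode truncation discards.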
Proceedings of the National Academy of Sciences, Journal Year: 2023, Volume and Issue: 120(39), Published: Sept. 19, 2023
This paper introduces the paradigm of "in-context operator learning" and the corresponding model "In-Context Operator Networks" to simultaneously learn operators from prompted data and apply them to new questions during the inference stage, without any weight update. Existing methods are limited to using a neural network to approximate a specific equation solution or operator, requiring retraining when switching to a new problem with different equations. By training a single neural network as an operator learner, rather than a solution/operator approximator, we can not only get rid of retraining (even fine-tuning) for new problems but also leverage the commonalities shared across operators, so that only a few examples in the prompt are needed when learning a new operator. Our numerical results show the capability of a single neural network as a few-shot operator learner for a diversified type of differential equation problems, including forward and inverse problems of ordinary differential equations, partial differential equations, and mean-field control, and its ability to generalize beyond the training distribution.
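The essence of in-context operator learning, inferring an operator from prompted input-output examples at inference time with no weight update, can be mimicked in a linear toy setting. This stand-in replaces the trained network with least-squares identification, and all sizes are hypothetical; it only illustrates the interface, not the method.

```python
import numpy as np

rng = np.random.default_rng(4)

# The "prompt" holds a few (input function, output function) pairs drawn
# from one hidden linear operator; the solver must infer the operator from
# the prompt alone and apply it to a new question -- no weight update.
n = 16                                          # discretization size
A_true = rng.standard_normal((n, n)) * 0.1      # hidden operator
prompt_u = rng.standard_normal((20, n))         # few-shot example inputs
prompt_Gu = prompt_u @ A_true.T                 # corresponding outputs

# "Inference": identify the operator from the prompt by least squares,
# then answer the new question.
A_hat, *_ = np.linalg.lstsq(prompt_u, prompt_Gu, rcond=None)
query = rng.standard_normal(n)
answer = query @ A_hat
print(np.abs(answer - A_true @ query).max())
```

The actual model in the paper learns this prompt-to-answer mapping with a neural network across many operator families, which is what lets it share commonalities between problems and work from very few examples.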
Advances in Aerodynamics, Journal Year: 2025, Volume and Issue: 7(1), Published: Jan. 9, 2025
Building accurate and generalizable machine-learning models requires large training datasets. In aerodynamics, the quantities of interest are typically governed by complex, non-linear mechanisms, which neural networks are well-suited to address. However, the acquisition of large, high-fidelity datasets from either simulations or experiments can be expensive. In this work, a transfer-learning framework is explored to reduce the reliance on these expensive data by exploiting the cost-effectiveness of low-fidelity analyses, such as the inviscid panel method, for constructing extensive datasets. By first developing a robust base model on low-fidelity pressure distributions, target models can "learn" by simply transferring the relevant embedded features to facilitate modelling, instead of solely relying on access to high-fidelity samples. Assessment reveals performance gains over conventional learning schemes in (1) fidelity enhancement of pressure distributions; (2) generalizing prior knowledge to learn adjacent skin-friction properties, even without a low-fidelity equivalent; and (3) extrapolation to yet-to-be-seen operating conditions. Under conditions of limited training samples, test MSE evaluations improved by magnitudes of up to 10² and 10¹ for the three respective tasks. As such, the findings motivate further investigations to support data-scarce surrogate modelling in more empirical settings.
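A sketch of this transfer scheme under stated assumptions (the targets below are hypothetical stand-ins for panel-method and high-fidelity responses, not the paper's data): pretrain a small network on abundant low-fidelity samples, then freeze the hidden layer and refit only the output head on a few high-fidelity samples.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical targets: "lo" mimics a cheap panel-method trend over an
# input a; "hi" a corrected high-fidelity trend. Neither is real data.
lo = lambda a: 0.8 * a
hi = lambda a: 0.8 * a + 0.1 * a**2

W1 = rng.standard_normal((1, 16)); b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.1; b2 = np.zeros(1)

# Base model: train all layers on abundant low-fidelity samples.
x = np.linspace(-1, 1, 100)[:, None]
y = lo(x)
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)
    e = h @ W2 + b2 - y
    d = (e @ W2.T) * (1.0 - h**2)           # backprop through tanh
    W2 -= 0.05 * h.T @ e / len(x); b2 -= 0.05 * e.mean(0)
    W1 -= 0.05 * x.T @ d / len(x); b1 -= 0.05 * d.mean(0)

# Transfer: freeze the hidden layer ("embedded features") and refit only
# the output head on six scarce high-fidelity samples.
xh = np.linspace(-1, 1, 6)[:, None]
H = np.tanh(xh @ W1 + b1)
W2 = np.linalg.lstsq(H, hi(xh) - b2, rcond=None)[0]

err = np.abs(np.tanh(x @ W1 + b1) @ W2 + b2 - hi(x)).mean()
print(err)
```

The frozen hidden layer carries over what was learned from the cheap data, so the refit only has to account for the low-to-high-fidelity discrepancy, which is the mechanism behind the data savings reported above.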