$$ \def\bm#1{\boldsymbol{#1}} %%%%% NEW MATH DEFINITIONS %%%%% % % Mark sections of captions for referring to divisions of figures % \newcommand{\figleft}{{\em (Left)}} % \newcommand{\figcenter}{{\em (Center)}} % \newcommand{\figright}{{\em (Right)}} % \newcommand{\figtop}{{\em (Top)}} % \newcommand{\figbottom}{{\em (Bottom)}} % \newcommand{\captiona}{{\em (a)}} % \newcommand{\captionb}{{\em (b)}} % \newcommand{\captionc}{{\em (c)}} % \newcommand{\captiond}{{\em (d)}} % Highlight a newly defined term \newcommand{\newterm}[1]{{\bf #1}} % % Figure reference, lower-case. % \def\figref#1{figure~\ref{#1}} % % Figure reference, capital. For start of sentence % \def\Figref#1{Figure~\ref{#1}} % \def\twofigref#1#2{figures \ref{#1} and \ref{#2}} % \def\quadfigref#1#2#3#4{figures \ref{#1}, \ref{#2}, \ref{#3} and \ref{#4}} % % Section reference, lower-case. % \def\secref#1{section~\ref{#1}} % % Section reference, capital. % \def\Secref#1{Section~\ref{#1}} % % Reference to two sections. % \def\twosecrefs#1#2{sections \ref{#1} and \ref{#2}} % % Reference to three sections. % \def\secrefs#1#2#3{sections \ref{#1}, \ref{#2} and \ref{#3}} % % Reference to an equation, lower-case. % \def\eqref#1{equation~\ref{#1}} % % Reference to an equation, upper case % \def\Eqref#1{Equation~\ref{#1}} % % A raw reference to an equation---avoid using if possible % \def\plaineqref#1{\ref{#1}} % % Reference to a chapter, lower-case. % \def\chapref#1{chapter~\ref{#1}} % % Reference to an equation, upper case. % \def\Chapref#1{Chapter~\ref{#1}} % % Reference to a range of chapters % \def\rangechapref#1#2{chapters\ref{#1}--\ref{#2}} % % Reference to an algorithm, lower-case. % \def\algref#1{algorithm~\ref{#1}} % % Reference to an algorithm, upper case. % \def\Algref#1{Algorithm~\ref{#1}} % \def\twoalgref#1#2{algorithms \ref{#1} and \ref{#2}} % \def\Twoalgref#1#2{Algorithms \ref{#1} and \ref{#2}} % % Reference to a part, lower case % \def\partref#1{part~\ref{#1}} % % Reference to a part, upper case % \def\Partref#1{Part~\ref{#1}} % \def\twopartref#1#2{parts \ref{#1} and \ref{#2}} \def\ceil#1{\lceil #1 \rceil} \def\floor#1{\lfloor #1 \rfloor} \def\1{\bm{1}} \newcommand{\train}{\mathcal{D}} \newcommand{\valid}{\mathcal{D_{\mathrm{valid}}}} \newcommand{\test}{\mathcal{D_{\mathrm{test}}}} \def\eps{{\epsilon}} % Random variables \def\reta{{\textnormal{$\eta$}}} \def\ra{{\textnormal{a}}} \def\rb{{\textnormal{b}}} \def\rc{{\textnormal{c}}} \def\rd{{\textnormal{d}}} \def\re{{\textnormal{e}}} \def\rf{{\textnormal{f}}} \def\rg{{\textnormal{g}}} \def\rh{{\textnormal{h}}} \def\ri{{\textnormal{i}}} \def\rj{{\textnormal{j}}} \def\rk{{\textnormal{k}}} \def\rl{{\textnormal{l}}} % rm is already a command, just don't name any random variables m \def\rn{{\textnormal{n}}} \def\ro{{\textnormal{o}}} \def\rp{{\textnormal{p}}} \def\rq{{\textnormal{q}}} \def\rr{{\textnormal{r}}} \def\rs{{\textnormal{s}}} \def\rt{{\textnormal{t}}} \def\ru{{\textnormal{u}}} \def\rv{{\textnormal{v}}} \def\rw{{\textnormal{w}}} \def\rx{{\textnormal{x}}} \def\ry{{\textnormal{y}}} \def\rz{{\textnormal{z}}} % Random vectors \def\rvepsilon{{\mathbf{\epsilon}}} \def\rvtheta{{\mathbf{\theta}}} \def\rva{{\mathbf{a}}} \def\rvb{{\mathbf{b}}} \def\rvc{{\mathbf{c}}} \def\rvd{{\mathbf{d}}} \def\rve{{\mathbf{e}}} \def\rvf{{\mathbf{f}}} \def\rvg{{\mathbf{g}}} \def\rvh{{\mathbf{h}}} \def\rvi{{\mathbf{i}}} \def\rvj{{\mathbf{j}}} \def\rvk{{\mathbf{k}}} \def\rvl{{\mathbf{l}}} \def\rvm{{\mathbf{m}}} \def\rvn{{\mathbf{n}}} \def\rvo{{\mathbf{o}}} \def\rvp{{\mathbf{p}}} \def\rvq{{\mathbf{q}}} 
\def\rvr{{\mathbf{r}}} \def\rvs{{\mathbf{s}}} \def\rvt{{\mathbf{t}}} \def\rvu{{\mathbf{u}}} \def\rvv{{\mathbf{v}}} \def\rvw{{\mathbf{w}}} \def\rvx{{\mathbf{x}}} \def\rvy{{\mathbf{y}}} \def\rvz{{\mathbf{z}}} % Elements of random vectors \def\erva{{\textnormal{a}}} \def\ervb{{\textnormal{b}}} \def\ervc{{\textnormal{c}}} \def\ervd{{\textnormal{d}}} \def\erve{{\textnormal{e}}} \def\ervf{{\textnormal{f}}} \def\ervg{{\textnormal{g}}} \def\ervh{{\textnormal{h}}} \def\ervi{{\textnormal{i}}} \def\ervj{{\textnormal{j}}} \def\ervk{{\textnormal{k}}} \def\ervl{{\textnormal{l}}} \def\ervm{{\textnormal{m}}} \def\ervn{{\textnormal{n}}} \def\ervo{{\textnormal{o}}} \def\ervp{{\textnormal{p}}} \def\ervq{{\textnormal{q}}} \def\ervr{{\textnormal{r}}} \def\ervs{{\textnormal{s}}} \def\ervt{{\textnormal{t}}} \def\ervu{{\textnormal{u}}} \def\ervv{{\textnormal{v}}} \def\ervw{{\textnormal{w}}} \def\ervx{{\textnormal{x}}} \def\ervy{{\textnormal{y}}} \def\ervz{{\textnormal{z}}} % Random matrices \def\rmA{{\mathbf{A}}} \def\rmB{{\mathbf{B}}} \def\rmC{{\mathbf{C}}} \def\rmD{{\mathbf{D}}} \def\rmE{{\mathbf{E}}} \def\rmF{{\mathbf{F}}} \def\rmG{{\mathbf{G}}} \def\rmH{{\mathbf{H}}} \def\rmI{{\mathbf{I}}} \def\rmJ{{\mathbf{J}}} \def\rmK{{\mathbf{K}}} \def\rmL{{\mathbf{L}}} \def\rmM{{\mathbf{M}}} \def\rmN{{\mathbf{N}}} \def\rmO{{\mathbf{O}}} \def\rmP{{\mathbf{P}}} \def\rmQ{{\mathbf{Q}}} \def\rmR{{\mathbf{R}}} \def\rmS{{\mathbf{S}}} \def\rmT{{\mathbf{T}}} \def\rmU{{\mathbf{U}}} \def\rmV{{\mathbf{V}}} \def\rmW{{\mathbf{W}}} \def\rmX{{\mathbf{X}}} \def\rmY{{\mathbf{Y}}} \def\rmZ{{\mathbf{Z}}} % Elements of random matrices \def\ermA{{\textnormal{A}}} \def\ermB{{\textnormal{B}}} \def\ermC{{\textnormal{C}}} \def\ermD{{\textnormal{D}}} \def\ermE{{\textnormal{E}}} \def\ermF{{\textnormal{F}}} \def\ermG{{\textnormal{G}}} \def\ermH{{\textnormal{H}}} \def\ermI{{\textnormal{I}}} \def\ermJ{{\textnormal{J}}} \def\ermK{{\textnormal{K}}} \def\ermL{{\textnormal{L}}} \def\ermM{{\textnormal{M}}} \def\ermN{{\textnormal{N}}} \def\ermO{{\textnormal{O}}} \def\ermP{{\textnormal{P}}} \def\ermQ{{\textnormal{Q}}} \def\ermR{{\textnormal{R}}} \def\ermS{{\textnormal{S}}} \def\ermT{{\textnormal{T}}} \def\ermU{{\textnormal{U}}} \def\ermV{{\textnormal{V}}} \def\ermW{{\textnormal{W}}} \def\ermX{{\textnormal{X}}} \def\ermY{{\textnormal{Y}}} \def\ermZ{{\textnormal{Z}}} % Vectors \def\vzero{{\bm{0}}} \def\vone{{\bm{1}}} \def\vmu{{\bm{\mu}}} \def\vtheta{{\bm{\theta}}} \def\va{{\bm{a}}} \def\vb{{\bm{b}}} \def\vc{{\bm{c}}} \def\vd{{\bm{d}}} \def\ve{{\bm{e}}} \def\vf{{\bm{f}}} \def\vg{{\bm{g}}} \def\vh{{\bm{h}}} \def\vi{{\bm{i}}} \def\vj{{\bm{j}}} \def\vk{{\bm{k}}} \def\vl{{\bm{l}}} \def\vm{{\bm{m}}} \def\vn{{\bm{n}}} \def\vo{{\bm{o}}} \def\vp{{\bm{p}}} \def\vq{{\bm{q}}} \def\vr{{\bm{r}}} \def\vs{{\bm{s}}} \def\vt{{\bm{t}}} \def\vu{{\bm{u}}} \def\vv{{\bm{v}}} \def\vw{{\bm{w}}} \def\vx{{\bm{x}}} \def\vy{{\bm{y}}} \def\vz{{\bm{z}}} % Elements of vectors \def\evalpha{{\alpha}} \def\evbeta{{\beta}} \def\evepsilon{{\epsilon}} \def\evlambda{{\lambda}} \def\evomega{{\omega}} \def\evmu{{\mu}} \def\evpsi{{\psi}} \def\evsigma{{\sigma}} \def\evtheta{{\theta}} \def\eva{{a}} \def\evb{{b}} \def\evc{{c}} \def\evd{{d}} \def\eve{{e}} \def\evf{{f}} \def\evg{{g}} \def\evh{{h}} \def\evi{{i}} \def\evj{{j}} \def\evk{{k}} \def\evl{{l}} \def\evm{{m}} \def\evn{{n}} \def\evo{{o}} \def\evp{{p}} \def\evq{{q}} \def\evr{{r}} \def\evs{{s}} \def\evt{{t}} \def\evu{{u}} \def\evv{{v}} \def\evw{{w}} \def\evx{{x}} \def\evy{{y}} \def\evz{{z}} % Matrix \def\mA{{\bm{A}}} \def\mB{{\bm{B}}} \def\mC{{\bm{C}}} 
\def\mD{{\bm{D}}} \def\mE{{\bm{E}}} \def\mF{{\bm{F}}} \def\mG{{\bm{G}}} \def\mH{{\bm{H}}} \def\mI{{\bm{I}}} \def\mJ{{\bm{J}}} \def\mK{{\bm{K}}} \def\mL{{\bm{L}}} \def\mM{{\bm{M}}} \def\mN{{\bm{N}}} \def\mO{{\bm{O}}} \def\mP{{\bm{P}}} \def\mQ{{\bm{Q}}} \def\mR{{\bm{R}}} \def\mS{{\bm{S}}} \def\mT{{\bm{T}}} \def\mU{{\bm{U}}} \def\mV{{\bm{V}}} \def\mW{{\bm{W}}} \def\mX{{\bm{X}}} \def\mY{{\bm{Y}}} \def\mZ{{\bm{Z}}} \def\mBeta{{\bm{\beta}}} \def\mPhi{{\bm{\Phi}}} \def\mLambda{{\bm{\Lambda}}} \def\mSigma{{\bm{\Sigma}}} % Tensor \newcommand{\tens}[1]{\mathsf{#1}} \def\tA{{\tens{A}}} \def\tB{{\tens{B}}} \def\tC{{\tens{C}}} \def\tD{{\tens{D}}} \def\tE{{\tens{E}}} \def\tF{{\tens{F}}} \def\tG{{\tens{G}}} \def\tH{{\tens{H}}} \def\tI{{\tens{I}}} \def\tJ{{\tens{J}}} \def\tK{{\tens{K}}} \def\tL{{\tens{L}}} \def\tM{{\tens{M}}} \def\tN{{\tens{N}}} \def\tO{{\tens{O}}} \def\tP{{\tens{P}}} \def\tQ{{\tens{Q}}} \def\tR{{\tens{R}}} \def\tS{{\tens{S}}} \def\tT{{\tens{T}}} \def\tU{{\tens{U}}} \def\tV{{\tens{V}}} \def\tW{{\tens{W}}} \def\tX{{\tens{X}}} \def\tY{{\tens{Y}}} \def\tZ{{\tens{Z}}} % Graph \def\gA{{\mathcal{A}}} \def\gB{{\mathcal{B}}} \def\gC{{\mathcal{C}}} \def\gD{{\mathcal{D}}} \def\gE{{\mathcal{E}}} \def\gF{{\mathcal{F}}} \def\gG{{\mathcal{G}}} \def\gH{{\mathcal{H}}} \def\gI{{\mathcal{I}}} \def\gJ{{\mathcal{J}}} \def\gK{{\mathcal{K}}} \def\gL{{\mathcal{L}}} \def\gM{{\mathcal{M}}} \def\gN{{\mathcal{N}}} \def\gO{{\mathcal{O}}} \def\gP{{\mathcal{P}}} \def\gQ{{\mathcal{Q}}} \def\gR{{\mathcal{R}}} \def\gS{{\mathcal{S}}} \def\gT{{\mathcal{T}}} \def\gU{{\mathcal{U}}} \def\gV{{\mathcal{V}}} \def\gW{{\mathcal{W}}} \def\gX{{\mathcal{X}}} \def\gY{{\mathcal{Y}}} \def\gZ{{\mathcal{Z}}} % Sets \def\sA{{\mathbb{A}}} \def\sB{{\mathbb{B}}} \def\sC{{\mathbb{C}}} \def\sD{{\mathbb{D}}} % Don't use a set called E, because this would be the same as our symbol % for expectation. 
\def\sF{{\mathbb{F}}} \def\sG{{\mathbb{G}}} \def\sH{{\mathbb{H}}} \def\sI{{\mathbb{I}}} \def\sJ{{\mathbb{J}}} \def\sK{{\mathbb{K}}} \def\sL{{\mathbb{L}}} \def\sM{{\mathbb{M}}} \def\sN{{\mathbb{N}}} \def\sO{{\mathbb{O}}} \def\sP{{\mathbb{P}}} \def\sQ{{\mathbb{Q}}} \def\sR{{\mathbb{R}}} \def\sS{{\mathbb{S}}} \def\sT{{\mathbb{T}}} \def\sU{{\mathbb{U}}} \def\sV{{\mathbb{V}}} \def\sW{{\mathbb{W}}} \def\sX{{\mathbb{X}}} \def\sY{{\mathbb{Y}}} \def\sZ{{\mathbb{Z}}} % Entries of a matrix \def\emLambda{{\Lambda}} \def\emA{{A}} \def\emB{{B}} \def\emC{{C}} \def\emD{{D}} \def\emE{{E}} \def\emF{{F}} \def\emG{{G}} \def\emH{{H}} \def\emI{{I}} \def\emJ{{J}} \def\emK{{K}} \def\emL{{L}} \def\emM{{M}} \def\emN{{N}} \def\emO{{O}} \def\emP{{P}} \def\emQ{{Q}} \def\emR{{R}} \def\emS{{S}} \def\emT{{T}} \def\emU{{U}} \def\emV{{V}} \def\emW{{W}} \def\emX{{X}} \def\emY{{Y}} \def\emZ{{Z}} \def\emSigma{{\Sigma}} % entries of a tensor % Same font as tensor, without \bm wrapper \newcommand{\etens}[1]{\mathsfit{#1}} \def\etLambda{{\etens{\Lambda}}} \def\etA{{\etens{A}}} \def\etB{{\etens{B}}} \def\etC{{\etens{C}}} \def\etD{{\etens{D}}} \def\etE{{\etens{E}}} \def\etF{{\etens{F}}} \def\etG{{\etens{G}}} \def\etH{{\etens{H}}} \def\etI{{\etens{I}}} \def\etJ{{\etens{J}}} \def\etK{{\etens{K}}} \def\etL{{\etens{L}}} \def\etM{{\etens{M}}} \def\etN{{\etens{N}}} \def\etO{{\etens{O}}} \def\etP{{\etens{P}}} \def\etQ{{\etens{Q}}} \def\etR{{\etens{R}}} \def\etS{{\etens{S}}} \def\etT{{\etens{T}}} \def\etU{{\etens{U}}} \def\etV{{\etens{V}}} \def\etW{{\etens{W}}} \def\etX{{\etens{X}}} \def\etY{{\etens{Y}}} \def\etZ{{\etens{Z}}} % The true underlying data generating distribution \newcommand{\pdata}{p_{\rm{data}}} % The empirical distribution defined by the training set \newcommand{\ptrain}{\hat{p}_{\rm{data}}} \newcommand{\Ptrain}{\hat{P}_{\rm{data}}} % The model distribution \newcommand{\pmodel}{p_{\rm{model}}} \newcommand{\Pmodel}{P_{\rm{model}}} \newcommand{\ptildemodel}{\tilde{p}_{\rm{model}}} % Stochastic autoencoder distributions \newcommand{\pencode}{p_{\rm{encoder}}} \newcommand{\pdecode}{p_{\rm{decoder}}} \newcommand{\precons}{p_{\rm{reconstruct}}} \newcommand{\laplace}{\mathrm{Laplace}} % Laplace distribution \newcommand{\E}{\mathbb{E}} \newcommand{\Ls}{\mathcal{L}} \newcommand{\R}{\mathbb{R}} \newcommand{\emp}{\tilde{p}} \newcommand{\lr}{\alpha} \newcommand{\reg}{\lambda} \newcommand{\rect}{\mathrm{rectifier}} \newcommand{\softmax}{\mathrm{softmax}} \newcommand{\sigmoid}{\sigma} \newcommand{\softplus}{\zeta} \newcommand{\KL}{D_{\mathrm{KL}}} \newcommand{\Var}{\mathrm{Var}} \newcommand{\standarderror}{\mathrm{SE}} \newcommand{\Cov}{\mathrm{Cov}} % Wolfram Mathworld says $L^2$ is for function spaces and $\ell^2$ is for vectors % But then they seem to use $L^2$ for vectors throughout the site, and so does % wikipedia. \newcommand{\normlzero}{L^0} \newcommand{\normlone}{L^1} \newcommand{\normltwo}{L^2} \newcommand{\normlp}{L^p} \newcommand{\normmax}{L^\infty} \newcommand{\parents}{Pa} % See usage in notation.tex. Chosen to match Daphne's book. \DeclareMathOperator*{\argmax}{arg\,max} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\Tr}{Tr} \let\ab\allowbreak $$
$$ \def\vtau{{\bm{\tau}}} \def\norm#1{\left\lVert #1 \right\rVert} $$
$$ % MathJax Tooltip support % Ref: https://docs.mathjax.org/en/latest/input/tex/extensions/html.html \newcommand{\tip}[3][blue]{\class{mathjax-tooltip-symbol mathjax-tooltip-color-#1 mathjax-tooltip-tip-#2}{#3}} $$

Neural Radiance Field (NeRF)

Goal: Represent a continuous 3D scene as a fully-connected neural network and synthesize novel views of the scene.

Contribution: Presents the first continuous neural scene representation that is able to render high-resolution photorealistic novel views of real objects and scenes from RGB images captured in natural settings.

Concept

$$ \def\F{ {\tip[blue]{F}{F_\Theta} } } $$
5D neural radiance field $F_\Theta$ parameterized by $\Theta$
$$ \def\x{ {\tip[blue]{x}{\vx} } } $$
location $\vx=[x,y,z]\in\R^3$
$$ \def\xyz{ {\tip[blue]{x}{[x,y,z]} } } $$
$$ \def\d{ {\tip[blue]{d}{\vd} } } $$
viewing direction as a unit vector $\vd\in\R^3$,
can be represented as angles $[\theta, \phi]\in\R^2$
$$ \def\thetaphi{ {\tip[blue]{thetaphi}{[\theta, \phi]} } } $$
viewing direction as angles $[\theta, \phi]\in\R^2$,
can be represented as a unit vector $\vd\in\R^3$
$$ \def\c{ {\tip[blue]{c}{\vc} } } $$
emitted color $\vc=[r,g,b]\in\R^3$,
can be written as a function $\vc(\vx,\vd)$
$$ \def\rgb{ {\tip[blue]{c}{[r,g,b]} } } $$
$$ \def\density{ {\tip[blue]{density}{\sigma} } } $$
volume density $\sigma\in\R$,
can be written as a function $\sigma(\vx)$,
can be reparameterized as $\sigma(t)=\sigma(\vr(t))$,
$\sigma(t)dt=\Pr(\text{hit at }t)$
$$ \def\r{ {\tip[blue]{r}{\vr} } } $$
camera ray $\vr(t)=\vo+t\vd$
$$ \def\o{ {\tip[blue]{o}{\vo} } } $$
origin $\vo\in\R^3$ of $\vr(t)$
$$ \def\t{ {\tip[blue]{t}{t} } } $$
ray parameter $t\in[t_\mathrm{near},t_\mathrm{far}]$ along $\vr(t)$
$$ \def\tnear{ {\tip[blue]{tnear}{t_\mathrm{near}} } } $$
near bound $t_\mathrm{near}$ of $\vr(t)$
$$ \def\tfar{ {\tip[blue]{tfar}{t_\mathrm{far}} } } $$
far bound $t_\mathrm{far}$ of $\vr(t)$
$$ \def\C{ {\tip[blue]{C}{C} } } $$
expected color $C(\vr)$ of a ray
$$ \def\T{ {\tip[blue]{T}{T} } } $$
accumulated transmittance $T(t)$ along $\vr(t_\mathrm{near} \rightarrow t),$
$T(t)=T(t_\mathrm{near} \rightarrow t)=\Pr(\text{no hit before }t)$
$$ \def\Ccoarse{ {\tip[blue]{Ccoarse}{\hat{C}_c} } } $$
expected color $\hat{C}_c(\cdot\,;\,\Theta_c)$ of the coarse network $F_{\Theta_c}$
$$ \def\Cfine{ {\tip[blue]{Cfine}{\hat{C}_f} } } $$
expected color $\hat{C}_f(\cdot\,;\,\Theta_f)$ of the fine network $F_{\Theta_f}$
Synthesize novel views from a set of input images, from Figure 1 of Mildenhall et al., 2020.

An overview of the NeRF scene representation and differentiable rendering procedure, from Figure 2 of Mildenhall et al., 2020.

1. Parameterizing the NeRF

The MLP network \(\F:(\xyz, \thetaphi)\to(\rgb,\density)\) is a 5D neural radiance field that represents a continuous scene as the volume density and directional emitted radiance at any point in space.

  • Input a 3D location \(\x=\xyz\), and a 2D viewing direction \(\thetaphi\), which is converted to a 3D Cartesian unit vector \(\d\) during rendering.
  • Output an emitted color \(\c=\rgb\), and a volume density \(\density\).
  • \(\density\) should not depend on the viewing direction \(\d\), so as to achieve multiview consistency.

A differentiable rendering algorithm is needed to synthesize a novel view (an RGB image) from \(\F\) for training/inference.
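For concreteness, below is a minimal PyTorch sketch of such an MLP, assuming the architecture reported in the paper's appendix (8 ReLU layers of width 256 with a skip connection that re-injects the encoded location, a view-independent density head, and a view-dependent color head). The input widths (60 for \(\x\), 24 for \(\d\)) assume the positional encoding of section 3; all names are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class NeRFMLP(nn.Module):
    """Sketch of the NeRF MLP: (encoded x, encoded d) -> (rgb, sigma)."""

    def __init__(self, x_dim=60, d_dim=24, width=256):
        super().__init__()
        self.skip = 4  # re-concatenate the encoded location before the 5th layer
        dims = [x_dim] + [width] * 8
        self.layers = nn.ModuleList([
            nn.Linear(dims[i] + (x_dim if i == self.skip else 0), dims[i + 1])
            for i in range(8)
        ])
        self.sigma_head = nn.Linear(width, 1)   # density depends on x only
        self.feature = nn.Linear(width, width)
        self.rgb_head = nn.Sequential(          # color depends on x and d
            nn.Linear(width + d_dim, width // 2), nn.ReLU(),
            nn.Linear(width // 2, 3), nn.Sigmoid(),
        )

    def forward(self, x, d):
        h = x
        for i, layer in enumerate(self.layers):
            if i == self.skip:
                h = torch.cat([h, x], dim=-1)
            h = torch.relu(layer(h))
        sigma = torch.relu(self.sigma_head(h))  # keep density nonnegative
        rgb = self.rgb_head(torch.cat([self.feature(h), d], dim=-1))
        return rgb, sigma
```

Note that the viewing direction only enters after the density head, which is what enforces the multiview-consistency constraint above.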

2. Volume Rendering

Perform (differentiable) volume rendering based on \(\F\) to render novel views as 2D projections.

  • Surface Rendering vs. Volume Rendering
    • Surface Rendering: Loop through geometric surfaces and check for ray hits.
    • Volume Rendering: Loop through ray points and query geometry.
  • The volume density \(\density(\x)\) can be interpreted as the differential probability of a ray terminating at an infinitesimal particle at location \(\x\).
    • Probabilistic modeling enables the representation of translucent or transparent objects.
  • Given a camera ray \(\r(\t)=\o+\t\d\) (starting from its origin \(\o\)) with near and far bounds \(\tnear\) and \(\tfar\), its expected color \(\C(\r)\) is: $$ \C(\r) = \int_{\tnear}^{\tfar}\T(\t)\density(\t)\c(\t,\d)dt, $$ where \(\T(\t)=\exp\left(-\int_{\tnear}^{\t}\density(s)ds\right)\), \(\density(\t)=\density(\r(\t))\), and \(\c(\t,\d)=\c(\r(\t),\d)\).

    Volumetric formulation for NeRF, from p.98 of UC Berkeley CS194-26, Neural Radiance Fields 1.

    • \(\Pr(\text{no hit before }\t) = \T(\t)\) denotes the accumulated transmittance along the ray from \(\tnear\) to \(\t\), i.e., the probability that the ray travels from \(\tnear\) to \(\t\) without hitting any other particle.
    • \(\Pr(\text{hit at }\t) = \density(\t)dt\) denotes the probability that the ray hits a particle in a small interval around \(\t\).
    • Since \(\Pr(\text{first hit at }\t) = \Pr(\text{no hit before }\t)\times\Pr(\text{hit at }\t) = \T(\t)\density(\t)dt\), we can derive the expected color \(\C(\r)\) by integrating the probabilities of hitting each particle times its emitted color.
    • Using the fact that $$ \Pr(\text{no hit before }\t+dt) = \Pr(\text{no hit before }\t)\times\Pr(\text{no hit at }\t) $$ $$ \Rightarrow \T(\t+dt) = \T(\t)\times(1-\density(\t)dt), $$ we can solve for \(\T(\t)\) in a few steps.

      Derivation Sketch

      The derivation of \(\T(\t)\) is based on the following steps: [1, 2]

      1. Rearrange the identity above into a differential equation: \(\T'(\t)=-\T(\t)\cdot\density(\t)\).
      2. Separate variables: \(d\T/\T=-\density(\t)dt\).
      3. Integrate both sides from \(\tnear\) to \(\t\), using \(\T(\tnear)=1\): \(\ln\T(\t)=-\int_{\tnear}^{\t}\density(s)ds\).
      4. Exponentiate: \(\T(\t)=\exp\left(-\int_{\tnear}^{\t}\density(s)ds\right)\).
    • However, the integral \(\C(\r)\) is intractable in general and must be estimated numerically.
  • Estimate \(\C(\r)\) with quadrature by partitioning \([\tnear, \tfar]\) into \(N\) segments with endpoints \(\{\t_1, \t_2, \dots, \t_{N+1}\}\) and lengths \(\delta_i=\t_{i+1}-\t_i\).

    Approximating the nested integral, from p.114 of UC Berkeley CS194-26, Neural Radiance Fields 1.

    • Assume each segment has constant volume density \(\density_i\) and color \(\c_i\).
    • Under these assumptions, derive \(\T(\t)\) for \(\t\in[\t_i,\t_{i+1}]\):

      \[\begin{aligned} \T(\t)&=\T(\t_1 \rightarrow \t_i) \cdot \T(\t_i \rightarrow \t)\\ &=\exp\left(-\int_{\t_1}^{\t_i}\density(s) ds\right) \exp\left(-\int_{\t_i}^{\t}\density_i ds\right)\\ &=\T_i \exp(-\density_i(\t - \t_i)),\\ \end{aligned}\]

      where \(\displaystyle \T_i=\exp\left(-\sum_{j=1}^{i-1}\density_j\delta_j\right)\).

      • \(\Pr(\text{no hit within }[\t_i, \t]) = \exp(-\density_i(\t - \t_i))\) is the transmittance within \([\t_i, \t]\).

        How much light is blocked partway through the current segment? from p.120-122 of UC Berkeley CS194-26, Neural Radiance Fields 1.

      • Plug the derived \(\T(\t)\) into the original equation for \(\C(\r)\):

        \[\begin{aligned} \C(\r) &= \int_{\tnear}^{\tfar}\T(\t)\density(\t)\c(\t,\d)dt\\ &\approx \sum_{i=1}^{N}\T_i(1-\exp(-\density_i\delta_i))\c_i =: \hat{\C}(\r) \end{aligned}\]
        Derivation Sketch

        The derivation of \(\hat{\C}(\r)\) is based on the following steps: [1, 2]

        1. Piecewise approximation: \(\displaystyle \C(\r) \approx \sum_{i=1}^{N}\int_{\t_i}^{\t_{i+1}}\T(\t)\density_i\c_i dt\).
        2. Substitute \(\T(\t)=\T_i\exp(-\density_i(\t-\t_i))\): \(\displaystyle \C(\r) \approx \sum_{i=1}^{N}\T_i\density_i\c_i\int_{\t_i}^{\t_{i+1}}\exp(-\density_i(\t-\t_i))dt\).
        3. Integrate: \(\displaystyle \int_{\t_i}^{\t_{i+1}}\exp(-\density_i(\t-\t_i))dt=\frac{1-\exp(-\density_i\delta_i)}{\density_i}\).
        4. Cancel \(\density_i\) to obtain \(\displaystyle \hat{\C}(\r)=\sum_{i=1}^{N}\T_i(1-\exp(-\density_i\delta_i))\c_i\).
    • However, always querying the same fixed endpoints would restrict the network to a discrete set of locations, which isn't suitable for learning a continuous scene representation.
  • Apply stratified sampling by partitioning \([\tnear, \tfar]\) into \(N\) bins and draw one sample uniformly at random from each bin: $$ \t_i\sim\gU\left[\tnear+\frac{i-1}{N}(\tfar-\tnear), \tnear+\frac{i}{N}(\tfar-\tnear)\right], $$ and use the sampled endpoints to estimate \(\C(\r)\).
  • Connection to alpha compositing by defining alpha values \(\alpha_i=1-\exp(-\density_i\delta_i)\) (a runnable sketch combining stratified sampling and this estimate follows this list):

    \[ \hat{\C}(\r)=\sum_{i=1}^{N}\T_i\alpha_i\c_i \]
    Volume rendering integral estimate, from p.129 of UC Berkeley CS194-26, Neural Radiance Fields 1.

    • \(\Pr(\text{no hit within }[\t_1, \t_i]) = \T_i\) is the transmittance within \([\t_1, \t_i]\).
    • \(\Pr(\text{hit within }[\t_i, \t_{i+1}]) = \alpha_i\) is the opacity of segment \(i\).
    • \(\Pr(\text{first hit within }[\t_i, \t_{i+1}]) = \Pr(\text{no hit within }[\t_1, \t_i])\times\Pr(\text{hit within }[\t_i, \t_{i+1}]) = \T_i\alpha_i\).
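Putting the pieces together, the sketch below (plain NumPy, illustrative only) draws stratified samples, computes \(\alpha_i\), \(\T_i\), and the weights \(w_i=\T_i\alpha_i\), and returns the estimate \(\hat{\C}(\r)\); `sigma_fn` and `color_fn` are hypothetical stand-ins for querying \(F_\Theta\) at (encoded) inputs.

```python
import numpy as np

def render_ray(sigma_fn, color_fn, o, d, t_near, t_far, n_samples, rng):
    """Quadrature estimate of C(r) with stratified sampling (sketch).

    sigma_fn(x) -> float and color_fn(x, d) -> rgb are hypothetical
    stand-ins for querying the radiance field F_Theta.
    """
    # Stratified sampling: one uniform draw per bin of [t_near, t_far].
    edges = np.linspace(t_near, t_far, n_samples + 1)
    t = edges[:-1] + rng.uniform(size=n_samples) * (edges[1:] - edges[:-1])
    delta = np.diff(t, append=t_far)           # segment lengths delta_i

    x = o[None, :] + t[:, None] * d[None, :]   # points along r(t) = o + t d
    sigma = np.array([sigma_fn(p) for p in x]) # density at each sample
    rgb = np.array([color_fn(p, d) for p in x])# color at each sample

    alpha = 1.0 - np.exp(-sigma * delta)       # alpha_i = 1 - exp(-sigma_i delta_i)
    # T_i = exp(-sum_{j<i} sigma_j delta_j) = prod_{j<i} (1 - alpha_j)
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    w = T * alpha                              # w_i = T_i alpha_i
    return (w[:, None] * rgb).sum(axis=0), w   # C_hat(r) and the weights
```

The returned weights \(w_i\) are exactly what hierarchical volume sampling (section 3) normalizes into a PDF along the ray.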

3. Implementation Details

  • Positional encoding. Maps the inputs to a higher-dimensional space using high-frequency functions to represent high-frequency scene content, since neural networks are biased towards learning lower-frequency functions [3]. See the sketch after this list.
  • Hierarchical volume sampling. Estimate the volume PDF along a ray with an additional coarse network \(F_{\Theta_c}\) and sample from it to allocate samples proportionally to their expected effect on the final rendering. This improves training efficiency by avoiding repeatedly sampling free space and occluded regions that do not contribute to the rendered image/pixel.

    • Two networks, a coarse network \(F_{\Theta_c}\) and a fine network \(F_{\Theta_f}\), are jointly trained.
      • \(F_{\Theta_c}\) enables sampling from regions which are expected to contain visible content.
      • \(F_{\Theta_f}\) is the main network that renders the final image during inference.

    For each ray:

    • Sample a first set of \(N_c\) locations with stratified sampling, and evaluate them on the coarse network \(F_{\Theta_c}\) to compute \((\T_i, \alpha_i, \c_i)\) and the coarsely rendered pixels.
      • The expected color of the coarse network can be computed by \(\Ccoarse(\r;\Theta_c) = \sum_{i=1}^{N_c}\T_i\alpha_i\c_i = \sum_{i=1}^{N_c}w_i\c_i\), where \(w_i=\T_i\alpha_i\).
    • \(\Pr(\text{hit within }[\t_i, \t_{i+1}]) \approx \hat{w}_i = w_i / \sum_{j=1}^{N_c} w_j\) is a piecewise-constant approximation of the volume PDF along the ray, obtained by normalizing the weights \(w_i\).
      • Normalization is required because the raw weights only sum to \(\sum_i w_i = 1-\T_{N_c+1} \le 1\) (the ray may exit without a hit), so dividing by their sum yields a valid PDF.
    • Sample a second set of \(N_f\) locations with inverse transform sampling according to the estimated volume PDF, and evaluate all \(N_c+N_f\) locations on the fine network \(F_{\Theta_f}\) to compute the (finely) rendered pixels based on its expected color \(\Cfine(\r;\Theta_f)\). See the sketch below.
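Minimal NumPy sketches of these two components (names are illustrative): `positional_encoding` implements the paper's \(\gamma\), and `sample_pdf` implements the inverse-CDF step, assuming strictly positive weights so the CDF is strictly increasing.

```python
import numpy as np

def positional_encoding(p, L):
    """gamma(p): map each coordinate to (sin(2^0 pi p), cos(2^0 pi p), ...,
    sin(2^{L-1} pi p), cos(2^{L-1} pi p)); L=10 for x and L=4 for d in the paper."""
    freqs = (2.0 ** np.arange(L)) * np.pi            # 2^k * pi, k = 0..L-1
    angles = p[..., None] * freqs                    # shape (..., dim, L)
    enc = np.stack([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*p.shape[:-1], -1)            # shape (..., dim * 2L)

def sample_pdf(bins, w, n_fine, rng):
    """Inverse transform sampling from the piecewise-constant PDF given by
    weights w over the N bins with edges `bins` (length N + 1)."""
    w_hat = w / w.sum()                              # hat{w}_i: Pr(hit within bin i)
    cdf = np.concatenate([[0.0], np.cumsum(w_hat)])  # length N + 1, ends at 1
    u = rng.uniform(size=n_fine)                     # uniform draws in [0, 1)
    # Invert the CDF by piecewise-linear interpolation over the bin edges
    # (assumes w > 0 everywhere so the CDF is strictly increasing).
    return np.interp(u, cdf, bins)
```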

4. Training

Optimize a separate \(\Theta\) for each scene, given a dataset of captured RGB images of the scene together with the corresponding camera poses, intrinsic parameters, and scene bounds.

  • The camera poses, intrinsics, and bounds for real data are estimated with the COLMAP structure-from-motion package.
  • At each iteration, sample a batch of camera rays from the set of all pixels in the dataset.
  • Use hierarchical sampling to query \(N_c\) samples from the coarse network and \(N_c + N_f\) samples from the fine network for each ray.
  • Perform (differentiable) volume rendering to render the color of each ray from both sets of samples.
  • Optimize according to the total squared error between the rendered and the ground truth pixel colors for both the coarse and fine renderings. $$ \gL(\Theta_c,\Theta_f)=\sum_{\r\in\gR}\left[\norm{\Ccoarse(\r;\Theta_c)-\C(\r)}_2^2+\norm{\Cfine(\r;\Theta_f)-\C(\r)}_2^2\right] $$
  • After training, discard the coarse rendering \(\Ccoarse\); the coarse network still guides sample placement, while the fine network's \(\Cfine\) produces the final renderings at inference. A minimal sketch of one training step follows.
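Below is a minimal PyTorch sketch of one training step under this loss, assuming a hypothetical differentiable `render` function as in section 2 (the coarse-guided placement of fine-network samples from section 3 is omitted for brevity):

```python
import torch

def training_step(model_coarse, model_fine, render, rays, c_gt, optimizer):
    """One gradient step on a batch of rays R (sketch; `render` is assumed
    to be a differentiable volume renderer returning per-ray colors)."""
    c_coarse = render(model_coarse, rays)  # \hat{C}_c(r; Theta_c)
    c_fine = render(model_fine, rays)      # \hat{C}_f(r; Theta_f)
    # Total squared error against ground-truth pixel colors, coarse + fine.
    loss = ((c_coarse - c_gt) ** 2).sum() + ((c_fine - c_gt) ** 2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Both terms are needed even though only the fine rendering is used at inference, since the coarse network must learn a good sampling distribution.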

Experiments

Benchmark

Outperform all prior methods by a wide margin.

Analysis

A visualization of view-dependent emitted radiance, from Figure 3 of Mildenhall et al., 2020.

Ablation study, from Figure 4 of Mildenhall et al., 2020.

Official Resources

Community Resources


  1. The high-level overview of solving \(\T(\t)\) and \(\C(\r)\) can be found in p.105-109 and p.123-126 of Neural Radiance Fields 1 from UC Berkeley CS194-26/294-26: Intro to Computer Vision and Computational Photography.

  2. The formal derivation of \(\T(\t)\) and \(\C(\r)\) can be found in Tagliasacchi et al., Volume Rendering Digest (for NeRF), 2022.

  3. Rahaman et al., On the Spectral Bias of Neural Networks, ICML 2019.
