
Contribution Weight in ReSTIR

We denote $M$ as the number of spatial samples and $\hat{p}$ as the target function. Assume we only do initial resampling at frame $0$, and that we use the constant $1/M$ as the MIS weight when combining neighboring reservoirs.


At frame $0$, we first sample initial candidates and obtain resampling weights $W_i$ for each pixel $i$.

Then we do spatial reuse and get the unbiased contribution weight for each pixel $i$:

$$W_i^0 = \frac{1}{\hat{p}(Y_i)} \sum_{j \in S(i)} \frac{1}{M}\, \hat{p}(X_j)\, W_j,$$

where $X_j$ and $W_j$ are pixel $j$'s initial sample and weight, and $Y_i$ is the sample that pixel $i$'s reservoir finally selects.

Here, $S(i)$ stands for the spatial samples (the neighbor pixels) selected by pixel $i$.


At frame $1$, we do spatial reuse again and we get $W_i^1$ for each pixel $i$.

We expand the frame-$0$ weights inside and get a double sum over $j \in S^1(i)$ and $l \in S^0(j)$, where $S^t(\cdot)$ denotes the neighbors selected at frame $t$.


At frame $2$, we do spatial reuse again and we get $W_i^2$.
Similarly, we expand and we get a triple sum over $j \in S^2(i)$, $l \in S^1(j)$, and $m \in S^0(l)$.

We can calculate the contribution of each initial sample by simply letting the sum over the initial pixels be the outermost:

Here, we can already see that the same initial sample may occur multiple times in that term (i.e., there may be multiple intermediate pairs $(j, l)$ reaching the same initial sample).

The above analysis is based on some given, fixed neighbor sets $S^t(i)$. When the neighbor selections become random, we need to calculate the expectation of the contribution:

Here, $p_{j \to i}$ represents the probability of pixel $j$'s sample spreading to pixel $i$. For example, if we use an $s \times s$ square neighborhood and $M$ spatial samples drawn uniformly without replacement, $p_{j \to i} = M / (s^2 - 1)$ for each neighbor $j$. And the expectation term can be calculated by enumerating all reuse paths and evaluating their probability of being the final chosen sample.
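As a quick sanity check on that example probability, here is a small sketch (the square-neighborhood convention and the function name are our own assumptions, not from the original text): a Monte Carlo estimate of $p_{j \to i}$ should match $M/(s^2 - 1)$.

```python
import random

def spread_probability_mc(s, M, trials=200_000, seed=1):
    """Estimate p_{j->i}: the chance that one fixed neighbor j is among
    the M spatial samples pixel i draws, uniformly without replacement,
    from the s*s - 1 non-center pixels of an s x s square around i."""
    rng = random.Random(seed)
    half = s // 2
    neighbors = [(dx, dy) for dx in range(-half, half + 1)
                          for dy in range(-half, half + 1)
                          if (dx, dy) != (0, 0)]
    target = neighbors[0]  # any fixed neighbor j
    hits = sum(target in rng.sample(neighbors, M) for _ in range(trials))
    return hits / trials

# Analytic value: M / (s^2 - 1); for s = 3, M = 2 that is 2/8 = 0.25.
```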


At frame $k$, the expected contribution from pixel $j$ to pixel $i$ becomes a sum over pixel trajectories.

Here, $\tau = (j = q_0, q_1, \dots, q_k = i)$ represents a pixel trajectory from pixel $j$ to pixel $i$, and $X_{q_t}$ represents the resampled sample at each pixel along $\tau$. Our reuse pattern basically controls the trajectory distribution and determines the expectation term implicitly.
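To make the trajectory sum concrete, here is a brute-force sketch on a toy 1D ring of pixels with a hypothetical left/right reuse pattern (the setup and names are ours, purely for illustration). Since each step is weighted by its spread probability, the sum over all length-$k$ trajectories from $j$ to $i$ equals an entry of the $k$-th power of the one-step spread matrix.

```python
import itertools
import numpy as np

# Toy 1D ring of n pixels with a hypothetical reuse pattern: each
# pixel's sample spreads to its left or right neighbor with equal
# probability. P[j, i] = p_{j->i}, the one-step spread probability.
n = 5
P = np.zeros((n, n))
for j in range(n):
    P[j, (j - 1) % n] = 0.5
    P[j, (j + 1) % n] = 0.5

def trajectory_sum(P, j, i, k):
    """Sum over all pixel trajectories tau = (j = q_0, ..., q_k = i)
    of the product of per-step spread probabilities p_{q_{t-1}->q_t}."""
    size = P.shape[0]
    total = 0.0
    for mid in itertools.product(range(size), repeat=k - 1):
        q = (j, *mid, i)
        prob = 1.0
        for t in range(k):
            prob *= P[q[t], q[t + 1]]
        total += prob
    return total

# The brute-force trajectory sum equals the matrix-power entry (P^k)[j, i].
```

This also previews the matrix view used later: summing path products is exactly what repeated matrix multiplication computes.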

We can take the expectation apart into the trajectory probability $\prod_{t=1}^{k} p_{q_{t-1} \to q_t}$ times the expectation of the remaining sample-dependent term along $\tau$.

In this case, to simplify the contribution, we take the remaining expectation term as a constant value (Note: this simplification may be wrong, need to check it more carefully!), and the contribution becomes proportional to $\sum_{\tau} \prod_{t=1}^{k} p_{q_{t-1} \to q_t}$.

This makes the contribution a lot easier to analyze: we just need to care about $\sum_{\tau} \prod_{t} p_{q_{t-1} \to q_t}$, which we control by choosing our reuse pattern.

In conclusion, our reuse pattern actually controls the contribution weight from pixel $j$ to pixel $i$.


Analysis of $\sum_{\tau} \prod_{t} p_{q_{t-1} \to q_t}$

If we only focus on this term, we can find it is actually the expected occurrence of pixel $j$'s initial sample $x$ in pixel $i$, i.e., $\sum_{\tau} \prod_{t=1}^{k} p_{q_{t-1} \to q_t} = E[n_{i,x}]$.

Therefore, it builds a relationship between double counting and the sample weight. Our goal is now clearer: we can control the occurrence to control the sample weight.


Now we analyze the expected occurrence. We can define $n_{i,x}^t$ as the expected occurrence of sample $x$ in pixel $i$ at frame $t$. Then we can write down the following recursion:

$$n_{i,x}^{t} = \sum_{j} p_{j \to i}\, n_{j,x}^{t-1},$$

with the initial value $n_{i,x}^0 = 1$ if $x$ is pixel $i$'s initial sample and $0$ otherwise.
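A minimal sketch of this recursion (a toy 1D ring with a hypothetical left/right spread pattern; the matrix convention $C_{ij} = p_{j \to i}$ and all names are our own): the occurrence vector is propagated one frame at a time by the spread probabilities.

```python
import numpy as np

# Toy 1D ring of n pixels: each pixel's sample spreads to its left and
# right neighbor with probability 1/2 each (hypothetical pattern).
# C[i, j] = p_{j->i}, the probability that j's sample spreads to i.
n = 6
C = np.zeros((n, n))
for i in range(n):
    C[i, (i - 1) % n] = 0.5
    C[i, (i + 1) % n] = 0.5

def expected_occurrence(C, src, frames):
    """n^t_{i,x}: expected occurrence, at every pixel i, of the sample
    initially produced at pixel `src`, after `frames` rounds of reuse:
    n^t_i = sum_j p_{j->i} * n^{t-1}_j."""
    occ = np.zeros(C.shape[0])
    occ[src] = 1.0            # initial value n^0
    for _ in range(frames):
        occ = C @ occ         # one step of the recursion
    return occ
```

Because every sample spreads somewhere in this toy pattern (each column of probabilities sums to 1), the total expected occurrence is conserved across frames.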

Contribution Matrix
We can find that the recursion can be written as a matrix multiplication.

Defining $C$ as the contribution matrix, where $C_{ij} = p_{j \to i}$ means the expected contribution from $j$ to $i$ in one frame (uniform over the spatial neighborhood in ReSTIR), we can write $\mathbf{n}^t = C\,\mathbf{n}^{t-1}$ and get

$$\mathbf{n}^k = C^k\, \mathbf{n}^0.$$

When the contribution matrix keeps changing each frame and we have $C_t$ at the $t$-th frame, the expected occurrence becomes

$$\mathbf{n}^k = C_k\, C_{k-1} \cdots C_1\, \mathbf{n}^0.$$

In reality the contribution matrix is random and sparse each frame: instead of applying the same expected contribution matrix every frame, ReSTIR effectively applies a different random sparse contribution matrix each frame. But as long as the per-frame matrices are independent with expectation $C$, the expected occurrence is unchanged.
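To illustrate that last claim, here is a sketch (toy ring, hypothetical left/right pattern, our own names) that averages the occurrence over many random per-frame sparse matrices and compares against the expected matrix power, relying on the independence of frames so that the expectation of the product equals the product of expectations.

```python
import numpy as np

n, frames, trials = 5, 3, 10_000
rng = np.random.default_rng(0)

# Expected one-frame contribution matrix C for a toy ring of n pixels
# where each pixel reuses from its left or right neighbor with
# probability 1/2. C[i, j] = expected contribution from j to i.
C = np.zeros((n, n))
for i in range(n):
    C[i, (i - 1) % n] = 0.5
    C[i, (i + 1) % n] = 0.5

def random_frame_matrix(rng):
    """One realized sparse 0/1 contribution matrix: each pixel actually
    picks exactly one neighbor this frame, so E[R] = C."""
    R = np.zeros((n, n))
    picks = rng.choice([-1, 1], size=n)
    R[np.arange(n), (np.arange(n) + picks) % n] = 1.0
    return R

# Average the occurrence of pixel 0's initial sample over many random
# frame sequences and compare with the expected matrix power C^k.
n0 = np.zeros(n)
n0[0] = 1.0
avg = np.zeros(n)
for _ in range(trials):
    occ = n0
    for _ in range(frames):
        occ = random_frame_matrix(rng) @ occ
    avg += occ
avg /= trials
expected = np.linalg.matrix_power(C, frames) @ n0
```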

We can easily write down the contribution matrix of both ReSTIR and cone reuse.

But this analysis is simplified a lot by taking the MIS weight as a constant (it assumes we are using M-cap or linear-M). For actual cone reuse without M-cap, the sample count $M$ will affect the weight too.


Possible Directions:

  1. Can we get the same final contribution matrix with fewer intermediate matrices?
  2. Can we get a better final contribution matrix (and how do we define "better")?

The next questions:

  1. Now we are also assuming pixel $i$ at frame $t$ does not inherit its own reservoir from frame $t-1$ (no temporal reuse). In reality, we should consider this and understand why keeping the same reuse pattern brings that huge problem (I think it brings an exponentially growing weight for each sample).
  2. How to choose a good reuse pattern to weight each sample better (but we should understand the weight better first)?
  3. How does correlation relate to the reuse pattern and the weighting term?