
Ray Tracing Algebraic Surfaces

By: philzook58

Re-posted from: https://www.philipzucker.com/ray-tracing-algebraic-surfaces/

Ray tracing is a natural way of producing computer images. One takes a geometric ray that connects the pinhole of the camera to a pixel of the image and finds where it hits objects in the scene. You then color the pixel the color of the object it hit.

You can add a great deal of complexity to this with more sophisticated sampling and lighting, multiple bounces, and strange surfaces, but that’s it in a nutshell.

A very popular tutorial on this is Ray Tracing in One Weekend https://raytracing.github.io/

There are a couple of ways to do the geometrical collision detection part. One is to consider simple shapes like triangles and spheres, for which there are closed-form algorithms for the collision point. This is a fast and simple approach and the rough basis of the standard graphics pipeline. Another is to describe shapes via signed distance functions, which tell you how far from the object you are, and use ray marching, a variant of Newton's method that iteratively finds a position on the surface along the ray. ShaderToys very often use this technique.
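For a flavor of that second approach, here is a minimal sketch of ray marching (a toy of mine, not code from this post), stepping along the ray by whatever distance the SDF reports, which is always a safe step:

# a toy signed distance function: a sphere of radius 1 centered at (0, 0, 3)
sdf_sphere(q) = sqrt(sum(abs2, q .- [0, 0, 3])) - 1

# march along origin + t*dir (dir assumed to be a unit vector), stepping
# forward by the distance the SDF reports until we are within tol of the surface
function raymarch(sdf, origin, dir; tol=1e-4, maxsteps=200)
    t = 0.0
    for _ in 1:maxsteps
        d = sdf(origin .+ t .* dir)
        d < tol && return t # hit: return the distance along the ray
        t += d
    end
    return nothing # gave up: no hit within maxsteps
end

raymarch(sdf_sphere, [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]) # ≈ 2.0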

If you describe your objects using algebraic (polynomial) equations, like x^2 + y^2 + z^2 - 1 = 0 for a sphere, there is the possibility of using root finding algorithms, which are readily available. I thought this was kind of neat. Basically, the ray through the pixel (x_0, y_0) can be parameterized by a single variable \lambda as (x,y,z) = (\lambda x_0, \lambda y_0, \lambda), which can be plugged into the multivariate surface polynomial to give, for the sphere, (\lambda x_0)^2 + (\lambda y_0)^2 + \lambda^2 - 1. This is a univariate polynomial in \lambda, and root finding gives all possible collision points along the ray. We filter for the collision that is closest and in front of the camera. We can also partially differentiate the surface equation to get a normal vector at that point for simple directional lighting.
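As a concrete check with the sphere used below, centered at (0,0,3): for the central pixel (x_0, y_0) = (0, 0) the ray is (0, 0, \lambda), and substituting into x^2 + y^2 + (z - 3)^2 - 1 gives \lambda^2 - 6\lambda + 8, whose roots \lambda = 2 and \lambda = 4 are the front and back of the sphere. The code keeps the smallest positive root, \lambda = 2.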

As is, it really isn’t very fast but it’s short and it works.

The key packages are

using Images
using LinearAlgebra
using TypedPolynomials
using Polynomials

function raytrace(x2, y2, p)
    z = Polynomials.Polynomial([0, 1])

    # the ray through the origin and the point [x2, y2, 1], parameterized by z
    x3 = [z * x2, z * y2, z]

    # get all the roots after substituting the ray into the surface equation
    r = roots(p(x => x3))

    # filter to use values of z that are real and in front of the camera
    hits = map(real, filter(t -> isreal(t) && real(t) > 0.0, r))

    if length(hits) > 0
        l = minimum(hits) # closest hit only
        pt = [q(l) for q in x3] # the collision point
        # get the normal vector of the surface at that point
        dp = differentiate(p, x)
        normal = normalize([q(x => pt) for q in dp])
        # a little directional and ambient shading
        return max(0, 0.5 * dot(normal, normalize([0, 1, -1]))) + 0.2
    else
        return 0 # ray did not hit the surface
    end
end

@polyvar x[1:3]

# a sphere of radius 1 with center at (0,0,3)
p = x[1]^2 + x[2]^2 + (x[3] - 3)^2 - 1 

box = -1:0.01:1
Gray.([ raytrace(x,y,p) for x=box, y=box ])

Sphere.

@polyvar x[1:3]
R = 2
r = 1

# another way of doing offset
x1 = x .+ [ 0, 0 , -5 ] 

# a torus at (0,0,5)
# equation from https://en.wikipedia.org/wiki/Torus
p = (x1[1]^2 + x1[2]^2 + x1[3]^2 + R^2 - r^2)^2 - 4R^2 * (x1[1]^2 + x1[2]^2) 

box = -1:0.005:1
img = Gray.([ raytrace(x,y,p) for x=box, y=box ])
save("torus.jpg",img)
Torus

Some thoughts on speeding up: move the polynomial manipulations out of the loop. Perhaps partially evaluate with respect to the polynomial? That’d be neat. And of course, parallelize.
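On the "parallelize" part, a rough sketch (assuming Julia was started with multiple threads, e.g. julia -t auto) that just splits the pixel loop across threads, reusing the raytrace function and the most recently defined polynomial p:

using Base.Threads

function render_threaded(p, box)
    img = zeros(length(box), length(box))
    xs = collect(box)
    @threads for j in eachindex(xs) # each column of pixels is independent
        for i in eachindex(xs)
            img[i, j] = raytrace(xs[i], xs[j], p)
        end
    end
    return img
end

Gray.(render_threaded(p, -1:0.005:1))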

Walk on Spheres Method in Julia

By: Philip Zucker

Re-posted from: https://www.philipzucker.com/walk-on-spheres-method-in-julia/

I saw a cool tweet (and corresponding conference paper) by Keenan Crane

http://www.cs.cmu.edu/~kmcrane/Projects/MonteCarloGeometryProcessing/index.html

I was vaguely aware that one can use a Monte Carlo method to solve the Laplace boundary value problem \nabla^2 \phi = 0, but I don’t think I had seen the walk on spheres variant of it before. I think Crane’s point is how similar all this is to stuff graphics people already do and do well. It’s a super cool paper. Check it out.

Conceptually, I think it is plausible that the Laplace equation and a Monte Carlo walk are related because the static diffusion equation \nabla^2 n = 0 from Fick’s law ultimately comes, from a microscopic perspective, from the Brownian motion of little guys wobbling about.

Slightly more abstractly, both linear differential equations and random walks can be described by matrices: a finite difference matrix (for concreteness) K and a transition matrix of jump probabilities T. The differential equation is discretized to Kx = b, and the expected boundary value picked up by an absorbed random walk satisfies the analogous linear system (I - T)p = b, where b holds the sources and sinks at the boundary.
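A minimal sketch of that analogy on a 1-d grid, with the same boundary values used below (0 on the left, 10 on the right); the two linear systems are the same up to a factor of two, so they have the same solution:

using LinearAlgebra

n = 9 # interior grid points on (0, 1)

# finite difference Laplacian: K * phi = b, boundary values moved to the right-hand side
K = SymTridiagonal(fill(2.0, n), fill(-1.0, n - 1))
b = zeros(n); b[end] = 10.0
phi_fd = K \ b

# nearest-neighbour walk absorbed at the edges: phi_i = (phi_{i-1} + phi_{i+1}) / 2,
# i.e. (I - T) * phi = c, with the boundary values entering through c
T = SymTridiagonal(zeros(n), fill(0.5, n - 1))
c = zeros(n); c[end] = 5.0
phi_walk = (I - T) \ c

phi_fd ≈ phi_walk # true, since I - T = K / 2 and c = b / 2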

The mean value property of the Laplace equation allows one to speed this process up. Instead of taking a ton of little steps, you can just walk out by randomly sampling on the surface of big spheres: en.wikipedia.org/wiki/Walk-on-spheres_method. Alternatively, you can think of it as: every random walk eventually exits a sphere around its starting point, and it exits at a random spot on that sphere.
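Concretely, the mean value property says that for any ball B_r(x) contained in the domain, \phi(x) = \frac{1}{|\partial B_r(x)|} \int_{\partial B_r(x)} \phi \, dS, i.e. the value at the center equals the average of \phi over the sphere’s surface. So a single uniformly random sample on the largest such sphere is an unbiased estimate of \phi(x).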

So here’s the procedure. Pick a point you want the value of \phi at. Make the biggest sphere you can that stays in the domain. Pick a random point on that sphere. If that point is on the boundary, record the boundary value there; otherwise, iterate from the new point. Do this many, many times; the average of the boundary values you recorded is the value of \phi.

This seems like a good example for Julia use. It would be somewhat difficult to code this up efficiently in python using vectorized numpy primitives. Maybe in the future we could try to parallelize it or do it on the GPU? Monte Carlo methods like these are quite parallelizable.

The solution of the 1-d Laplace equation is absolutely trivial. If the second derivative is 0, then \phi = a + b x. This line is found by fitting it to the two endpoint values.
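With the boundary values used below, \phi(0) = 0 and \phi(1) = 10, that line is just \phi(x) = 10x, which is the exact solution plotted alongside the Monte Carlo averages.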

So we’re gonna get a line out.

using LinearAlgebra
using Plots # for the plot calls below

phi0 = 0  # boundary value at x = 0
phi1 = 10 # boundary value at x = 1
function monte_run(x)
    while true
        l = rand(Bool) # go left?
        if l && x <= 0.5 # absorbed at the left edge, x = 0
            return phi0
        elseif !l && x >= 0.5 # absorbed at the right edge, x = 1
            return phi1
        else
            if x <= 0.5
                x += x # move right, away from 0, by the biggest safe step (x)
            else
                x -= 1 - x # move left, away from 1, by the biggest safe step (1 - x)
            end
        end
    end
end

monte_runs = [monte_run(x) for run_num =1:100, x=0:0.05:1 ]
import Statistics
avgs = vec(Statistics.mean( monte_runs , dims=1))
stddevs = vec(Statistics.std(monte_runs, dims=1)) ./ sqrt(size(monte_runs)[1]) # something like this right?

plot(0:0.05:1, avgs, yerror=stddevs)
plot!(0:0.05:1,  (0:0.05:1) * 10 )

And indeed we do.

You can do a very similar thing in 2d. Here I use boundary values on the unit disc corresponding to x^2 - y^2 (which is a simple exact solution of the Laplace equation).



function monte_run_2d(phi_b, x)
    while true
        r = norm(x)
        if r > 0.95 # close enough to the boundary: stop and record
            return phi_b(x)
        else
            dr = 1.0 - r # radius of the biggest circle that fits, assuming the domain is the unit disc
            θ = 2 * pi * rand(Float64) # a uniformly random angle
            x[1] += dr * cos(θ) # jump to a random point on that circle
            x[2] += dr * sin(θ)
        end
    end
end


monte_run_2d( x -> x[1],  [0.0 0.0] )


monte_runs = [monte_run_2d(x -> x[1]^2 - x[2]^2 ,  [x 0.0] ) for run_num =1:1000, x=0:0.05:1 ]

import Statistics
avgs = vec(Statistics.mean( monte_runs , dims=1))
stddevs = vec(Statistics.std(monte_runs, dims=1)) ./ sqrt(size(monte_runs)[1]) # something like this right?
plot(0:0.05:1, avgs, yerror=stddevs)
plot!(0:0.05:1,  (0:0.05:1) .^2 )

There are more notes and derivations in my notebook here: https://github.com/philzook58/thoughtbooks/blob/master/monte_carlo_integrate.ipynb