I've been away from Psychopath for a while working on other projects, but recently I stumbled upon a blog post by Yining Karl Li, entitled "Mipmapping with Bidirectional Techniques". In it, he describes his solution to a problem I've been pondering for a while: how to handle texture filtering in the context of bidirectional light transport.
The problem essentially comes down to this: texture filtering should be done with respect to the projected area on screen, but when tracing rays starting from a light source you don't know what that projected area is going to be yet. In Psychopath's case it's even worse, because the problem applies not just to texture filtering but also to dicing rates for geometry: in Psychopath you can't even calculate ray intersections without a filter width. So this problem has been bugging me for a while.
Yining explores an approach to this problem that I've also considered, which is to have the filter width simply be independent of the rays being traced. In other words: screw ray differentials, just use camera-based heuristics. The benefit of this approach is that every point on every surface has a single well-defined filter width, regardless of where rays are coming from or how they're being generated. The downside is that there are circumstances (such as magnification through a lens) where those filters are too blurry.
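To make that concrete, here's a minimal sketch of what such a camera-based heuristic could look like in Rust (Psychopath's implementation language). The `Camera` type, the names, and the pinhole-style math here are my own illustration, not Psychopath's actual code:

```rust
/// A minimal sketch of a camera-based filter width heuristic: the
/// filter width at a surface point depends only on the point and the
/// camera, never on the ray that found it.
struct Camera {
    position: [f32; 3],
    /// Angle subtended by one pixel, in radians
    /// (roughly vertical FOV / image height for a pinhole camera).
    pixel_angle: f32,
}

fn distance(a: [f32; 3], b: [f32; 3]) -> f32 {
    let d = [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
    (d[0] * d[0] + d[1] * d[1] + d[2] * d[2]).sqrt()
}

/// Approximate world-space width of one pixel's footprint at `point`.
/// Every ray that shades `point` gets this same width, which is what
/// makes filtering consistent in a bidirectional setting. Magnification
/// through a lens will make this too blurry, which is exactly the
/// trade-off mentioned above.
fn filter_width(camera: &Camera, point: [f32; 3]) -> f32 {
    distance(camera.position, point) * camera.pixel_angle.tan()
}
```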
These are all things I've thought about before, and I've gone back and forth many times about how I want to approach this. However, Yining's post also linked to a paper from Weta Digital about their renderer Manuka. And that sent me down a rabbit hole that has me reconsidering how I want Psychopath's entire architecture to work.
Tying Geometry and Shading Together
There are a lot of cool things about Manuka, but the one that stuck out to me—and the one that has me reconsidering a lot of things—is how they handle shading.
Generally speaking, ray tracing renderers do shading when a ray hits something. But Manuka takes a radically different approach: it does all of its shading before rendering even starts, by dicing all geometry into sub-pixel polygons and calculating shading at the vertices.
If you're at all familiar with the Reyes rendering architecture, that should sound really familiar. The difference is that instead of baking colors into the geometry like in Reyes, they bake surface closures. This means that light transport is still calculated with path tracing, but all texture lookups etc. are done up-front and baked into the geometry.
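To illustrate the idea (this is my own sketch of the concept, not Manuka's actual data structures), a diced grid might store a surface closure at each vertex instead of a final color:

```rust
/// Hypothetical sketch of "shading baked into geometry": instead of
/// storing a final color per vertex (as Reyes did), each dicing vertex
/// stores a surface closure, i.e. the inputs to a BSDF with all
/// texture lookups already resolved.
enum Closure {
    Lambert { albedo: [f32; 3] },
    Ggx { albedo: [f32; 3], roughness: f32 },
    Emit { radiance: [f32; 3] },
}

/// A grid of sub-pixel polygons produced by dicing one surface patch.
struct ShadedGrid {
    res_u: usize,
    res_v: usize,
    positions: Vec<[f32; 3]>, // res_u * res_v vertices
    closures: Vec<Closure>,   // one closure per vertex
}
```

At render time a ray hit on such a grid would just interpolate the stored closures and hand them to the path tracer; no texture lookups or shader evaluation happen during light transport itself.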
Honestly, I think this is genius. There's an elegance to essentially saying, "geometry and shading are the same thing". It's an elegance that Reyes had, but that I failed to reproduce in Psychopath, even in the early days when I was essentially trying to do path-traced Reyes.
One of the fantastic outcomes of this approach is that the scene has a clear static definition regardless of what rays are being traced. This handily solves the bidirectional texture filtering problem, as well as Psychopath's related bidirectional dicing problem. It also, of course, has the aforementioned problem with magnification. But I think that's a pretty reasonable trade-off as long as you provide ways for people to work around it when needed.
Having said all of this, there are other trade-offs that Manuka makes that I'm not so inclined to reproduce in Psychopath. Specifically, Manuka literally does all of its dicing and shading up-front and keeps it all in memory to be traced against. I think the Weta team's reasons for doing that are very sound, but that approach just doesn't seem interesting to me. So I would still like to explore a more dynamic approach, closer to what I've already been doing.
A New Architecture
So with all of that background out of the way, this is roughly the architecture I am now envisioning for Psychopath:
- Scene data is effectively statically defined: dicing rates, texture filters, etc. are all determined independently of the rays being traced, and are therefore consistent between all rays.
- Even though the scene data is conceptually static, it can still be computed dynamically, as long as the result is deterministic and independent of the rays being traced (see the sketch after this list).
- Geometry and shading are the same thing: shading is defined at the same points that geometry is defined, and should generally be sub-pixel in size.
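As a rough sketch of what "conceptually static, but computed dynamically" could look like (hypothetical names again, reusing the `Camera` and `ShadedGrid` types from the sketches above):

```rust
use std::collections::HashMap;

/// Hypothetical sketch of lazy-but-deterministic scene data: grids are
/// diced and shaded on demand, but the result for a given patch depends
/// only on the patch and the camera, never on which ray requested it.
struct GridCache {
    camera: Camera,
    grids: HashMap<u64, ShadedGrid>, // keyed by patch id
}

impl GridCache {
    /// Returns the diced, shaded grid for `patch_id`, computing it on
    /// first request. Because `dice_and_shade` reads only static scene
    /// data, any ray (from the camera or from a light) that reaches
    /// this patch sees exactly the same geometry and closures.
    fn get_or_build(&mut self, patch_id: u64) -> &ShadedGrid {
        let camera = &self.camera;
        self.grids
            .entry(patch_id)
            .or_insert_with(|| dice_and_shade(patch_id, camera))
    }
}

fn dice_and_shade(_patch_id: u64, _camera: &Camera) -> ShadedGrid {
    // Stub: a real implementation would pick dicing rates from a
    // camera-based heuristic like `filter_width` above and evaluate
    // surface shaders at the resulting vertices.
    unimplemented!()
}
```

The important design point is that the cache is just an optimization: whether a grid is computed up-front, on demand, or thrown away and recomputed later, every ray sees the same scene.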
The specifics of how these decisions will play out are still a bit unknown to me. But this is the direction I'll be working towards and experimenting with for Psychopath's next steps (albeit slowly, as my time is somewhat limited these days).
This is also exciting to me because in some sense this is getting back to Psychopath's original roots. I started the project because I wanted to see if I could merge Reyes rendering and path tracing in a useful way. I've made a lot of detours since then (many of them interesting and worthwhile), but fundamentally I think that's still the idea that intrigues me most. And I think this is a good step in that direction.
Lastly: a shout out to Johannes Hanika, who co-authored the Manuka paper, and somehow seems to be involved in most of the papers I've found genuinely inspiring over the last several years. If you're reading this, Johannes, I would love to buy you a beer!