
Psychopath Renderer

a slightly psychotic path tracer

2018-11-21

Random Notes on Future Direction

This post is not going to be especially coherent. I wanted to jot down some thoughts about the future direction of Psychopath, given the decisions in my previous post. Normally I would do this in a random text file on my computer, but I thought it might be interesting to others if I just posted it to the blog instead. It should be fun to check back on this later and see which bits actually ended up in Psychopath.

Things to Keep

There are some things about Psychopath's existing implementation (or plans for implementation) that I definitely want to preserve:

  • Spectral rendering via hero wavelength sampling. I'm increasingly convinced that spectral rendering is simply the correct way to handle color in a (physically based) renderer. Not even for the sake of spectral effects, but simply for correct color handling. (A small sketch of this follows this list.)

  • Curved surface rendering and displacements should be "perfect" for most intents and purposes. Users should only ever have to deal with per-object tessellation settings in exceptional circumstances.

  • Everything should be motion-blurrable. This is a renderer built for animation.

  • Full hierarchical instancing.

  • Efficient sampling of massive numbers of lights.

  • Filter Importance Sampling for pixel filtering. This simplifies a lot of things, and I've also heard rumors that it makes denoising easier because it keeps pixels statistically independent of each other. It does limit the pixel filters that can be used, but I really don't think that's a problem in practice. (A sketch of this also follows the list.)
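
To make the first point above a bit more concrete, here is a minimal sketch of hero wavelength sampling in the spirit of Wilkie et al.'s "Hero Wavelength Spectral Sampling": one "hero" wavelength is sampled uniformly, and the rest of the bundle is derived from it by equally spaced rotation through the sampled band. The wavelength bounds and bundle size here are placeholder values, not settled parameters for Psychopath.

```rust
const LAMBDA_MIN: f32 = 380.0; // nm; assumed sampling range
const LAMBDA_MAX: f32 = 700.0;
const N: usize = 4; // wavelengths traced together per path

/// Given a uniform random number `u` in [0, 1), returns a bundle of
/// `N` wavelengths: the hero wavelength plus (N - 1) rotations of it
/// through the sampled band.
fn hero_wavelengths(u: f32) -> [f32; N] {
    let range = LAMBDA_MAX - LAMBDA_MIN;
    let hero = LAMBDA_MIN + u * range;
    let mut bundle = [0.0f32; N];
    for j in 0..N {
        let offset = (hero - LAMBDA_MIN) + (j as f32 / N as f32) * range;
        bundle[j] = LAMBDA_MIN + (offset % range);
    }
    bundle
}
```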
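
And here is a sketch of the core of filter importance sampling, using a simple tent filter for illustration (the filter choice and radius are placeholders, not decisions): each pixel draws its sample offsets from the filter's own distribution via the inverse CDF, so every sample contributes with weight 1 and pixels never share samples.

```rust
/// Inverse-CDF sample of a 1D tent (triangle) filter of the given
/// radius, centered on zero. `u` is uniform in [0, 1).
fn sample_tent(u: f32, radius: f32) -> f32 {
    if u < 0.5 {
        radius * ((2.0 * u).sqrt() - 1.0)
    } else {
        radius * (1.0 - (2.0 * (1.0 - u)).sqrt())
    }
}

/// A pixel's sample position is just its center plus a
/// filter-distributed offset; the result is accumulated unweighted.
fn sample_position(px: u32, py: u32, u1: f32, u2: f32) -> (f32, f32) {
    let radius = 1.5; // hypothetical filter radius, in pixels
    (
        px as f32 + 0.5 + sample_tent(u1, radius),
        py as f32 + 0.5 + sample_tent(u2, radius),
    )
}
```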

Shade-Before-Hit Architecture

Following in Manuka's footsteps will involve some challenges. Below are some thoughts on tackling a few of them.

  • All processing that involves data inter-dependence should be done before rendering starts. For example, a subdivision surface should be processed in such a way that each individual patch can be handled completely independently.

  • Maybe even do all splitting before rendering starts, and then the only thing left at render time is (at most) dicing and shading. Not sure yet about this one.

  • Use either DiagSplit or FracSplit for splitting and tessellation (probably). DiagSplit is faster, but FracSplit would better avoid popping artifacts in the face of magnification. Both approaches avoid patch inter-dependence for dicing, which is the most important thing.

  • Since we'll be storing shading information at all micropolygon vertices, we'll want compact representations for that data:

    • For data that is constant across the surface, maybe store it at the patch or grid level. That way only textured parameters have to be stored per-vertex.

    • We'll need a way to compactly serialize which surface closures are on a surface and how they should be mixed/layered. Probably specifying the structure on the patch/grid level, and then the textured inputs on the vertex level would make sense...? Whatever it is, it needs to be both reasonably compact and reasonably efficient to encode/decode.

    • For tristimulus color values, using something like the XYZE format (a variant of RGBE) to store each color in 32 bits might work well. (See the first sketch after this list.)

    • It would be good to have multiple "types" of color. For example: XYZ for tristimulus inputs, a temperature and energy multiplier for blackbody radiation, and a pointer to a shared buffer for full spectral data. More color types could be added as needed. This would provide a lot of flexibility for specifying colors in various useful ways. Maybe even have options for different tristimulus spectral upsampling methods? (See the enum sketch after this list.)

    • For shading normals, using the Oct32 encoding from "A Survey of Efficient Representations for Independent Unit Vectors" seems like a good trade-off between size, performance, and accuracy. (See the last sketch after this list.)

    • For scalar inputs, probably just use straight 16-bit floats.

  • The camera-based dicing-rate heuristic is going to be really important:

    • It can't necessarily be based on a standard perspective camera projection: supporting distorted lens models like fisheye in the future would be great.

    • It needs to take into account instancing: ideally we want only one diced representation of an instance. Accounting for all instance transforms robustly might be challenging (e.g. one instance may have one side of the model closer to the camera, while another may have a different side of the same model closer).

    • It needs to account for motion blur in general (not just camera motion blur) in some reasonable way. This probably isn't as important to get robustly "correct", because anything it would significantly affect would also be significantly blurred. But having it make at least reasonable decisions, so that artifacts aren't visually noticeable, is important.

  • Having a "dial" of sorts that allows the user to choose between all-up-front processing (like Manuka) and on-the-fly-while-rendering processing (closer to what the C++ Psychopath did) could be interesting. Such a dial could be used e.g. in combination with large ray batches to render scenes that would otherwise be too large for memory. What exactly this dial would look like, and what its semantics would be, however, I have no idea yet.

Interactive Rendering

Manuka's shade-before-hit architecture necessitates a long pre-processing step of the scene before you can see the first image. Supporting interactive rendering in Psychopath is something I would like to do if possible. Some thoughts:

  • Pretty much all ray tracing renderers require pre-processing before rendering starts (tessellation, displacement, BVH builds, etc.). For interactive rendering they just keep a lot of that state around, re-doing only the minimum work needed to update the render. Even with a shade-before-hit architecture, this should be feasible.

  • Having a "camera" that defines the dicing rates separately from the interactive viewport will probably be necessary, so that the whole scene doesn't have to be re-diced and re-shaded whenever the user changes their view. That separate dicing "camera" doesn't have to be a real projective camera—there could be dicing "cameras" designed specifically for various interactive rendering use-cases.

Color Management

Psychopath is a spectral renderer, and I intend for it to stay that way. Handling color correctly is really important to my goals for Psychopath, and I believe spectral rendering is key to that. But there is more to correct color management than spectral rendering alone.

  • Handling tristimulus spaces with something like OpenColorIO would be good to do at some point, but I don't think now is the time to put the energy into that. A better focus right now, I think, is simply to properly handle a curated set of tristimulus color spaces (including e.g. sRGB and ACES AP0 & AP1).

  • Handling custom tone mapping, color grading, etc. will be intentionally outside the scope of Psychopath. Psychopath will produce HDR data which can then be further processed by other applications later in the pipeline. In other words, keep Psychopath's scope in the color pipeline minimal. (However, being able to specify some kind of tone mapping for e.g. better adaptive sampling may be important. Not sure how best to approach that yet.)

  • Even with it being out of scope, having a single reasonable tone mapper for preview purposes only (e.g. when rendering directly to PNG files) is probably worthwhile. If I recall correctly, ACES specifies a tone mapping operator. If so, using that operator is likely a good choice. However, I haven't investigated this properly yet. (For illustration, a sketch of one possible preview curve is at the end of this post.)

  • One of the benefits of spectral rendering is that you can properly simulate arbitrary sensor responses. Manuka, for example, supports providing spectral response data for arbitrary camera sensors so that VFX rendering can precisely match the footage it's going to be integrated with. I've thought about this for quite some time as well (even before reading the Manuka paper), and I would like to support this. This sort of overlaps with tone mapping, but both the purpose and tech are different.
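
On the preview tone mapper: I haven't investigated the ACES operator yet, but for illustration, Krzysztof Narkowicz's widely used curve fit of the ACES filmic response is about as simple as these things get. To be clear, this only approximates the look of the ACES RRT + ODT; it is not the full ACES pipeline.

```rust
/// Narkowicz's fit of the ACES filmic curve, applied per channel.
/// Input is linear scene-referred radiance (suitably exposed);
/// output is in [0, 1], still linear. A transfer function such as
/// sRGB's would be applied afterwards.
fn aces_fit(x: f32) -> f32 {
    (x * (2.51 * x + 0.03)) / (x * (2.43 * x + 0.59) + 0.14)
}

fn tonemap_preview(rgb: [f32; 3]) -> [f32; 3] {
    [
        aces_fit(rgb[0]).clamp(0.0, 1.0),
        aces_fit(rgb[1]).clamp(0.0, 1.0),
        aces_fit(rgb[2]).clamp(0.0, 1.0),
    ]
}
```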