
Psychopath Renderer

a slightly psychotic path tracer

2020-04-14

Building a Sobol Sampler

(For an update on Psychopath's Sobol sampler, please also see the next post in this series: Sobol Sampling - Take 2.)

Up until recently I was using Leonhard Grünschloß's Faure-permuted Halton sampler as my main sampler in Psychopath. I even ported it to Rust when I made the switch from C++. And it works really well.

I've also had a Rust port of Grünschloß's Sobol sampler in Psychopath's git repo for a while, and from time to time I played with it. But it never seemed notably better than the Halton sampler in terms of variance, and it was significantly slower at generating samples. This confused me, because Sobol seems like the go-to low discrepancy sequence for 3d rendering, assuming you want to use low discrepancy sequences at all.

However, I recently stumbled across some new (to me) information, which ultimately sent me down a rabbit hole implementing my own Sobol sampler. I'd like to share that journey.

Incidentally, if you want to look at or use my Sobol sampler, you can grab it here. It's MIT licensed. (Note from the future: this link is to the code as of the writing of this post. See the above-linked second post for a significantly improved version.)

Don't Offset Sobol Samples

The first two things I learned are about what not to do. And the title of this section actually refers to both of those things simultaneously:

  1. Cranley-Patterson rotation.
  2. Randomly offsetting your starting index into the sequence for each pixel.

These are two different strategies for decorrelating samples between pixels. The first offsets where the samples are in space, and the second offsets where you are in the sequence. Both are bad with Sobol sequences, but for different reasons.

Cranley-Patterson rotation is bad because it increases variance. Not just apparent variance, but actual variance. It's still not entirely clear to me why that's the case—it feels counter-intuitive. But I first found out about it in the slides of a talk by Heitz et al. about their screen-space blue noise sampler. It's also been borne out in some experiments of my own since then.

So the first lesson is: don't use Cranley-Patterson rotation. It makes your variance worse.

Randomly offsetting what index you start at for each pixel is bad for a completely different reason: it's slow. Due to the way computing Sobol samples works, later samples in the sequence take longer to compute. This is what was making it slow for me. I already used this technique with the Halton sampler, to great success. So I assumed I could just apply it to Sobol as well. And you can. It works correctly. It just ends up being comparatively slow.
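To see why later samples cost more, here's a sketch of the usual direction-number way of computing one dimension of a Sobol sample: one XOR per set bit of the index. Offset every pixel's index by a large random number and every sample pays the worst case. (The table-construction helper here is mine, for illustration; real implementations ship precomputed direction numbers for every dimension.)

```rust
// Sketch: direction-number evaluation of one Sobol dimension.
// Each set bit of `index` costs one XOR, so samples at large
// indices (e.g. after a big random offset) take longer to compute.
fn sobol_u32(mut index: u32, directions: &[u32; 32]) -> u32 {
    let mut result = 0u32;
    let mut bit = 0;
    while index != 0 {
        if index & 1 != 0 {
            result ^= directions[bit];
        }
        index >>= 1;
        bit += 1;
    }
    result
}

// The first Sobol dimension is the van der Corput sequence, whose
// direction numbers are just single bits in descending order.
fn van_der_corput_directions() -> [u32; 32] {
    let mut d = [0u32; 32];
    for i in 0..32 {
        d[i] = 0x8000_0000 >> i;
    }
    d
}
```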

So the second lesson is: don't offset into your Sobol sequence (at least not by much).

The Right Way to Decorrelate

So if you can't do Cranley Patterson Rotation, and you can't offset into your sequence, how can you decorrelate your pixels with a Sobol sampler?

The answer is: scrambling.

There are two ways (that I'm aware of) to scramble a Sobol sequence while maintaining its good properties:

  1. Random Digit Scrambling
  2. Owen Scrambling

Random digit scrambling amounts to just XORing the samples with a random integer, using a different random integer for each dimension. The PBRT book covers this in more detail, but it turns out that doing this actually preserves all of the good properties of the Sobol sequence. So it's a very efficient way to decorrelate your pixels.
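As a sketch (the function names are mine, not from any particular implementation), the whole scramble really is just one XOR, plus the usual mapping to a float:

```rust
// Random digit scrambling in base 2: XOR the Sobol value with a
// per-dimension random integer. This flips the same binary digits
// of every sample, which preserves the sequence's stratification.
fn xor_scramble(sobol_value: u32, dimension_seed: u32) -> u32 {
    sobol_value ^ dimension_seed
}

// Map the scrambled integer to a float in [0, 1).
fn u32_to_f32_norm(n: u32) -> f32 {
    // Keep the top 24 bits so the result stays strictly below 1.0.
    (n >> 8) as f32 * (1.0 / 16777216.0)
}
```

Note that XOR is its own inverse, so scrambling twice with the same seed gives back the original sample.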

I gave that a try, and it immediately made the Sobol sampler comparable to the Halton sampler in terms of performance. So it's definitely a reasonable way to do things.

But then, due to a semi-unrelated discussion with the author of Raygon, I ended up re-reading the paper Progressive Multi-Jittered Sample Sequences by Christensen et al. The paper itself is about a new family of samplers, but in it they also compare their new family to a bunch of other samplers, including the Sobol sequence. And not just the plain Sobol sequence, but also the rotated, random-digit scrambled, and Owen-scrambled Sobol sequences.

The crazy thing that I had no idea about, and which they discuss at length in the paper, is that Owen scrambled Sobol actually converges faster than plain Sobol. That just seems nuts to me. Basically, Owen scrambling is the anti-Cranley-Patterson: it's a way of decorrelating your pixels while actually reducing variance. So even if you're not trying to decorrelate your pixels, it's still a good thing to do.

(As an interesting aside: random digit scrambling is a strict subset of Owen scrambling. Or, flipping that around, Owen scrambling is a generalization of random digit scrambling. But what's nice about that is you can use the same proof to show that both approaches preserve the good properties of Sobol sequences.)

Implementing Owen Scrambling Efficiently

The trouble with Owen scrambling is that it's slow. If you do some searches online, you'll find a few papers out there about implementing it. But most of them are really obtuse, and none of them are really trying to improve performance. Except, if I recall correctly, there's one paper about using the raw number crunching power of GPUs to try to make it faster. But that's not really that helpful, since you probably want to use your GPU for other number crunching.

So it basically seems like Owen scrambling is too slow to be practical for 3d rendering.

Except! There's a single sentence in the "Progressive Multi-Jittered Sample Sequences" paper that almost seems like an afterthought. In fact, I almost missed it:

We do Owen scrambling efficiently with hashing – similar in spirit to Laine and Karras [LK11], but with a better hash function provided by Brent Burley.

"LK11" refers to the paper Stratified sampling for stochastic transparency by Laine et al. The really weird thing is that this paper isn't even about Sobol sampling. But they, too, have a one-liner that's really important:

This produces the same result as performing an Owen scramble [...]

What they're referring to here is a kind of hashing operation. They use it to decorrelate a standard base-2 radical inverse sequence, but the same approach applies just as well to Sobol sequences.

But you can't just use any hash function. In fact, you have to use a really specific, weird, special kind of hash function that would be a very poor choice for any application that you would typically use hashing for.

Here's the basic idea: the key insight Laine et al. provide is that Owen scrambling is equivalent to a hash function that only avalanches downward, from higher bits to lower bits. In other words, a hash function where each bit only affects bits lower than itself. This is a super cool insight! And it means that if we can design a high-quality, efficient hash function with this property, then we have efficient Owen scrambling as well.
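Here's a minimal sketch of what such a hash can look like, in the spirit of the Laine-Karras construction. The multiply constants are illustrative placeholders, not the carefully tuned ones discussed in this post:

```rust
// Sketch of hash-based Owen scrambling. In reversed-bit order,
// additions and `x ^= x * c` steps (with even c) only let lower
// bits affect higher bits; reversing before and after turns that
// into the "avalanche downward only" property Owen scrambling
// requires. The constants must be even so each step is invertible
// and avalanches strictly upward; these particular values are
// placeholders for illustration.
fn owen_scramble(mut n: u32, seed: u32) -> u32 {
    n = n.reverse_bits();
    n = n.wrapping_add(seed); // carries only propagate upward
    n ^= n.wrapping_mul(0x6c50b47c);
    n ^= n.wrapping_mul(0xb82f1e52);
    n ^= n.wrapping_mul(0xc7afe638);
    n ^= n.wrapping_mul(0x8d22f6e6);
    n.reverse_bits()
}
```

One nice consequence of the downward-only avalanche: two inputs that agree in their top k bits always scramble to outputs that agree in their top k bits, which is exactly what keeps the Sobol stratification intact.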

It turns out, though, that developing a high-quality hash function that meets these criteria is pretty challenging. For example, the hash function they present in the paper, while fast, is pretty poor quality. Nevertheless, I decided to take a crack at it myself.

So far, I've stuck to the general approach from Laine et al., but worked on optimizing the constants and tweaking a few things. I don't think I've gained any real insights beyond their paper, but I have nevertheless made some improvements through those tweaks. At this point, I have a hash function that I would call "okay" for the purposes of Owen scrambling. But there is very clearly still a lot of room for improvement.

I'm extremely curious what the implementation from "Progressive Multi-Jittered Sample Sequences" looks like. And I'm especially curious if there are any interesting insights behind it regarding how to construct this bizarre kind of hash function.

My Sobol Sampler

Again, you can grab my Sobol sampler here if you like.

It includes implementations of all the (non-slow) decorrelation approaches I mentioned in this post:

  • Cranley-Patterson rotation
  • Random Digit Scrambling
  • And hash-based Owen Scrambling

It also includes, of course, plain non-scrambled Sobol.

If you do make use of this somewhere, I strongly recommend using the Owen-scrambled version. But it can be fun to play with the others as well for comparison.

However, one thing I noticed in my testing is that Cranley-Patterson rotation seems to do a better job of decorrelating pixels than the two scrambling approaches, which exhibit some correlation artifacts in the form of clusters of mildly higher-variance splotches in certain scenes. It's mild, and Owen scrambling still seems to win out variance-wise over Cranley-Patterson. But still. This is definitely a work in progress, so be warned.

Regardless, I've had a lot of fun with this, and I've learned a lot. Pretty much everything I talked about in this post was like black magic to me before diving head-first into all of it. I definitely recommend looking into these topics yourself if this post was at all interesting to you.

2020-04-12

Notes on Color: 01

During my time working on Psychopath, I've slowly gained a better understanding of color science, color management, and other color related topics. I'd like to jot down some notes about my current understanding of these topics, because the angle I learned them from is slightly non-standard. My hope is that this may help other people gain a better understanding of the topic as well.

I don't know how many entries in this series there will be. Maybe this will be the only one! Who knows, I'm lazy sometimes. But my hope is to make additional posts about this in the future. I have a lot that I'd like to write about.

Color Blindness

First, a fun fact about me: I'm partially color blind. More specifically, I'm an anomalous trichromat, meaning that, compared to normal color-vision people, one of the three cones in my eyes is shifted in its spectral sensitivity.

One of the more interesting experiences I've had was the process of being diagnosed with color blindness. Most people are familiar with the Ishihara color blindness test (even if not by name). You're shown a series of circles made up of colored dots, and each circle has an embedded image of some kind. If you can see the image then you have normal color vision, and if you can't then you're color blind to some extent.

What most people don't know, however, is that the full Ishihara test also includes the reverse: images that only color blind people can see but normal color vision people can't. This surprises a lot of people, and trying to understand how that works sent me down a fun rabbit hole about color vision a couple of decades ago.

That was my first foray into color science and color perception, and my understanding of color management and related topics is built on what I learned back then. I'm going to present color from that same perspective because I haven't seen it approached that way in the graphics community before, and I personally think it helps to clarify a lot of things that are otherwise quite... vague and wishy washy.

What is Color Vision?

The first step towards understanding almost anything about color is understanding color vision and how it relates to actual physical light.

Physical light is a fairly complex phenomenon (see e.g. polarization, various quantum effects, etc.). And if I'm being completely honest, I don't fully understand everything about it. However, the aspects of light relevant to color perception are (thankfully) pretty simple, and that's what I'll be talking about here.

To explain light and how it relates to color vision, I like to make an analogy between light and sound. Sound is made up of pressure waves (typically in the air) of various frequencies and amplitudes. For example, the wave of a single pure tone looks like this:

Graph of a tone

A louder version of that same tone looks like this:

Graph of a louder tone

And a higher-pitched version looks like this:

Graph of a higher-pitched tone

However, most sounds aren't a single pure tone. Most sounds have many tones of varying pitch and loudness layered on top of each other. This is what produces the rich harmonies and textures we experience in sounds. For example, here is a graph of a more complex sound: a chord of three notes being played on a piano:

Wave graph of three piano notes

Quick question: can you tell that there are three notes being played in that graph? Me neither. That's because these waveform-style graphs aren't a good visualization for this kind of discussion. A more useful graph for us is one that visualizes how loud the sound is at each frequency. Here is the same piano sound graphed[1] that way:

Spectrograph of three piano notes

Notice how you can make out the three separate notes, which show up as three spikes on the graph. Neat!
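As an aside for the programmers: that "loudness at each frequency" view is what a Fourier transform gives you. A deliberately naive sketch (a real implementation would use an FFT library):

```rust
// Naive discrete Fourier transform, returning the magnitude
// ("loudness") at each frequency bin up to half the sample count.
// O(n^2), for illustration only.
fn dft_magnitudes(samples: &[f32]) -> Vec<f32> {
    let n = samples.len();
    (0..n / 2)
        .map(|k| {
            let mut re = 0.0f32;
            let mut im = 0.0f32;
            for (t, &s) in samples.iter().enumerate() {
                let angle =
                    -2.0 * std::f32::consts::PI * (k * t) as f32 / n as f32;
                re += s * angle.cos();
                im += s * angle.sin();
            }
            (re * re + im * im).sqrt()
        })
        .collect()
}
```

Feed it a waveform with three tones and you get three spikes, one at each tone's frequency bin.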

Light is also made up of waves, but they are electromagnetic waves rather than pressure waves. Nevertheless, we can graph light in the same way. For example, here is a graph[2] of highly saturated yellow light, showing its energy spiking around the yellow part of the spectrum:

Spectrograph of a pure yellow

With sound, we perceive different frequencies as being different pitches. But with light, we perceive different frequencies as being different colors. If you had a machine that could produce any frequency of light, and had it slowly move through all the frequencies of visible light, you would see it produce all the colors of the rainbow one after another, as shown along the bottom of the graph.

However, like sound, most light isn't made up of a single pure frequency. Most light has a range of frequencies layered on top of each other. Unfortunately, although our ears are capable of hearing multiple frequencies at once, our eyes aren't that good, and are unable to perceive the equivalent of the three piano notes in color. And that brings us to the next topic!

Human Color Perception & Metamers

Our ears have thousands of tiny hairs that are each sensitive to different frequencies, which is what allows our hearing to distinguish so many frequencies at once. Our eyes, on the other hand, only have three types of light sensors for seeing color, which are called "cones". Each type of cone is sensitive to different frequencies of light: one is sensitive to the long wavelengths ("L"), one to the medium wavelengths ("M"), and one to the short wavelengths ("S"). We can graph the sensitivities of each type of cone like this:

Spectrograph of cone sensitivities

The curve for each cone represents how sensitive it is at different frequencies. Importantly, the cones cannot distinguish where within their sensitivity range they are being stimulated. For example, the L cone can't tell if it's being stimulated at its peak or at the far end of its left tail. All it knows is that it's being stimulated somewhere within that range.

Nevertheless, because the sensitivities of the cones overlap, if they all work together, they can triangulate a single spike (or "tone") in the light spectrum. For example, a pure yellow spike would stimulate both the L and M cones roughly equally:

Spectrograph of a pure yellow with cone sensitivities overlaid

So we can make a pretty good guess that when the L and M cones are stimulated equally, the light spectrum is probably spiking at yellow. However, we could also have two spikes:

Spectrograph of a red and green together

And... well, the cones can't really tell the difference between that and the single yellow spike.[3]

This is, in fact, how our digital color displays (such as computer monitors) work. I imagine you already know that they use red, green, and blue lights at different intensities to create different colors. But what may not have occurred to you is that the reason that works is that all humans are color blind.

When a computer monitor turns on the red and green emitters of a pixel to display yellow, it's taking advantage of the fact that we can't tell the difference between a harmony of red + green and an actual pure yellow. It is fooling our eyes into seeing a yellow that isn't really there. If our eyes were as good as our ears, they wouldn't be fooled—we'd see both the red and green simultaneously, not yellow.

The big take-away here is that there are a lot of light spectrums that our eyes cannot distinguish. And there is a name for that phenomenon: two or more different light spectrums that appear the same to a given observer are called metamers.
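A toy numeric version of this, with made-up sensitivity curves over just four spectral bins (none of these numbers are physiological; they're chosen only so the arithmetic is easy to check):

```rust
// Cone response as a weighted sum of spectral energy: a discrete
// stand-in for integrating a light spectrum against a cone's
// sensitivity curve.
fn cone_response(spectrum: &[f32; 4], sensitivity: &[f32; 4]) -> f32 {
    spectrum.iter().zip(sensitivity).map(|(s, c)| s * c).sum()
}

// Made-up overlapping sensitivity curves over four coarse bins,
// shortest wavelengths first.
const S_CONE: [f32; 4] = [1.0, 1.0, 0.0, 0.0];
const M_CONE: [f32; 4] = [0.0, 1.0, 1.0, 0.0];
const L_CONE: [f32; 4] = [0.0, 0.0, 1.0, 1.0];
```

Under these toy curves, the spectra `[1, 1, 2, 1]` and `[2, 0, 3, 0]` produce identical S, M, and L responses even though they're clearly different light: a metamer pair by construction.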

The difference between color blind people and people with normal color vision isn't the existence of metamers, it's the quantity of metamers and which spectrums are metamers. To drive that point home: there are animals such as the mantis shrimp that have far, far better color vision than us humans do, because the mantis shrimp has not just three color sensors in its eyes but at least twelve. Compared to the mantis shrimp, normal color vision people are horribly, horribly color blind—only slightly less so than people with impaired color vision.

This, of course, makes me feel a little better as a color blind person. But the reason it's actually relevant to discussion of color management, 3d rendering, etc. is because it highlights the difference between physical light and human color perception.

Light spectrums are a well-defined physical phenomenon: you can measure them, you can make a graph, and that is simply what the light spectrum is. Color, on the other hand, is a perceptual experience caused by light. Light doesn't have a color; it is perceived as a color by a given observer, and that perception depends on the particular observer. You can still measure and graph color, but only on a per-observer basis.

Wrapping Up

If I've done a good job explaining things, then hopefully you've grasped everything here reasonably well. The concepts I've just introduced are, I believe, critical for understanding color perception and color management, so if any of this was unclear please let me know!

Lastly, for the actual color scientists out there: I realize that I've glossed over some things here. Sorry about that. Nevertheless, I hope you think this is a good introduction to the topic.


This isn't super important, but it's a fun little fact. I said earlier that our eyes can't see harmonies in light spectrums, but that's not quite true. There is one harmonic color that we can see (albeit a very simple one): purple.

We see purple when there is energy in the long and short ends of the visible spectrum, but not the medium length part, like this:

Spectrograph of a red and blue together

Our eyes can distinguish that because both the L and S cones will be strongly stimulated, but not the M cone. So for anyone whose favorite color is purple: congratulations, it's a very special color!


  1. For the pedantic, the data for this graph has actually been further processed to isolate the fundamental frequencies of the piano. But the principle stands.

  2. Most of the graphs in this post are just approximated by hand, so please don't take them as precise in any way. That's also why I (intentionally) left out the scales of the axes.

  3. The astute among you might notice that the S cone actually gets stimulated a little, so the red-green combination isn't quite identical to the pure yellow in how it stimulates the cones. However, if you just add a little bit of energy everywhere in the pure yellow graph (making it a little less saturated of a yellow) then it would be. This is the first hint of something fairly interesting, which is that an RGB display can't reproduce all the fully saturated colors, even if the three RGB colors are themselves fully saturated.

2018-11-21

Random Notes on Future Direction

This post is not going to be especially coherent. I wanted to jot down some thoughts about the future direction of Psychopath, given the decisions in my previous post. Normally I would do this in a random text file on my computer, but I thought it might be interesting to others if I just posted it to the blog instead. It should be fun to check back on this later and see which bits actually ended up in Psychopath.

Things to Keep

There are some things about Psychopath's existing implementation (or plans for implementation) that I definitely want to preserve:

  • Spectral rendering via hero wavelength sampling. I'm increasingly convinced that spectral rendering is simply the correct way to handle color in a (physically based) renderer. Not even for spectral effects, just for correct color handling.

  • Curved surface rendering and displacements should be "perfect" for most intents and purposes. Users should only ever have to deal with per-object tessellation settings in exceptional circumstances.

  • Everything should be motion-blurrable. This is a renderer built for animation.

  • Full hierarchical instancing.

  • Efficient sampling of massive numbers of lights.

  • Filter Importance Sampling for pixel filtering. This simplifies a lot of things, and I've also heard rumors that it makes denoising easier because it keeps pixels statistically independent of each other. It does limit the pixel filters that can be used, but I really don't think that's a problem in practice.

Shade-Before-Hit Architecture

Following in Manuka's footsteps will involve some challenges. Below are some thoughts about tackling some of them.

  • All processing that involves data inter-dependence should be done before rendering starts. For example, a subdivision surface should be processed in such a way that each individual patch can be handled completely independently.

  • Maybe even do all splitting before rendering starts, and then the only thing left at render time is (at most) dicing and shading. Not sure yet about this one.

  • Use either DiagSplit or FracSplit for splitting and tessellation (probably). DiagSplit is faster, but FracSplit would better avoid popping artifacts in the face of magnification. Both approaches avoid patch inter-dependence for dicing, which is the most important thing.

  • Since we'll be storing shading information at all micropolygon vertices, we'll want compact representations for that data:

    • For data that is constant across the surface, maybe store it at the patch or grid level. That way only textured parameters have to be stored per-vertex.

    • We'll need a way to compactly serialize which surface closures are on a surface and how they should be mixed/layered. Probably specifying the structure on the patch/grid level, and then the textured inputs on the vertex level would make sense...? Whatever it is, it needs to be both reasonably compact and reasonably efficient to encode/decode.

    • For tristimulus color values, using something like the XYZE format (a variant of RGBE) to store each color in 32 bits might work well.

    • It would be good to have multiple "types" of color. For example: XYZ for tristimulus inputs, a temperature and energy multiplier for blackbody radiation, and a pointer to a shared buffer for full spectral data. More color types could be added as needed. This would provide a lot of flexibility for specifying colors in various useful ways. Maybe even have options for different tristimulus spectral upsampling methods?

    • For shading normals, using the Oct32 encoding from "Survey of Efficient Representations for Independent Unit Vectors" seems like a good trade-off between size, performance, and accuracy.

    • For scalar inputs, probably just use straight 16-bit floats.

  • The camera-based dicing-rate heuristic is going to be really important:

    • It can't necessarily be based on a standard perspective camera projection: supporting distorted lens models like fisheye in the future would be great.

    • It needs to take into account instancing: ideally we want only one diced representation of an instance. Accounting for all instance transforms robustly might be challenging (e.g. one instance may have one side of the model closer to the camera, while another may have a different side of the same model closer).

    • It needs to account for camera motion blur (and motion blur in general) in some reasonable way. This probably isn't as important to get robustly "correct" because anything it would significantly affect would also be significantly blurred. But having it make at least reasonable decisions so that artifacts aren't visually noticeable is important.

  • Having a "dial" of sorts that allows the user to choose between all-up-front processing (like Manuka) and on-the-fly-while-rendering processing (closer to what the C++ Psychopath did) could be interesting. Such a dial could be used e.g. in combination with large ray batches to render scenes that would otherwise be too large for memory. What exactly this dial would look like, and what its semantics would be, however, I have no idea yet.
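On the compact color representation mentioned above, the shared-exponent idea behind RGBE/XYZE can be sketched like this. This is a simplified illustration of the concept, not the exact Radiance RGBE bit layout:

```rust
// Toy shared-exponent color encoding: three 8-bit mantissas plus
// one shared 8-bit biased exponent, packed into 32 bits.
fn encode_shared_exp(c: [f32; 3]) -> [u8; 4] {
    let max = c[0].max(c[1]).max(c[2]);
    if max <= 0.0 {
        return [0, 0, 0, 0];
    }
    // Find e such that max = f * 2^e with f in [0.5, 1).
    let e = max.log2().floor() as i32 + 1;
    // Mantissa scale: value * 2^(8 - e) maps [0, 2^e) onto [0, 256).
    let scale = (2.0f32).powi(8 - e);
    [
        (c[0] * scale) as u8,
        (c[1] * scale) as u8,
        (c[2] * scale) as u8,
        (e + 128) as u8, // toy: assumes the biased exponent fits a byte
    ]
}

fn decode_shared_exp(rgbe: [u8; 4]) -> [f32; 3] {
    if rgbe == [0, 0, 0, 0] {
        return [0.0; 3];
    }
    let e = rgbe[3] as i32 - 128;
    let scale = (2.0f32).powi(e - 8);
    [
        rgbe[0] as f32 * scale,
        rgbe[1] as f32 * scale,
        rgbe[2] as f32 * scale,
    ]
}
```

The trade-off is the usual one for shared-exponent formats: channels much dimmer than the brightest channel lose precision, which is generally acceptable for scene-referred color data.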

Interactive Rendering

Manuka's shade-before-hit architecture necessitates a long pre-processing step of the scene before you can see the first image. Supporting interactive rendering in Psychopath is something I would like to do if possible. Some thoughts:

  • Pretty much all ray tracing renderers require pre-processing before rendering starts (tessellation, displacement, BVH builds, etc.). For interactive rendering they just maintain a lot of that state, re-doing only the minimum work needed to update the render. Even with a shade-before-hit architecture, this should be feasible.

  • Having a "camera" that defines the dicing rates separately from the interactive viewport will probably be necessary, so that the whole scene doesn't have to be re-diced and re-shaded whenever the user changes their view. That separate dicing "camera" doesn't have to be a real projective camera; there could be dicing "cameras" designed specifically for various interactive rendering use-cases.

Color Management

Psychopath is a spectral renderer, and I intend for it to stay that way. Handling color correctly is really important to my goals for Psychopath, and I believe spectral rendering is key to that goal. But there is more to correct color management than just that.

  • Handling tristimulus spaces with something like Open Color IO would be good to do at some point, but I don't think now is the time to put the energy into that. A better focus right now, I think, is simply to properly handle a curated set of tristimulus color spaces (including e.g. sRGB and ACES AP0 & AP1).

  • Handling custom tone mapping, color grading, etc. will be intentionally outside the scope of Psychopath. Psychopath will produce HDR data which can then be further processed by other applications later in the pipeline. In other words, keep Psychopath's scope in the color pipeline minimal. (However, being able to specify some kind of tone mapping for e.g. better adaptive sampling may be important. Not sure how best to approach that yet.)

  • Even with it being out of scope, having a single reasonable tone mapper for preview purposes only (e.g. when rendering directly to PNG files) is probably reasonable. If I recall correctly, ACES specifies a tone mapping operator. If that's the case, using their operator is likely a good choice. However, I haven't investigated this properly yet.

  • One of the benefits of spectral rendering is that you can properly simulate arbitrary sensor responses. Manuka, for example, supports providing spectral response data for arbitrary camera sensors so that VFX rendering can precisely match the footage it's going to be integrated with. I've thought about this for quite some time as well (even before reading the Manuka paper), and I would like to support this. This sort of overlaps with tone mapping, but both the purpose and tech are different.

2018-11-13

A Different Approach

I've been away from Psychopath for a while working on other projects, but recently I stumbled upon a blog post by Yining Karl Li, entitled "Mipmapping with Bidirectional Techniques". In it, he describes his solution to a problem I've been pondering for a while: how to handle texture filtering in the context of bidirectional light transport.

The problem essentially comes down to this: texture filtering should be done with respect to the projected area on screen, but when tracing rays starting from a light source you don't know what that projected area is going to be yet. In Psychopath's case it's even worse because it applies not just to texture filtering but also to dicing rates for geometry. In Psychopath you can't even calculate ray intersections without a filter width. So this problem has been bugging me for a while.

Yining explores an approach to this problem that I've also considered, which is to have the filter width simply be independent of the rays being traced. In other words: screw ray differentials, just use camera-based heuristics. The benefit of this approach is that every point on every surface has a single well-defined filter width, regardless of where rays are coming from or how they're being generated. The downside is that there are circumstances (such as magnification through a lens) where those filters become too blurry.

These are all things I've thought about before, and I've gone back and forth many times about how I want to approach this. However, Yining's post also linked to a paper from Weta Digital about their renderer Manuka. And that sent me down a rabbit hole that has me reconsidering how I want Psychopath's entire architecture to work.

Tying Geometry and Shading Together

There are a lot of cool things about Manuka, but the one that stuck out to me—and the one that has me reconsidering a lot of things—is how they handle shading.

Generally speaking, ray tracing renderers do shading when a ray hits something. But Manuka takes a radically different approach. Manuka does all of its shading before rendering even starts, by dicing all geometry into sub-pixel polygons and calculating shading at the vertices.

If you're at all familiar with the Reyes rendering architecture, that should sound really familiar. The difference is that instead of baking colors into the geometry like in Reyes, they bake surface closures. This means that light transport is still calculated with path tracing, but all texture lookups etc. are done up-front and baked into the geometry.
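In data-structure terms, the idea might look something like this. All names here are invented for illustration; this is just to make "closures baked at the vertices" concrete:

```rust
// Toy sketch: geometry diced into a grid of sub-pixel polygons,
// with a surface closure baked at every vertex before any rays are
// traced. Light transport then works from this data directly, with
// no shader evaluation or texture lookups at ray-hit time.
enum Closure {
    Lambert { albedo: [f32; 3] },
    Ggx { albedo: [f32; 3], roughness: f32 },
}

struct DicedGrid {
    res_u: usize,
    res_v: usize,
    positions: Vec<[f32; 3]>, // res_u * res_v vertices
    closures: Vec<Closure>,   // one baked closure per vertex
}

impl DicedGrid {
    fn vertex_count(&self) -> usize {
        self.res_u * self.res_v
    }
}
```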

Honestly, I think this is genius. There's an elegance to essentially saying, "geometry and shading are the same thing". It's an elegance that Reyes had, but that I failed to reproduce in Psychopath—even in the early days when I was essentially trying to do path traced Reyes.

One of the fantastic outcomes of this approach is that the scene has a clear static definition regardless of what rays are being traced. This handily solves the bidirectional texture filtering problem, as well as Psychopath's related bidirectional dicing problem. It also, of course, has the aforementioned problem with magnification. But I think that's a pretty reasonable trade-off as long as you provide ways for people to work around it when needed.

Having said all of this, there are other trade-offs that Manuka makes that I'm not so inclined to reproduce in Psychopath. Specifically, Manuka literally does all of their dicing and shading up-front and keeps it all in memory to be traced against. I think their reasons for doing that are very sound, but that just doesn't seem interesting to me. So I would still like to explore a more dynamic approach, closer to what I've already been doing.

A New Architecture

So with all of that background out of the way, this is roughly the architecture I am now envisioning for Psychopath:

  • Scene data is effectively statically defined: dicing rates, texture filters, etc. are all determined independent of the rays being traced, and are therefore consistent between all rays.

  • Even though the scene data is conceptually static, it can still be computed dynamically, as long as the result is deterministic and independent of the rays being traced.

  • Geometry and shading are the same thing: shading is defined at the same points that geometry is defined, and should generally be sub-pixel in size.

The specifics of how these decisions are going to play out are still a bit unknown to me. But it's what I'll be working towards and experimenting with for Psychopath's next steps (albeit slowly, as my time is somewhat limited these days).

This is also exciting to me because in some sense this is getting back to Psychopath's original roots. I started the project because I wanted to see if I could merge Reyes rendering and path tracing in a useful way. I've made a lot of detours since then (many of them interesting and worthwhile), but fundamentally I think that's still the idea that intrigues me most. And I think this is a good step in that direction.

Lastly: a shout out to Johannes Hanika, who co-authored the Manuka paper, and somehow seems to be involved in most of the papers I've found genuinely inspiring over the last several years. If you're reading this, Johannes, I would love to buy you a beer!