Our blog post on synthetic scene generation discussed the power of modeling realistic objects like terrain and vehicles, applying material attribution, and using the objects within a scene to simulate the physics of a highly specific scenario. A second blog article examined the many ways to process and analyze an evolving multi-dimensional dataset: by looking at a hypercube of data from a different "angle," we saw that we could develop additional intuition about the data. The intuition gained from each new perspective could then inform other decisions or analyses we were already planning to execute.
This blog post combines both concepts. Using a simulated gas plume, we'll examine how a highly complex, evolving dataset must be viewed from different angles in order to gain a more complete picture of the scene we've developed.
Viewing the voxels
Consider a highly simplified situation like the one shown above. The three-part image illustrates an unmoving warehouse on an infinite plane, and the warehouse is emitting a plume of gas. Computationally, the gas is modeled as a voxelized plume that evolves over time: the earliest time is the left-most image, while the latest time is the right-most image. Furthermore, the plume is driven by wind strength, direction, and duration. For the time being, imagine that the plume outflow rate, wind speed, and wind direction are constant.
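To make the constant-wind assumption concrete, here is a minimal sketch of how such a plume evolves: each timestep, the existing gas field is advected by a fixed wind vector and fresh gas is injected at the source. The grid size, wind vector, and emission rate are illustrative assumptions, not DIRSIG's actual OpenVDB model.

```python
import numpy as np

def step(plume, wind=(0, 1), emit_at=(16, 2), rate=1.0):
    """Advance one timestep: shift the field by the wind, then emit at the source.

    Hypothetical toy model: a 2D grid stands in for the full 3D voxel field.
    """
    plume = np.roll(plume, shift=wind, axis=(0, 1))
    plume[..., : wind[1]] = 0.0  # nothing blows in from the upwind edge
    plume[emit_at] += rate       # constant outflow at the stack
    return plume

plume = np.zeros((32, 32))
for _ in range(5):
    plume = step(plume)

print(float(plume.sum()))  # total gas emitted so far drifts downwind
```

With constant wind and a constant outflow rate, the oldest gas sits farthest downwind and each step adds one source-worth of new gas, which is exactly the left-to-right growth seen across the three-part image.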
How is such a simulation possible in the first place? Ultimately we need two components: a plume model and a ray tracer. Interestingly, DIRSIG can handle both, combining its OpenVDB plume modeling plugin with its native ray tracing and visualization capabilities. Although the initial plume may seem ugly, keep in mind that we're looking at the voxelized view of the plume: a view where each voxel (or three-dimensional pixel) is completely opaque and white wherever gas molecules are calculated to exist. At any 3D coordinate where no gas exists, no white block is shown.
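The opaque-white voxel view amounts to thresholding a continuous density field into an occupancy grid. Here is a minimal sketch of that idea; the grid resolution, the Gaussian "blob" standing in for a plume export, and the threshold value are all assumptions for illustration.

```python
import numpy as np

# Fake a smooth 3D density field (a stand-in for an OpenVDB plume export).
x, y, z = np.meshgrid(*(np.linspace(-1, 1, 32),) * 3, indexing="ij")
density = np.exp(-(x**2 + y**2 + z**2) / 0.2)  # blob centered at the origin

# A voxel is drawn (opaque white) wherever density exceeds a threshold;
# elsewhere no block is shown at all.
occupied = density > 0.1

print(occupied.shape)       # the voxel grid
print(int(occupied.sum()))  # number of white voxels that would be drawn
```

Everything below the threshold simply vanishes, which is why the voxel view looks blocky and binary compared to the ray-traced rendering discussed next.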
The voxelized view is just one way of looking at the simulation results. If we assign chemical properties to each voxel (e.g., gas density per voxel) and apply ray tracing, the scene becomes much more accurate and interpretable. The difference between the voxelized view and the ray-traced view is illustrated above; as you've likely guessed, the voxelized view is shown on the left, and the ray-traced view is shown on the right.
Computationally, however, it's much more difficult to work with the full physics model, so for the moment we'll return to the voxelization paradigm. Consider a single stagnant plume isolated from its environment. The three portions of the image illustrate the plume from three different viewpoints: against (left) the XZ plane, (center) the XY plane, and (right) the YZ plane. The red dashed line passing through the plume illustrates a single line of sight, common across all three viewpoints. If it helps you picture the plume in your mind, think of the left-most image as the "front" view (with the plume coming toward you), the center image as the "top-down" view, and the right-most image as the "side" view as the plume drifts left-to-right with the wind from a ground-based factory resting in the XY plane.
With a line of sight (LoS) defined, the particle density through the plume can be plotted along a single axis for each dimension through which the line travels. For instance, if gas density were plotted against the LoS's X-components, the left-most plot of the following illustration would result. Keep in mind that as we do this, the line will seldom hit (or pass very near) a gas molecule; when it does, though, we'd expect it to happen somewhat periodically, and we'd also expect the density count to jump at these locations (even though, when integrated over the total nearby volume, the overall gas density is relatively low). This is exactly what we see in the left-most plot.
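Sampling density along a LoS can be sketched as marching a set of points from one end of the ray to the other and reading the nearest voxel at each point. The grid, the synthetic "layered" plume, and the nearest-voxel sampling (rather than trilinear interpolation) are assumptions for illustration.

```python
import numpy as np

def density_along_los(grid, start, end, n_samples=200):
    """Return (x_coords, densities) sampled at n points between start and end."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    t = np.linspace(0.0, 1.0, n_samples)
    points = start + t[:, None] * (end - start)  # points along the LoS
    idx = np.clip(np.round(points).astype(int), 0,
                  np.array(grid.shape) - 1)      # nearest-voxel lookup
    dens = grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return points[:, 0], dens                    # X-components, density

# Toy plume: gas concentrated in periodic Y "layers" of molecules.
grid = np.zeros((32, 32, 32))
grid[:, ::4, :] = 1.0

x, d = density_along_los(grid, (0, 0, 16), (31, 31, 16))
print(d.max(), d.min())  # spikes where the LoS crosses a layer, zeros between
```

The resulting profile is mostly zero with periodic jumps where the ray crosses a layer, mirroring the spiky density plots described above.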
Similarly, the density of the gas plume along the LoS in the Y-direction (traveling left to right) follows the trend shown in the center image of the previous illustration. In that image, the line begins near the origin and experiences low density, then hits a "wall" where the gas density suddenly increases; this is followed by a stretch where the density greatly decreases, then increases again dramatically. Continuing to trace the LoS to the right, the line passes through a second pocket of lower density before climbing one final time and staying high until the line terminates. This is the same line plotted in the center image of the following illustration. Again, we see the periodic spikes in gas density where the line travels between the layers of gas molecules. From this data, we can extract the peaks and interpolate between the peak points to obtain the green line.
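The peak-envelope step can be sketched as follows: locate the periodic spikes along the LoS, then linearly interpolate between the peak values to recover the smooth envelope (the "green line"). The synthetic profile below, with spikes every 25 samples under a Gaussian envelope, is an assumption standing in for the real LoS data.

```python
import numpy as np

y = np.linspace(0.0, 10.0, 501)
envelope_true = np.exp(-((y - 6.0) ** 2) / 4.0)  # slowly varying gas density

# LoS profile: near zero between gas layers, spiking where the LoS crosses one.
profile = np.zeros_like(y)
profile[::25] = envelope_true[::25]  # periodic layer crossings

peak_idx = np.flatnonzero(profile > 0)                   # spike locations
envelope = np.interp(y, y[peak_idx], profile[peak_idx])  # the "green line"

err = np.max(np.abs(envelope - envelope_true))
print(err)  # small: the peaks alone recover the underlying density trend
```

Because the spikes sample the underlying density at their peaks, connecting them reconstructs the trend that the raw, spiky profile obscures.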
The same analysis could be applied to the third plot, which illustrates the Z-components of the LoS gas density, but it is excluded here for brevity.
Applications to ray tracing
So far, we’ve discussed plume evolution and size characterization over time. We’ve talked about voxels and gas densities, but we have yet to discuss why any of this matters in terms of scene generation, simulation, or analysis. To illustrate, consider that the plume itself could comprise any gaseous material. For instance, some warehouses with ‘smoke stacks’ produce steam and pump it into the atmosphere. This, of course, is mostly water and harms neither the environment nor the people living nearby.
But other, unfriendlier products can yield far more lethal byproducts such as hydrogen fluoride (HF) gas. Like any other gas, HF has a characteristic absorption curve that varies with wavelength (illustrated below). Within the plot, I’ve added arrows at two wavelength values. The first arrow indicates a point of strong molecular absorption, and the second points at a nearby wavelength where the molecular absorption is nearly non-existent.
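The effect of those two wavelengths can be sketched with the Beer-Lambert law, T = exp(-k·c·L), which gives the fraction of light surviving a pass through an absorbing gas. The absorption coefficients, concentration, and path length below are made-up stand-ins for HF's strong "on" band and weak "off" band, not measured values.

```python
import math

def transmission(k_absorb, concentration, path_length):
    """Fraction of light surviving a pass through the plume (Beer-Lambert)."""
    return math.exp(-k_absorb * concentration * path_length)

c, L = 0.5, 10.0                  # assumed concentration and path length
t_on = transmission(1.0, c, L)    # strong absorption: plume appears dark
t_off = transmission(0.001, c, L) # near-zero absorption: plume nearly invisible

print(t_on, t_off)
```

At the strongly absorbing wavelength almost no light makes it through the plume, while at the neighboring wavelength nearly all of it does, which is exactly the contrast the next section exploits.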
Why does this matter? Because the spectral response of the camera’s sensor determines what the camera “sees”; in turn, the image the camera sees determines the image that is processed by our image processing algorithms. So if our sensor is tuned specifically to the second indicated wavelength, our camera will hardly see any plume at all. But if the sensor is tuned to image the first wavelength, the molecular absorption of an HF plume will cause the plume to stand out prominently against the scene’s background, no matter how cluttered. We can even use background estimation to isolate the plume from the unmoving background and better analyze its spatiotemporal evolution.
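The background-estimation idea can be sketched simply: with a static scene, the per-pixel median over time approximates the unmoving background, and subtracting it isolates the evolving plume. The frame size, noise level, and the rectangular "plume" drifting across the frame are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
background = rng.uniform(0.2, 0.8, size=(64, 64))  # cluttered static scene

frames = []
for t in range(10):
    frame = background + rng.normal(0, 0.01, size=(64, 64))  # sensor noise
    frame[10:20, 5 + 3 * t : 10 + 3 * t] += 0.5              # drifting plume
    frames.append(frame)
frames = np.stack(frames)

# The plume keeps moving, so the temporal median at each pixel is dominated
# by the static background and effectively ignores the plume.
bg_est = np.median(frames, axis=0)
plume_mask = (frames[-1] - bg_est) > 0.25  # plume isolated in the last frame

print(int(plume_mask.sum()))  # pixels flagged as plume
```

Even against a heavily cluttered background, the differenced frame leaves only the plume, ready for spatiotemporal analysis.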
Gaseous plumes and plume analysis have always fascinated me. Much like a fire or a view of the ocean, the ever-changing nature of a gas plume is something I find both relaxing and intriguing. On the one hand, it is oddly comforting to watch a plume sway and dance with changes in its surroundings. On the other, plumes can be incredibly difficult to capture with a camera and process with image processing algorithms.
In this brief discussion, we’ve touched on how plumes can be simulated in simple circumstances (e.g., infinite flat planes) and gradually built up to more complex scenarios (e.g., an HF-producing warehouse in a realistic valley with wind funneling). We’ve discussed the different ways in which we can look at a plume, illustrating both a gas density view and a voxelized view. We then moved beyond the gas density perspective to consider the specific gas species we want to image and analyze, and how its spectral characteristics arise from the molecular composition of the gas itself.
All in all, plume analysis can be very complex and time-consuming. It can also be very rewarding when you finally begin to grasp the true nature of such a fickle phenomenon.