Introduction

   Although not explicitly discussed, a previous blog on detonation modeling made use of multiple software packages to explore optical effects produced by the strong pressure (P) and temperature (T) profiles of explosion shockwaves. Specifically, the article discussed breaking the continuous pressure and temperature profiles of a detonation into discrete individual shells and modeling each shell as a homogeneous hemisphere with a known refractive index. In turn, the refractive index for each (P,T) combination was obtained from physics-based detonation modeling software.
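   To make that shell-discretization step concrete, here is a minimal sketch of the idea. The radial profiles, the shell count, and the Gladstone-Dale/ideal-gas conversion below are all illustrative placeholders standing in for the physics-based detonation code that supplied the real refractive indices in that earlier post.

```python
import numpy as np

# Hypothetical radial pressure/temperature profiles at one instant in time
# (radius in m, pressure in Pa, temperature in K). These are made up purely
# to illustrate the discretization step.
radius = np.linspace(0.1, 5.0, 500)
pressure = 101_325 * (1 + 9 * np.exp(-radius))
temperature = 300 * (1 + 4 * np.exp(-radius))

n_shells = 20
edges = np.linspace(radius.min(), radius.max(), n_shells + 1)

# A simple Gladstone-Dale / ideal-gas approximation stands in for the
# detonation modeling software that would normally provide n(P, T).
K_GD = 2.26e-4   # Gladstone-Dale constant for air, m^3/kg
R_AIR = 287.0    # specific gas constant for air, J/(kg K)

shells = []
for r_in, r_out in zip(edges[:-1], edges[1:]):
    mask = (radius >= r_in) & (radius < r_out)
    p_avg = pressure[mask].mean()
    t_avg = temperature[mask].mean()
    rho = p_avg / (R_AIR * t_avg)   # ideal-gas density, kg/m^3
    n = 1.0 + K_GD * rho            # refractive index for this shell
    shells.append((r_in, r_out, n))

for r_in, r_out, n in shells[:3]:
    print(f"shell {r_in:5.2f}-{r_out:5.2f} m  ->  n = {n:.6f}")
```

   Each (inner radius, outer radius, index) triple then becomes one homogeneous hemisphere in the rendered scene.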

   What wasn’t discussed, however, was why anybody would want to generate a synthetic scene in the first place. In a word, the answer is usually “money”. To illustrate, imagine that you’ve always dreamed of owning a 1974 Plymouth Barracuda but you’ve never been fortunate enough to afford one. Finally, the time comes when your finances align and you’re ready to make your dream purchase — the only decision left is the body color. The stipulation is that you can choose any color you want, but you only get to make the decision once. In an ideal world, you would get to try out multiple colors and ultimately pick the color you jive with most. But in reality, you may pull up pictures of the Barracuda in different colors, compare them side-by-side, and make your selection from those images.

   If that analogy fell flat, here’s another. Imagine you just bought a new camera with incredible zoom, and you’ve written an algorithm to track a known satellite in orbit. As you take your first images, you realize satellites suffer from the pesky quirk of being symmetrical. As such, you’re having difficulty determining which way the satellite is facing. If only the satellite would rotate slightly about the axis of its solar panels! Clearly, you cannot call the parent space agency and ask them to rotate the satellite 30 degrees about your preferred axis (please). Instead, it would be much cheaper and easier to simulate the camera’s imaging system, find a CAD model of the satellite you’re tracking, and rotate the satellite with respect to the camera! This idea is illustrated below.

Synthetic rotation of a camera relative to a satellite
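   The geometric core of that trick is tiny. The sketch below, using NumPy, rotates placeholder CAD vertices 30 degrees about an assumed solar-panel axis; the vertex coordinates and axis direction are hypothetical, and a real pipeline would apply the same matrix to every vertex of the imported model.

```python
import numpy as np

def rotation_matrix(axis, angle_deg):
    """Rotation matrix about an arbitrary axis (Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    theta = np.radians(angle_deg)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# Placeholder vertices of a satellite CAD model (N x 3, body frame).
vertices = np.array([[1.0, 0.0, 0.0],
                     [0.0, 2.0, 0.0],
                     [0.0, 0.0, 0.5]])

# Rotate the model 30 degrees about its solar-panel axis (assumed +Y here)
# instead of asking the parent agency to slew the real satellite.
R = rotation_matrix([0.0, 1.0, 0.0], 30.0)
rotated = vertices @ R.T
print(rotated)
```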

   So what’s really going on here? In order to make a data-based decision, you create (or find) images of the scene you wish to see, because you simply cannot afford to buy your dream vehicle in each color (or as many paint jobs). The same is true for synthetic scene generation. It is often impossible or too expensive to create the scene we really want to see using real objects, but it is much cheaper to generate similar images digitally to inform our real-world choices.

Extending capabilities

   Although highly informative, the previous scene was fairly simple — there was no starfield and the satellite moved as a whole without rotating solar panels. But more terrestrial scenes can become incredibly complex. For example, consider the three scenes below: (left) an aircraft carrier, (center) a gas-emitting warehouse, and (right) a portable rocket launcher driving through the desert.

   The complexity of the first case becomes apparent almost immediately. Let’s stop to think about all the events happening — at the same time! — in the scene. In no particular order, you’ve got: (1) the constant churning of the ocean, (2) the occasional white-capped wave, (3) extremely large rogue waves, (4) the motion of the aircraft carrier, (5) the spinning RADAR system of the aircraft carrier, and (6) any aircraft that may be landing on the aircraft carrier over time. On top of all of this, your camera needs to be correctly placed, the sensor needs to be properly defined, and the camera’s position needs to account for the motion of all the elements in the scene — will the aircraft carrier pass through a fixed field of view, or will your camera follow the aircraft carrier as it sails past?

Ray tracing of a multitude of scenes
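   For the "camera follows the carrier" case, here is a minimal sketch in Blender’s Python API (bpy), one of the ray tracers mentioned later in this post. The object names, positions, and frame numbers are placeholders and assume a scene that already contains a camera and a carrier model.

```python
import bpy

camera = bpy.data.objects["Camera"]
carrier = bpy.data.objects["AircraftCarrier"]   # hypothetical object name

# A Track To constraint keeps the camera's -Z axis (its viewing direction)
# aimed at the carrier no matter how either object is animated.
track = camera.constraints.new(type='TRACK_TO')
track.target = carrier
track.track_axis = 'TRACK_NEGATIVE_Z'
track.up_axis = 'UP_Y'

# Keyframe the carrier's position so it moves through the scene over time.
carrier.location = (0.0, -200.0, 0.0)
carrier.keyframe_insert(data_path="location", frame=1)
carrier.location = (0.0, 200.0, 0.0)
carrier.keyframe_insert(data_path="location", frame=250)
```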

   Even this scene can look like child’s play compared to the center scene. Why? Because the material properties are arguably much more complex than those of the aircraft carrier. For example, no matter how hard the wind blows in the maritime scene, the color of the carrier, water, etc. will never drastically change. But in the gas-emitting warehouse scene, the winds can shift rapidly and funnel down the canyon (or not). In turn, shifting wind patterns change the local density of the gas being emitted by the warehouse. Furthermore, depending on the spectral properties of the gas, the user-defined camera must be sensitive to the wavelengths of light absorbed or scattered by the gas. Otherwise, the camera would never see the gas being emitted in the first place, and the user may begin to question the mechanisms being modeled in their simulation.
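   That spectral mismatch is easy to check before any rendering is done. The sketch below compares a made-up midwave camera band against a made-up gas feature; both curves, the wavelengths, and the threshold are illustrative placeholders, not properties of any real sensor or gas.

```python
import numpy as np

# Hypothetical spectral grid in micrometers.
wavelength = np.linspace(3.0, 12.0, 1000)

# Made-up camera spectral response: a midwave-IR band from 3.0-5.0 um.
camera_response = ((wavelength >= 3.0) & (wavelength <= 5.0)).astype(float)

# Made-up gas absorption/emission feature centered at 10.6 um (longwave IR).
gas_feature = np.exp(-0.5 * ((wavelength - 10.6) / 0.2) ** 2)

# If the in-band signal is negligible, the camera will never "see" the plume,
# no matter how the winds move the gas around the canyon.
in_band_signal = np.trapz(camera_response * gas_feature, wavelength)
print(f"in-band gas signal: {in_band_signal:.4f}")
if in_band_signal < 1e-3:
    print("Camera band does not overlap the gas feature; pick a different sensor.")
```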

   The third scene is likely the most tame of the three. As the rocket launcher drives along the road, the launcher and its rotating wheels are essentially the only moving elements in the scene (assuming the wind is not strong enough to make the plants sway). The road and plants within the scene do not change, unlike the waves surrounding the aircraft carrier.

   It’s the variety across these scenes, however, that is breathtaking. By downloading three CAD models, three completely different scenes can be modeled in far less time (and at a fraction of the cost!) than it would take to have each of these assets exist in the real world and capture images of them in similar states.

Resources

   In order to create such scenes, then, one would need to obtain ray tracing software (e.g., Blender, DIRSIG, etc.), high-fidelity CAD models, and some way to attribute each CAD model with the proper materials within the scene. Luckily, the most visual aspect of this (the CAD models) is readily available at websites like NASA’s 3D resources page, GrabCAD, TurboSquid, or any number of other CAD model websites. For what it’s worth, DIRSIG is also free after taking a training course to learn how to properly use the software.
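   As a rough illustration of the "import a model, then attribute it with materials" step, here is a minimal Blender (bpy) sketch. The file path, object selection, and material values are placeholders; this is one possible workflow under those assumptions, not a prescribed one.

```python
import bpy

# Import an OBJ downloaded from GrabCAD, TurboSquid, etc.
# (Recent Blender versions use wm.obj_import; older ones used import_scene.obj.)
bpy.ops.wm.obj_import(filepath="/path/to/satellite.obj")
obj = bpy.context.selected_objects[0]   # assumes the import selected one object

# Build a simple material and assign it to the imported model.
mat = bpy.data.materials.new(name="SolarPanelGlass")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Metallic"].default_value = 0.8
bsdf.inputs["Roughness"].default_value = 0.2

if obj.data.materials:
    obj.data.materials[0] = mat
else:
    obj.data.materials.append(mat)
```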

Conclusion

   The ability to accurately create and model a synthetically generated scene goes beyond making pretty pictures. It enables us to test new hardware, from camera sensors to novel lens elements. It enables us to calibrate our look angles before arriving at an experiment, and it decreases our chance of blowing a very costly data collect. Synthetic scene generation has not only produced countless pretty images; it is a skill that has personally served me well on more than one occasion, even when my initial job description did not ask for it.

   More importantly, scene generation has enabled me to convey messages and concepts in single images and videos, many of which drove home the point I was trying to make and shared my vision with the customer when words simply would not suffice. Given the abundance of free resources surrounding this topic, scene generation is a skill I would highly recommend others learn.