Introduction

   In a previous blog post, we briefly discussed ionospheric physics, specifically focusing on the ionosphere’s composition and fluidity. One particularly difficult aspect of the ionosphere is that despite being a massive churning shell of plasma looming over our heads, it is absolutely invisible to the naked eye. As scientists, we are able to wield radio frequency (RF) emitters as a sort of measuring stick. By recording when we launch an RF signal and when (and where!) we detect the same RF signal, we can estimate the geometry of the various layers the RF waves must have reflected off of in order to reach our receiver.

   The data we collect during these experiments comes together in a two-dimensional plot called an “ionogram”. A sample ionogram is illustrated below (left); with an x-axis of Frequency [MHz] and a y-axis of Range [km], every line in the image tells us something about the environment through which our RF wave travels. After removing the noise (the vertical and horizontal lines), bits and pieces of our signal — the “nike swoosh”-shaped curves — can be extracted as illustrated below (right).

Raw data vs. cleaned ionogram

Digging deeper

   But the more you think about the RF environment, the crazier and more complex things become. For instance, the signals illustrated in the two-dimensional plots above actually change in time; as we continue to send and receive signals, the location, length, and shape of the signals will change as a function of time of day, time of year, solar cycle, location, temperature, and many other parameters. So when we step back to look at the full picture, we’re actually dealing with a hypercube of data described by three axes: Frequency, Range, and Time.

   To be fair, the complexity involved in processing just two of these axes (Frequency, Range) has occupied scientists for years. From that work, signal strength correlations can be drawn to the Sun’s solar cycles, large solar flares, planetary magnetic fields, and other cosmic phenomena. In fact, with more care and algorithm development, each of the signals in the ionogram above (right) can be discriminated, interpolated, and grouped based on the known physics of RF wave propagation (below, right).

Additional ionogram processing

 And once these connections are made, additional information about the ionosphere’s geometry becomes apparent, and we can further tweak and improve our models of the ionosphere’s topology. Though a very rewarding and interesting journey, the loop of gathering data, cleaning data, updating the ionosphere model, and refining data collection techniques can become a vicious cycle. That cycle can also blind us to the possibility that there is another way to look at the problem altogether, using data we’ve already collected.

Changing perspective

   Check out the illustration below, borrowed from Peter Beshai’s website. Although the simulated shape is solid, the message “conveyed” by the shadow changes drastically depending on how the shape is illuminated.

Godel Escher Bach Letter Cube

In a similar way, we can look at different “faces” of our data cube to gather a different interpretation of the ionogram data (or any other similarly complex data). When doing so, it’s not enough to simply examine one face of the cube; keep in mind that a lot of our ionogram data has a very low value (~0), so it may be easier to visualize if we squash the depth dimension via summation (integration).

   Returning to our description of the ionogram hypercube we’ve collected, we identified axes of Frequency, Range, and Time. The images we’ve already seen show Frequency on the x-axis and Range on the y-axis, meaning we’ve integrated over Time along the z-axis. Physically this makes sense, because ionogram collections are typically taken over a period of time, yet we get only a single two-dimensional plot.
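As a minimal sketch of this idea (using a synthetic array in place of real instrument data, with illustrative names and dimensions), summing the hypercube along one axis collapses it onto one face. Summing over Time yields the conventional Frequency-by-Range ionogram; summing over a different axis yields a different face:

```python
import numpy as np

# Illustrative hypercube: 100 frequency bins x 200 range gates x 50 time steps.
# A real ionogram hypercube would be loaded from recorded instrument data.
rng = np.random.default_rng(0)
cube = rng.random((100, 200, 50))  # axes: (frequency, range, time)

# Conventional view: integrate (sum) over the Time axis,
# leaving a single Frequency x Range image.
freq_vs_range = cube.sum(axis=2)
print(freq_vs_range.shape)  # (100, 200)

# A different face: integrate over Frequency instead,
# leaving a Range x Time image.
range_vs_time = cube.sum(axis=0)
print(range_vs_time.shape)  # (200, 50)
```

The same `cube.sum(axis=...)` call, with a different axis argument, is all it takes to rotate the cube and inspect another face.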

   Looking at another face, we might place Time on the x-axis, drifting from left to right as time progresses. If we keep Range along the y-axis and integrate along Frequency, we might see something that looks like the following.

Looking along a different face of the hypercube

   But what does this mean? What does it tell us? First and foremost, we notice the strongest signals are near the bottom of the Group Path vs. Time window — this makes sense, because the strongest signals often come at the lowest part of the ionograms and tend to represent reflections from the D or E layers of the ionosphere. Reading the image left to right, we also see that the signals are not constant in time. Instead, they tend to wiggle, and this also makes sense: as conditions of the atmosphere change, the shape of the ionosphere will change, forcing layers of the ionosphere to rise or fall in altitude. In turn, the Group Path (or Range) will change because the reflectance conditions have changed.

   And there’s one more hidden caveat to this dataset. When it was recorded, both the transmitter and receiver were stationary with respect to each other. If this were not the case, the Group Path would vary considerably more as time passed. In that case, changes in the Group Path could actually be correlated to how the distance between the transmitter and receiver was changing over time. For instance, if the Group Path were increasing over time, we could assume the transmitter and receiver were moving away from each other, and the slope of the Group Path over time would give the rate of their separation. That is, we could back out physics-based intuition about the movement of the apparatus by looking at the ionograms! But the message only becomes clear if we look at the data hypercube from a perspective other than the conventional Group Path vs. Frequency plots we began the article discussing.
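To sketch that last idea with hypothetical numbers (not real measurements), fitting a line to Group Path versus Time recovers the rate at which the transmitter and receiver separate:

```python
import numpy as np

# Hypothetical measurements: group path sampled once per second while the
# platforms separate at a constant 0.5 km/s, plus small measurement noise.
t = np.arange(0.0, 60.0, 1.0)  # seconds
rng = np.random.default_rng(1)
group_path = 300.0 + 0.5 * t + rng.normal(0.0, 0.1, t.size)  # km

# The slope of the best-fit line estimates the rate of separation in km/s.
slope, intercept = np.polyfit(t, group_path, 1)
print(f"estimated separation rate: {slope:.2f} km/s")
```

The recovered slope lands close to the 0.5 km/s we built into the synthetic data, which is exactly the physics-based intuition described above: motion of the apparatus shows up as a trend on the Range-vs.-Time face.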

Conclusion

   Looking at data through another perspective can often be a rewarding experience. Even if no new insight is gained, it at least offers the chance to sanity-check an initial interpretation. Furthermore, the same concept can be extended to other datasets as well. For instance, hyperspectral image data also tends to form a hypercube, with X and Y axes (often indicating the physical position of an object in a scene) and a Z axis that corresponds to hundreds of wavelengths. Ask yourself: what would you see if you looked at a hyperspectral data cube along a different face?
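As a thought-starter on that question (again with a purely synthetic cube and illustrative dimensions), the same axis-summing and slicing tricks apply to hyperspectral data:

```python
import numpy as np

# Synthetic hyperspectral cube: 64 x 64 spatial pixels, 200 wavelength bands.
rng = np.random.default_rng(2)
hsi = rng.random((64, 64, 200))  # axes: (x, y, wavelength)

# Conventional face: integrate over wavelength -> one grayscale intensity image.
intensity = hsi.sum(axis=2)  # shape (64, 64)

# A different face: fix one spatial row and look at Y vs. wavelength,
# i.e. how the spectrum varies along that row of the scene.
row_spectra = hsi[32, :, :]  # shape (64, 200)
print(intensity.shape, row_spectra.shape)
```

Just as with the ionogram hypercube, which face you choose determines whether you see a picture of the scene or the spectral fingerprint of what is in it.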