Over at /g/BITX20/topic/ubitx_v6_specifications/82557868 I saw some folks wondering how in the world something could be damaged by undervoltage, since as we all know, when things are turned off, they're at 0V. It's pretty obvious to most people that high voltage can cause all sorts of damage - sparks, heat, and the "magic smoke" - but the ways that undervoltage causes damage are a bit more subtle. In this rather lengthy post, I'll hopefully provide enough background to explain why undervoltage can be problematic, and give a couple scenarios where it could cause damage. While I do include a decent amount of detail, this is by no means meant to be a comprehensive guide.
TOO LONG; DIDN'T READ
Undervoltage can cause undefined behavior and partially-on transistors, which can put your device into bad states and operating conditions, temporarily or permanently.
THE ANALOG NATURE OF DIGITAL LOGIC GATES
Digital logic assumes finite states, usually binary 0 or 1. In reality, all signals are actually analog approximations, usually with 0V for 0, and then some higher voltage for 1 - it could be 1.2V, 1.8V, 3.3V, 5V, etc., but let's assume in our system it's the Arduino 5V IO standard. With these two values in mind, consider an input voltage of 2.5V: is that a binary 0, or a binary 1? The answer, unsurprisingly, is "neither". However, that raises some serious questions. What should the system do if it encounters a 2.5V signal? The output needs to be a 0 or a 1, so we shouldn't output 2.5V, but picking 0V or 5V at random would mean our output data is random, which isn't what we want either. The only fool-proof solution is to "not do that" - don't ever send a signal that isn't 0V or 5V. But if we're not allowed to send anything between 0V and 5V, then we'd never be able to change between 0V and 5V, since pesky physics says we can't instantaneously change voltage. Rather, we have to change over some (possibly very small but) non-zero amount of time. So if we want our system to be able to change (and thus actually do stuff), but we also don't want any voltage except 0V or 5V, then we need a way to ignore all the middle voltages, and only look at the 0V and 5V signal states. That's where propagation delay and clocks come in. The clock on a digital circuit provides a way to only look at the input voltages at certain times. As long as the time it takes to switch from 0V to 5V or from 5V to 0V is shorter than the clock period, we know that when the clock signal comes, our inputs will all look like either 0V or 5V, just like we wanted! Great!
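If you want to play with that idea, here's a little plain-C++ toy (all the numbers are invented) that models an input swinging from 0V to 5V over a few nanoseconds, then only "looks" at it on 16MHz clock edges. Every sample comes back as a clean 0V or 5V, because the swing always finishes before the next edge:

    #include <cstdio>

    // Toy model of "only look at the signal on clock edges": the input takes
    // a finite time to swing from 0V to 5V, but as long as that swing
    // finishes before the next clock edge, every sample looks like a clean
    // 0V or 5V. All numbers are invented for illustration.
    double signal_at(double t_ns) {
        // The line starts switching at t = 10 ns and ramps to 5V over 8 ns.
        if (t_ns < 10.0) return 0.0;
        if (t_ns > 18.0) return 5.0;
        return 5.0 * (t_ns - 10.0) / 8.0;  // mid-swing: neither a 0 nor a 1
    }

    int main() {
        const double period_ns = 62.5;  // one 16 MHz clock period
        for (int edge = 0; edge < 4; ++edge) {
            double t = edge * period_ns;
            std::printf("clock edge at %6.1f ns sees %.1f V\n", t, signal_at(t));
        }
    }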
Let's suppose the input signal to our system is only 4.9V, not 5V. This could be due to a manufacturing defect, or some resistive loss on the input, but regardless, 4.9V looks a lot like 5V, so we probably want to interpret it as a binary 1, and output a 5V signal. But we said before that 2.5V is undecidable, so there's got to be some threshold above which we can confidently call a signal a binary 1, and another threshold below which we can confidently call a signal a binary 0. We call the highest voltage at which an input will definitely be considered a binary 0 "Voltage Input Low", often shortened to V_IL. We call the lowest voltage at which an input will definitely be considered a binary 1 "Voltage Input High", often shortened to V_IH. If we consider an inverter, the graph of inputs vs outputs usually looks something like this, assuming a fixed nominal system voltage:

You'll note that V_IL and V_IH are somewhat conservative. This is intentional. We expect some manufacturing process variation, so we define V_IL and V_IH not so that they're mathematically the most pleasing (e.g. <2.5 is a 0, >2.5 is a 1), but rather so that in the worst case - if all our expected manufacturing variations push the curve in the wrong direction - inputs at V_IH and V_IL will still produce the correct output signals. Cool! Our system now works nominally with 0V and 5V signals, but also tolerates some amount of "real life" variation.
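To make the thresholds concrete, here's a minimal sketch in plain C++ that classifies an input as "definitely 0", "definitely 1", or "undefined". The threshold fractions are illustrative; they're in the same ballpark as what the ATmega328 datasheet guarantees (roughly 0.3x and 0.6x the supply), but check your own part's datasheet for the real numbers:

    #include <iostream>
    #include <optional>
    #include <string>

    // Illustrative thresholds, expressed as fractions of the supply voltage.
    // Treat these as placeholders, not gospel from any particular datasheet.
    constexpr double V_IL_FRACTION = 0.3;  // at or below this: definitely a 0
    constexpr double V_IH_FRACTION = 0.6;  // at or above this: definitely a 1

    // Returns 0 or 1 when the input is in a guaranteed region, and
    // std::nullopt when it falls in the undefined middle band.
    std::optional<int> interpret(double v_in, double v_supply) {
        if (v_in <= V_IL_FRACTION * v_supply) return 0;
        if (v_in >= V_IH_FRACTION * v_supply) return 1;
        return std::nullopt;  // neither a guaranteed 0 nor a guaranteed 1
    }

    int main() {
        for (double v : {0.0, 0.4, 2.5, 4.9, 5.0}) {
            auto bit = interpret(v, 5.0);
            std::cout << v << " V -> "
                      << (bit ? std::to_string(*bit) : std::string("undefined"))
                      << "\n";
        }
    }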
UNDERCLOCKING
Next, suppose you provide your system with 3.3V instead of 5V, not as the input, but as the supply voltage. Now none of the signals are 5V, so are they all binary 0's? Well, obviously not - we can define 3.3V as the new value for binary 1, and we're happy, right? Not quite. Digital logic is actually a bunch of analog transistors driven all the way on or off to produce those nice high 5V 1's and low 0V 0's. If we assume they are MOSFETs, then our logic gate's input voltage allows current to flow, which charges up the gate capacitance of the next logic gate. No problems so far, but unfortunately (in this case), transistors operate differently depending on their gate voltages and the difference between their drain and source voltages. You've probably seen a graph of gate voltage vs. the current that will pass through a MOSFET, something like this:

Why do we care? Because what it tells us is that if our system voltage is lower, both V_GS and V_DS will be lower, which per the graph means that our drive current will also be lower. The drive current is what determines the propagation delay between receiving a new input (V_GS) and achieving our desired output (now 0V or 3.3V), since we need to charge the next logic piece's gate capacitance. If we have less current, it takes longer to charge the next gate, so our propagation delay increases. If our propagation delay increases too much, we may take longer to achieve our desired output than it takes for the next clock signal to come! Note that there's an easy fix for this. If we increase our clock's period, decreasing its frequency (known as underclocking), then we can give ourselves more time for the signal to propagate, and thus make sure we reach our desired output before the next clock signal arrives. Crisis averted! But do we actually have control over our clock speed? In a lot of cases, we do at design time, but not while operating. For instance, if we picked a 16MHz clock suitable for 5V, but then only provided 3.3V, then 16MHz may be faster than the system can actually propagate the full signal swings, so we may be stuck getting bad values at 3.3V.
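Here's a back-of-the-envelope version of that argument in plain C++, with completely made-up capacitance and drive current numbers, just to show how the rough charge time t = C*V/I stacks up against a 62.5ns (16MHz) clock period as the supply drops:

    #include <cstdio>

    int main() {
        // Made-up but plausible numbers, purely for illustration.
        const double c_load   = 20e-12;   // 20 pF of downstream gate/trace capacitance
        const double f_clock  = 16e6;     // 16 MHz clock -> 62.5 ns period
        const double t_period = 1.0 / f_clock;

        // Drive current falls as the supply (and therefore V_GS) falls.
        struct Case { double v_supply; double i_drive; };
        const Case cases[] = {
            {5.0, 20e-3},    // healthy supply, strong drive
            {3.3,  6e-3},    // lower supply, weaker drive
            {2.0,  0.5e-3},  // badly undervolted, very weak drive
        };

        for (const auto& c : cases) {
            // Very rough charge time for a full swing: t = C * V / I
            double t_prop = c_load * c.v_supply / c.i_drive;
            std::printf("%.1f V: ~%.1f ns to swing, clock period %.1f ns -> %s\n",
                        c.v_supply, t_prop * 1e9, t_period * 1e9,
                        t_prop < t_period ? "OK" : "TOO SLOW");
        }
    }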
WHAT'S THE WORST THAT COULD HAPPEN?
We've discussed some issues with signals sitting at mid-way points, and seen that they can make it difficult to decide on a "correct" output choice, but we haven't examined what sorts of problems they can cause. For some applications, like medical equipment or a guided missile, it's easy to understand that a single miscalculation could literally be life-and-death. Okay, fine. But we're dealing with hobby radio stuff, so what's the harm in a wrong bit from time to time? Just power cycle the radio and we're back to business, right? Oftentimes, in practice, that's probably right. However, consider what it means to run a program. The processor is constantly changing many bits at once, and in most consumer and hobby grade devices, the processor assumes that every operation it does is done correctly every time. If a bit that is supposed to be a 1 isn't quite high enough (less than V_IH), or a 0 isn't quite low enough (more than V_IL), then what happens next is largely up to chance.
For instance, if the program counter reads a 1 where it was supposed to be a 0, then you've suddenly ended up at a random location in your code, and will start executing who knows what instructions, which will re-interpret your stack data in who knows what manner. Oftentimes this will lead to invalid instructions being attempted and a crash, but it could also end up being more subtle, where the program still runs, but is now operating on the wrong data. A mis-interpreted bit in a register could cause the program to activate some undesired behavior, like turning the transmitter circuit on when it shouldn't be, or cause us to save the wrong configuration data. Since we don't know which bits are more or less likely to fail first due to lower voltage, it's impossible to say exactly what failure mode we'd see.
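As a toy illustration (just arithmetic, not real AVR behavior), here's what a single mis-read bit does to a pretend program counter:

    #include <cstdio>
    #include <cstdint>

    int main() {
        // Pretend this is a program counter pointing at the next instruction.
        uint16_t pc = 0x0154;

        // One bit read high when it should have been low: bit 11 flips.
        uint16_t corrupted = pc ^ (1u << 11);

        std::printf("intended PC:  0x%04X\n", pc);         // 0x0154
        std::printf("corrupted PC: 0x%04X\n", corrupted);  // 0x0954, far from where we meant to be
        std::printf("difference:   %d addresses\n", corrupted - pc);
    }

One wrong bit and execution lands 2048 addresses away, in the middle of whatever happens to live there.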
In a lot of scenarios, turning the system fully off, then re-powering with sufficient voltage will recover it. However, for systems with non-volatile memory (e.g. flash), an undervolted supply can corrupt or destroy data, rendering the system inoperable or "bricked". Similar to our logic gates above, non-volatile memory requires certain electrical parameters in order to store values that are "definitely 0" and "definitely 1". If we apply insufficient voltages when writing to these memories, we can end up with bits stored at in-between levels that aren't clearly readable as 0 or 1. Even when reading back properly stored data, insufficient voltage can corrupt our read process, returning incorrect bits. If this memory happens to hold our program data, then incorrect bits could make it impossible for us to boot or successfully run our program. In particular, if this program data was written at the factory, or happens to be one-time-programmable, and we don't have the ability to reprogram the memory from an external programmer, we may now have "bricked" our system for good.
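Here's a toy model, with invented numbers, of a flash cell that got written during an undervoltage event and ended up with a marginal charge level; a cell like this can read back as 0 on one read and 1 on the next:

    #include <cstdio>
    #include <random>

    // Toy model of a flash cell written during an undervoltage event and
    // left at a marginal charge level. All numbers are invented.
    int main() {
        const double stored_level   = 0.50;  // should have been ~1.0 for a solid '1'
        const double read_threshold = 0.50;  // cell reads as 1 above this level

        std::mt19937 rng(12345);
        std::normal_distribution<double> read_noise(0.0, 0.05);  // sense-amp noise

        for (int i = 0; i < 8; ++i) {
            double sensed = stored_level + read_noise(rng);
            std::printf("read %d: %d\n", i, sensed > read_threshold ? 1 : 0);
        }
        // A marginal cell like this can return different values on different
        // reads - data that is neither reliably 0 nor reliably 1.
    }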
Finally, to see some real potential for damage, let's consider a typical CMOS inverter gate:

When V_in is 0V or 5V, one of the FETs is fully on, and the other is fully off, so current can only flow to or from V_out. However, if our system voltage is too low, we can get into a state where both FETs are consistently partially-on, meaning there is a direct path from the system voltage, through the FETs, to ground. Depending on the nature of the FETs, the current flow could be enough to fry them, or the resistors or wires they're connected to. Even if the FETs themselves survive, with enough gates in these unstable states, our system could end up oscillating wildly from small system voltage fluctuations, and/or draw significantly more current than it would during normal voltage operation, potentially damaging other components, like the power supply, or even indirectly damaging adjacent parts due to excessive heat generation. Note that this particular problem is not specific to logic gates in a processor, but also applies to similarly-configured push-pull transistor setups, like H-bridges.
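To put rough numbers on the shoot-through problem, here's a crude square-law estimate in plain C++ of the current through an inverter whose input is stuck near mid-rail (exactly the kind of input an undervolted upstream gate can produce). The threshold voltage and transconductance values are invented for illustration; real gates are far more complicated:

    #include <cstdio>
    #include <algorithm>

    // Very rough square-law estimate of the drain current for a device with
    // the given gate overdrive (V_GS minus the threshold voltage).
    double drain_current(double v_gs_overdrive, double k) {
        if (v_gs_overdrive <= 0) return 0.0;               // device is off
        return 0.5 * k * v_gs_overdrive * v_gs_overdrive;  // saturation-region estimate
    }

    int main() {
        const double v_supply = 5.0;
        const double v_th     = 1.0;     // threshold voltage for both devices (invented)
        const double k        = 10e-3;   // transconductance parameter, A/V^2 (invented)
        const double v_in     = 2.5;     // input stuck at mid-rail

        // The NMOS sees v_in and the PMOS sees (v_supply - v_in); both are above
        // threshold, so both conduct at once. The shoot-through current is
        // limited by the weaker of the two.
        double i_n = drain_current(v_in - v_th, k);
        double i_p = drain_current((v_supply - v_in) - v_th, k);
        double i_shoot = std::min(i_n, i_p);

        std::printf("shoot-through current: %.1f mA\n", i_shoot * 1e3);
        std::printf("power burned in one gate: %.0f mW\n", i_shoot * v_supply * 1e3);
    }

Tens of milliwatts in a single tiny gate, held continuously instead of for a nanosecond during a transition, adds up fast.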
TURNING OFF AND ON
Whenever a device is powered on or off, there must be some time during which the system voltage is greater than zero, but not yet in the "safe" region, so why don't digital devices corrupt, fail, or self-destruct constantly? There's no one single answer, because different devices have different solutions to this problem. Some devices have built-in circuits specifically designed to detect the supply voltage, and only enable themselves once the voltage is high enough to be safe. An example of this is the brown-out detection system in the Arduino. Many devices have requirements for how long their supply voltage is allowed to take to transition from 0V to the target (e.g. 5V). Just like the propagation delay with periodic clocks, if the supply voltage comes up fast enough, all signals can be "locked in" to good states before any uncertain behavior regions have a chance to corrupt bits, or significant current can pass through the partially-on FETs. A related solution is to not provide the clock signal until the supply voltage has been up for a certain amount of time. This ensures that no logic gates are required to read input signals until those input signals have had time to stabilize.
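On an ATmega328-class AVR you can even ask, after the fact, whether the brown-out detector is what reset you. Here's a minimal Arduino-style sketch of the idea; be aware that some bootloaders clear MCUSR before your sketch runs, so treat this as a sketch of the concept rather than something guaranteed to work on every board:

    // Minimal Arduino-style sketch for an ATmega328-class AVR that reports
    // whether the last reset was caused by the brown-out detector.
    #include <avr/io.h>

    void setup() {
        Serial.begin(9600);
        uint8_t reset_flags = MCUSR;  // reset-cause flags: power-on, external, brown-out, watchdog
        MCUSR = 0;                    // clear so the next reset reports fresh flags

        if (reset_flags & (1 << BORF)) {
            Serial.println("Last reset was a brown-out: supply dipped below the BOD threshold.");
        } else if (reset_flags & (1 << PORF)) {
            Serial.println("Normal power-on reset.");
        }
    }

    void loop() {}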
For powering down, similar rules apply. The device might require voltage to drop from the nominal system voltage to 0V within a certain amount of time, or require the clock input to stop prior to removing power. It may also require that potentially unsafe operations, like writing to non-volatile memory, be stopped some amount of time before removing power. In more complicated systems, we may need to deal with the "dangerous" parts first, like removing the high voltage from a motor driver for an antenna rotator, or disabling writes to non-volatile memory, before removing power from the respective controller unit. In this way, while there may be a few mis-interpreted bits as the system powers down, they won't be able to cause dangerous or long-lasting behavior.
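As a sketch of that idea in firmware (the analog pin, divider ratio, and voltage threshold are all assumptions for illustration), a controller can watch its own supply through a voltage divider and simply refuse to start a non-volatile write once the supply starts sagging:

    // Hypothetical Arduino sketch: monitor a nominally 12V supply through a
    // resistor divider on A0 and refuse to start EEPROM writes while the
    // supply is sagging. Pin choice, divider ratio, and threshold are all
    // assumptions, not values from any particular radio.
    #include <EEPROM.h>

    const int   SUPPLY_SENSE_PIN = A0;
    const float DIVIDER_RATIO    = 3.0;   // e.g. a 20k/10k divider scales 12V into the ADC range
    const float MIN_SAFE_VOLTS   = 11.0;  // below this, don't risk a write

    float supplyVolts() {
        // 10-bit ADC referenced to 5V, scaled back up through the divider.
        return analogRead(SUPPLY_SENSE_PIN) * (5.0 / 1023.0) * DIVIDER_RATIO;
    }

    void saveSettings(int address, long value) {
        if (supplyVolts() < MIN_SAFE_VOLTS) {
            return;  // supply is sagging; skip the write rather than corrupt it
        }
        EEPROM.put(address, value);  // write only while the supply looks healthy
    }

    void setup() {}

    void loop() {
        saveSettings(0, 123456L);
        delay(1000);
    }

Obviously a real design would pair this with proper hardware sequencing, but it captures the spirit of the rule: stop the risky operations before the voltage wanders into the murky middle.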