We can take over control of a completely isolated—air gapped—computer from a distance. To do it, we used a very old idea: that flashes of light can disrupt electronics. The effect is controllable enough to execute arbitrary code on the target system, but a long way from working against an ordinary laptop. It addresses a problem ignored by almost every reported defeat of air-gapped security: initial access. Think of Tom Cruise dangling from a wire in a computer room in the first Mission Impossible movie.
This article expands on our research paper presented at USENIX WOOT’24 [1], written in collaboration with Kasper Rasmussen of the University of Oxford.
Every covert channel paper seems to begin with the same implicit assumption:
“Grant us the inside man.”
What if you could do glitching attacks [2] without access to the power supply? What if you could do laser fault injection [3, 4, 5] without boiling nitric acid or grinding wheels, and without being limited to a microscope stage?
We found this vulnerability because we went looking for it. Building on the long history of side channels [6, 7, 8], it seemed reasonable that information might be able to flow into a system instead of leak out of it.
We have not reached the stage of a practical attack yet. We can show remote code execution, but only on haywired hardware. Ours is a measurement paper.
By way of analogy, if side channels are like mind reading, this is mind control.
The vulnerability of inadvertently photosensitive electronic components directly connected to circuits carrying sensitive information is as old as the IBM 701 mainframe computer (1952) and as current as the Raspberry Pi 3 (2016) [9, 10, 11].
Things went pretty well at the dedication, until the photographers started taking pictures of the hardware. As soon as the flashbulbs went off, the whole system came down. Following a few tense moments on the part of the engineering crew, we realized with some consternation that the light from the flashbulbs was erasing the information in the CRT memory. Suffice it to say that shortly thereafter the doors to the CRT storage frame were made opaque to the offending wavelengths. [9]
We began with only the ability to crash the system, and extended that to taking over control remotely. Our approach works, in general, on shared communication buses, and depends entirely on where the hardware designer decided to put components. The attacker needs line-of-sight and detailed knowledge of the hardware and software running on the target.
It’s not the same as a classical covert channel, because there is no need to assume the existence of an insider threat; further, it eliminates any need to install malware on the target system ahead of time [12, 13]. It’s not the same as optical fault injection, which is done under a microscope on decapped chips [3, 4, 5]. Finally, it is not the same thing as Light Commands [14, 15, 16], because we can take over control of a computer that wasn't already listening for a signal.
"Critical Cybersecurity Vulnerability Found in Light-Up Dog Toys"
So far, we have found the vulnerability exploitable on only one commercially available device: a 5 mm RGB color-changing LED commonly found in light-up toys [17]. Hit it with a fast pulse of infrared at 900 nanometers, and it will reliably reset its color sequence to red. The chip inside is believed to be a CDT3447 [18]; it is not known whether the entry point is one of the RGB emitters or the silicon substrate of the chip itself, which is transparent to infrared wavelengths, as in the Raspberry Pi.
So the context here is status indicator lamps (Figure 1).
If you shine a really bright light on one of them, all that energy has to go somewhere. Would you believe it flows backwards into the electronics? Effects range all the way up to remote code execution. There are at least two different physical effects at work: a photoelectric effect and a photoconductive effect. It takes a fairly powerful laser to do it, because in addition to information, you are also supplying all the energy needed to run the electronics in reverse.
That last point is important. The energy levels involved here are high. These are absolutely not eye safe lasers. We take elaborate precautions around them including interlocks, protective goggles, warning signs, and radiation shielding. Be careful.
M5 is an imaginary computer with an extremely small instruction set, smaller even than RISC-V's. In Figure 2, Table 1 (left) shows the complete instruction set of this imaginary computer; on the right, Figure 17 shows which opcodes are reachable from other opcodes if you can only change binary zeros to ones.
The photoelectric effect lets us change a binary 0 to a 1, but not the other way around. The photoconductive effect lets us change a binary 1 to a 0, but, again, not the reverse. This immediately suggests that the two effects might be used in concert, but there's a problem: typically, an electronic component responds to one effect or the other, not both.
This places some fascinating constraints on the attacker. Those binary numbers in Table 1 are all the possible assembly language instructions. If we assume the attacker can only change a binary 0 to 1, but can never go the other way around, then some of those instructions are reachable from other instructions, as you can tell from the graph on the right.
The attacker might be able to change a load instruction into a branch, or an add into a subtract, but can't change a branch into a store, or a divide into a compare, so we say the attacker can execute arbitrary-ish code. It’s a kind of weird machine. It’s easy to redirect a memory access up into high memory—just set the high bit of the address—so our attacker patiently watches the instruction stream go by, picking and choosing opcodes he can alter into the instructions he wants, slowly building up the desired program in high memory. Maybe it's not quite the program the attacker wanted, because certain bit patterns were unreachable, so the first thing the arbitrary-ish code does is run fixups on itself. Finally, we redirect a branch, and it's game over.
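The 0-to-1 constraint has a clean formal statement: opcode A can be rewritten into opcode B exactly when B's set bits are a superset of A's, i.e. A & B == A. A minimal sketch of the reachability graph, using generic 4-bit opcodes rather than M5's actual encoding:

```python
# Which opcodes are reachable if the attacker can only flip 0 -> 1?
# B is reachable from A iff every 1-bit of A is also set in B,
# i.e. A & B == A. (Generic 4-bit opcodes, not M5's real encoding.)

def reachable(a: int, b: int) -> bool:
    """True if opcode a can become opcode b using only 0 -> 1 flips."""
    return a & b == a

def reachability_graph(width=4):
    """Edges of the reachability graph over all opcodes of that width."""
    ops = range(1 << width)
    return {a: [b for b in ops if b != a and reachable(a, b)] for a in ops}

# 0b0000 can become any other opcode; 0b1111 can become nothing.
```

This is why the code is only arbitrary-ish: the graph is a partial order, and the attacker's instruction choices at each step are limited to what sits above the current opcode in it.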
The attacker must be very careful not to crash the running program, because its memory accesses are needed for the subversion. If the target computer ever crashes, access is lost.
M5 is a fairly simple computer, but it’s complex enough to illustrate the difficulty of the attack. It implements the instruction set shown above. It’s a four-bit CPU, load/store architecture with one register called the accumulator, which is visible on the front panel—which is important. It's not RISC-V because we wanted the instruction reachability graph above to be small enough to be grasped, not a plate full of spaghetti.
Think of it as a factory automation controller, reading temperatures and pressures, turning motors on and off to open and close valves in a chemical process plant. Normal operation is shown by a regular scanning pattern on the accumulator lamps.
The attacker begins by watching the target system for a while, specifically the accumulator display (video of the attack). Before the attack can proceed, the attacker needs to learn the timing and establish a phase lock on the internal state of the CPU. This is possible because the accumulator display always changes at a particular microcode cycle. That cycle may differ between the instructions that can change the accumulator, but the attacker can tell those instructions apart by observing the direction and magnitude of the change. We assume the attacker has complete knowledge of the computer's architecture and the program that is running, but needs to synchronize timing with that program, right down to the microcode cycle. It takes about ten iterations through the program (sweeps of the accumulator display shown in the right four LEDs) to get a phase lock.
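One way to picture the phase lock is trace matching: the attacker knows the accumulator values the program produces each loop, observes a short window of display changes, and slides the known trace until every sample lines up. A toy sketch, with an invented trace rather than M5's real program:

```python
# Toy phase-lock: slide the known accumulator trace against an
# observed window of display values until every sample agrees.
# KNOWN_TRACE is invented for illustration; M5's program differs.

KNOWN_TRACE = [1, 2, 4, 8, 4, 2]  # one loop iteration, hypothetical

def phase_lock(observed, trace=KNOWN_TRACE):
    """Return the phase offset (index into trace) that matches, or None."""
    n = len(trace)
    for phase in range(n):
        if all(observed[i] == trace[(phase + i) % n]
               for i in range(len(observed))):
            return phase
    return None  # no lock: lost sync, or the program model is wrong
```

A window of three samples such as [4, 8, 4] is already enough to disambiguate the phase in this toy trace; a real attacker needs several loop iterations because the display samples are noisy and the observation window is short.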
Timing is absolutely critical. Once the attacker has a phase lock, the lasers begin firing, but at the green bus LEDs in the middle, not at the red accumulator LEDs on the right. This illustrates a general principle: you're not always watching the same LED you're shooting at. It only takes a few seconds before the regular left-to-right scanning pattern on the accumulator LEDs visibly changes. That's because the computer is now running a completely different program, one we put there through the status indicator lamps.
Analysis begins with the electrical schematic of the target device, looking for potentially susceptible components in electrically interesting places. This is another of the constraints on the attacker; we can't choose where LED indicators are or how they're connected, and that determines what the attacker can do. Not every indicator will do something useful if reversed.
Figure 4 shows one electronic component of interest. Light emitting diodes are interesting because they are a naked P–N junction, the basic building block of solid state electronics. The way they work is simple: you put electricity in, and light comes out—pink light, in this case. But there's a general principle in physics that any process that runs in one direction can usually be made to run in the other direction if you supply enough energy to make up for the difference in entropy.
When you put light into an LED, it comes out as electricity but in the opposite direction from the usual current flow that makes it light up. This violates important assumptions made by the circuit designer. We can force the voltage on a communication bus low—or high, depending on how the LED is connected—by hitting it from a distance with a laser.
If the eyes are the windows to the soul, then LED status indicators are a window into the electronics.
The first question everyone always asks is: how far away can you do it? And the answer is, we don't know. Ours is a measurement paper; we never developed it into a practicable attack. We aimed the laser once, then bolted everything down so it can't drift out of alignment. It's not my problem to figure out how to aim this thing. It's my problem to measure it.
So we built this infernal machine (Figure 5). It consists of a pair of linear actuators, the x axis one bolted crosswise atop the carriage of the y axis one, which lets us raster scan a focused laser across any desired component on the circuit board, stopping every fiftieth of a millimeter to measure the voltage on the communication bus. There’s a live I2C bus under there, and it’s communicating the whole time. Each linear actuator has a stepper motor, and there’s a bank of relays that lets us automatically vary the bus voltage and pull-up resistor value. Data collection is automatic, transmitted out by a serial connection over USB. The whole thing is controlled by an Arduino. I like Arduino; everything is C++, the hardware is bulletproof—it’s relatively difficult to accidentally fry the I/O pins—and it’s cheap.
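Host-side, the scan reduces to a nested loop over stepper positions with a voltage sample at each stop. A sketch of that control logic, using the 1/50 mm step from the text; `move_to` and `read_bus_voltage` stand in for the real serial commands to the Arduino and are hypothetical names:

```python
# Raster-scan sketch: step the laser across a 5 mm x 5 mm region in
# 1/50 mm (20 micron) increments, sampling the bus voltage at each
# stop. move_to() and read_bus_voltage() stand in for the real
# serial-over-USB commands to the Arduino; both names are invented.

STEP_MM = 1 / 50   # 20 micron steps, as in the text
SPAN_MM = 5.0      # the LED package is 5 mm across

def raster_scan(move_to, read_bus_voltage):
    """Yield (x_mm, y_mm, volts) for every stop in the scan region."""
    steps = round(SPAN_MM / STEP_MM) + 1   # 251 stops per axis
    for yi in range(steps):
        for xi in range(steps):
            x, y = xi * STEP_MM, yi * STEP_MM
            move_to(x, y)                  # command the stepper motors
            yield (x, y, read_bus_voltage())
```

At 251 × 251 stops, one full scan is about 63,000 voltage samples per combination of bus voltage and pull-up resistor, which is why the relay bank and the data collection have to be automated.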
Here's what the raw data from one of those scans looks like (Figure 6). You're looking straight on at the LED; the big blue square is exactly five millimeters across. The curvy arc on the right hand side is an artifact of the dome lens on the LED, we think.
We can draw isovoltage contours (above, center) to begin to visualize how the voltage on the communication bus varies as the laser scans across the face of the LED. We're looking for the physical location where our laser can force the voltage on the circuit to go below the logic threshold.
The only contour we're really interested in is the 2.0 volt line (Figure 6, right), because that's the value we found by experimentation to be critical. This is the active area. Any hit by the laser inside that region alters the value of a bit. You can kind of see the outline of the square chip in the LED, or maybe that's just my imagination.
Anything outside that region is a miss. The size of the active area, in square millimeters, is a direct measure of how hard the target is to hit.
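Given the scan grid, the active area falls out directly: each sample at or below the 2.0 V threshold contributes one 20 µm × 20 µm cell. A minimal sketch:

```python
# Active area from a raster scan: count the grid points where the
# laser forced the bus voltage to or below the logic threshold, then
# multiply by the area of one 20 um x 20 um grid cell.

STEP_MM = 1 / 50  # spacing between samples, as in the scan

def active_area_mm2(samples, threshold_v=2.0):
    """samples: iterable of (x_mm, y_mm, volts); returns area in mm^2."""
    hits = sum(1 for _, _, v in samples if v <= threshold_v)
    return hits * STEP_MM ** 2
```

With this definition, 2,500 hits correspond to exactly one square millimeter of active area, a convenient unit for comparing targets.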
But there’s a problem.
Blinky lights are cool, but the hardware designer might not have put one where we need it, on the communication bus. And you can't shoot lasers at an LED that isn't there.
But I don't want to give you the impression that it's only status indicators, or only LEDs; the definition of “accessible P–N junction” is wider than that. Figure 7 shows a microphotograph of the silicon chip inside an electrostatic discharge (ESD) protection component like the one at the lower left. These components are found on shared communication buses precisely because buses are exposed to static electricity, and they are even more sensitive to light than LEDs.
Here you can see the results of a lot of repeated runs against one of those. Notice how the active area, in red, gets larger as you move down the chart, and as you move toward the right. As the system voltage decreases, from 5 V TTL to 3.3 V CMOS, through 2.5 V low-voltage CMOS, by the time we get down to 1.8 volt LVCMOS, on the bottom row, we can force the value of a bit against any reasonable value of pull-up resistor. The values shown here bracket the range of pull-up resistor values, from 1 kΩ (fast, but power-hungry), to 2.2–4.7 kΩ (typical), to 10 kΩ (slow, but suitable for low-power devices that need to conserve battery).
Now we have a model that makes testable predictions. For any combination of LED color, laser wavelength, logic family, and pull-up resistor value, we can predict if it's reversible.
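The underlying arithmetic is a current budget: to force the bus low, the photocurrent the junction sinks must drop the pull-up voltage from Vcc below the logic-low input threshold V_IL, so the required current is (Vcc − V_IL)/R. A sketch of that calculation, using nominal datasheet thresholds rather than our measured values:

```python
# Current needed to force a bus low: the pull-up resistor converts
# sunk photocurrent into a voltage drop, V_bus = Vcc - I * R, so we
# need I >= (Vcc - V_IL) / R. The example thresholds below are
# nominal logic-family figures, not measurements from the paper.

def required_photocurrent_ma(vcc, v_il, pullup_ohm):
    """Minimum photocurrent in mA to pull the bus from Vcc down to V_IL."""
    return (vcc - v_il) / pullup_ohm * 1000

# 5 V TTL (V_IL = 0.8 V) with a fast 1 kOhm pull-up is the hardest
# case; 1.8 V LVCMOS (V_IL = 0.63 V) with a power-saving 10 kOhm
# pull-up is the easiest, needing roughly 35x less current.
hard = required_photocurrent_ma(5.0, 0.8, 1_000)    # ~4.2 mA
easy = required_photocurrent_ma(1.8, 0.63, 10_000)  # ~0.12 mA
```

This is exactly the trend visible in the chart: the active area grows as the supply voltage drops and as the pull-up resistance rises, because both reduce the photocurrent the laser must deliver.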
Note that the active area on the ESD protection component is quite a bit smaller than an LED's. The chip is only about 125 microns across, and it's off-center in the gap, so it's a challenging target to hit.
The effect is not isotropic, either; Figure 8 shows how rotating the elliptical beam axis changes the size of the active area. Recall from the microphotograph that the silicon chip is buried deep in a narrow slot between metal electrodes. Solid-state lasers emit an elliptical beam (corrected to circular with optics in lasers more expensive than ours), and the semimajor axis of the elliptical beam pattern interacts with the narrow slot in interesting ways.
By the way, LEDs tend to respond best to short-wavelength visible lasers, but we found the most effective wavelengths for silicon ESD protection components to be in the infrared, outside the visible range. This makes the attack stealthy: the illumination is invisible to the naked eye.
This video shows a working proof of concept, running on a live I2C bus. The controller is trying to write the message NORMAL OPERATION on the display; the laser is trying to write the message PROOF OF CONCEPT. The laser and the bus controller are fighting over the bus, and the laser is generally winning.
Ours is a measurement paper, not a practical attack. It opens a whole new area of reverse side channel analysis, extending the concept of glitching to ranged attacks. The bar is set: remote code execution. But what did we overlook? How far away is practicable? Given a 50 W source at 405 nm, if you need to put 5 mW within a 5 mm circle at the target, that gives you 40 dB of link budget to spend on atmospheric attenuation, refraction through window glass, and angle of incidence.
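That closing figure can be checked directly: 50 W available versus 5 mW required is a power ratio of 10^4, i.e. 40 dB of margin to spend on losses. A sketch of the arithmetic:

```python
import math

# Link budget: margin in decibels between the power available at the
# source and the power that must land inside the target spot.

def link_budget_db(available_w, required_w):
    """10 * log10 of the power ratio, in dB."""
    return 10 * math.log10(available_w / required_w)

# 50 W available vs 5 mW required: a 10^4 ratio, i.e. 40 dB to spend
# on atmospheric attenuation, window glass, and angle of incidence.
margin = link_budget_db(50, 0.005)
```

Every doubling of range roughly quadruples the spot area, so each of those 40 dB buys surprisingly little distance once beam divergence is accounted for.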