The new AI chip architecture is stackable and reconfigurable, allowing existing sensors and neural network processors to be swapped out or expanded.
Imagine a more environmentally friendly future in which smartphones, smartwatches, and other wearable devices aren't constantly replaced with newer models. Instead, they could be upgraded with the latest sensors and processors, which would simply snap onto a device's internal chip, much like LEGO bricks added to an already-built structure. Such reconfigurable chipware could keep devices up to date while cutting down on electronic waste.
MIT engineers have now taken a step toward that modular vision with a stackable, reconfigurable artificial intelligence chip that resembles LEGO bricks.
The architecture consists of alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip's layers to communicate optically. Other modular chip designs route signals between layers through conventional wiring; such fixed connections make stacked systems effectively non-reconfigurable, since the wires are difficult, if not impossible, to sever and rewire.
In the MIT design, information moves through the chip as light rather than over physical wires. The chip's layers can therefore be swapped out or stacked on, for instance to add new sensors or upgraded processors.
“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”
The researchers are keen to apply the design to edge computing devices: self-sufficient sensors and other electronics that operate independently of any centralized or distributed resources, such as supercomputers or cloud-based computing.
“As we enter the era of the internet of things based on sensor networks, demand for multifunctioning edge-computing devices will expand dramatically,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide high versatility of edge computing in the future.”
The team's findings were published in the journal Nature Electronics on June 13, 2022.
Lighting the way
Currently, the team's design is configured to carry out basic image-recognition tasks. It does so with a stack of image sensors, LEDs, and processors made from artificial synapses: the team's previously developed "memristor" arrays, which together function as a physical neural network, or "brain-on-a-chip." Each array can be trained to process and classify signals directly on the chip, without the need for external software or an Internet connection.
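The chip itself is hardware, but the computation its memristor arrays perform, an analog matrix-vector multiply, is easy to illustrate in software. Below is a minimal Python sketch of the general crossbar-classification principle; the array sizes, conductance values, and `classify` function are illustrative assumptions, not the team's actual design.

```python
import numpy as np

# Conductances (G) act as trained synaptic weights. Driving the rows
# with input voltages (v) produces column currents i = G^T @ v
# (Ohm's law plus Kirchhoff's current law), so the crossbar performs
# a matrix-vector multiply directly in analog hardware.

rng = np.random.default_rng(0)

n_inputs = 25   # e.g. a 5x5 pixel image, flattened
n_classes = 3   # one output column per letter: M, I, T

# Illustrative conductance values; on real hardware these would be set
# by programming each memristor's resistance state during training.
G = rng.uniform(0.0, 1.0, size=(n_inputs, n_classes))

def classify(image: np.ndarray) -> int:
    """Apply the image as row voltages; the column drawing the largest
    current is the predicted class."""
    currents = G.T @ image.flatten()   # analog multiply-accumulate
    return int(np.argmax(currents))

letter = rng.integers(0, 2, size=(5, 5)).astype(float)  # toy input
print("predicted class index:", classify(letter))
```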
To build the new chip design, the researchers combined image sensors with artificial synapse arrays, each of which they trained to recognize certain letters, in this case M, I, and T. Whereas the conventional approach relays a sensor's signals to a processor through physical wires, the team instead fabricated an optical system between each sensor and artificial synapse array, enabling communication between the layers without requiring a physical connection.
“Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you’d need to make a new chip if you wanted to add any new function,” says MIT postdoc Hyunseok Kim. “We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and add chips the way we want.”
The team's optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. Photodetectors make up an image sensor for receiving data, while LEDs transmit that data to the next layer. When a signal (for instance, an image of a letter) reaches the image sensor, the image's light pattern encodes a particular configuration of LED pixels, which in turn stimulates another layer of photodetectors along with an artificial synapse array, which classifies the signal based on the pattern and intensity of the incoming LED light.
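To make that optical hand-off concrete, here is a hypothetical software analogue of one hop through the stack, under the loose assumption that each layer simply re-encodes a pixel pattern as light intensities. The `led_transmit` and `photodetector_receive` functions and the noise model are invented for illustration and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def led_transmit(data: np.ndarray) -> np.ndarray:
    """Encode outgoing data as LED pixel intensities in [0, 1]."""
    return data / max(float(data.max()), 1e-9)

def photodetector_receive(light: np.ndarray, noise: float = 0.02) -> np.ndarray:
    """Read the incident light pattern; Gaussian noise stands in for
    real-world optical losses and detector imperfections."""
    return np.clip(light + rng.normal(0.0, noise, light.shape), 0.0, 1.0)

# One hop through the stack: image sensor -> LEDs -> photodetectors,
# whose output would then drive the artificial synapse array.
image = rng.integers(0, 2, size=(5, 5)).astype(float)  # toy letter image
received = photodetector_receive(led_transmit(image))
print("received pattern:\n", received.round(2))
```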
Stacking up
The group fabricated a single chip with a computing core measuring about 4 square millimeters, roughly the size of a piece of confetti. The chip is stacked with three image-recognition "blocks," each comprising an image sensor, an optical communication layer, and an artificial synapse array trained to classify one of three letters: M, I, or T. The team then projected a pixelated image of random letters onto the chip and measured the electrical current that each neural network array produced in response.
The researchers found that the chip correctly classified clear images of each letter but struggled to distinguish blurry ones, for instance between the letters I and T. When they quickly swapped out the chip's processing layer for a better "denoising" processor, the chip then identified the images correctly.
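That swap is the heart of the modularity claim, and it can be sketched in a few lines by modeling the chip as an ordered stack of processing stages and upgrading it by replacing one. The thresholding "denoiser" below is a crude stand-in for the team's actual denoising processor, which the article does not detail.

```python
import numpy as np

rng = np.random.default_rng(2)

def sensor(image):
    """Capture the incoming (possibly noisy) image."""
    return image

def basic_processor(signal):
    """Pass the signal through unchanged."""
    return signal

def denoising_processor(signal):
    """Crude stand-in for a denoiser: threshold out weak noise."""
    return np.where(signal > 0.5, 1.0, 0.0)

def run(stack, image):
    out = image
    for layer in stack:          # the signal flows layer by layer, as
        out = layer(out)         # light flows up the physical stack
    return out

noisy_letter = np.clip(
    rng.integers(0, 2, size=(5, 5)) + rng.normal(0.0, 0.2, (5, 5)), 0.0, 1.0)

stack = [sensor, basic_processor]
blurry = run(stack, noisy_letter)

stack[1] = denoising_processor   # "snap in" an upgraded processing layer
cleaned = run(stack, noisy_letter)
print("cleaned pattern:\n", cleaned)
```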
“We showed stackability, replaceability, and the ability to insert a new function into the chip,” MIT researcher Min-Kyu Song explains.
The applications are limitless, according to the researchers, who intend to expand the chip's sensing and processing capabilities.
“We can add layers to a cellphone’s camera so it could recognize more complex images, or make these into healthcare monitors that can be embedded in wearable electronic skin,” says Chanyeol Choi, the study's lead author, who along with Kim previously designed a "smart" skin for monitoring vital signs.
Another concept, he continues, is for devices to include modular circuits that users may choose to assemble using the newest sensor and processing "bricks."
“We can make a general chip platform, and each layer could be sold separately like a video game,” says Jeehwan Kim. “We could make different types of neural networks, like for image or voice recognition, and let the customer choose what they want, and add to an existing chip like a LEGO.”
Reference: “Reconfigurable heterogeneous integration using stackable chips with embedded artificial intelligence” by Chanyeol Choi, Hyunseok Kim, Ji-Hoon Kang, Min-Kyu Song, Hanwool Yeon, Celesta S. Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Jaeyong Lee, Ikbeom Jang, Subeen Pang, Kanghyun Ryu, Sang-Hoon Bae, Yifan Nie, Hyun S. Kum, Min-Chul Park, Suyoun Lee, Hyung-Jun Kim, Huaqiang Wu, Peng Lin and Jeehwan Kim, 13 June 2022, Nature Electronics.