What if you never had to discard your cellphone, smartwatch, or other wearable devices for a newer model?
Doesn’t sound likely, eh?
Researchers at the Massachusetts Institute of Technology are working to make that possible. They envision electronic devices that could be upgraded with the latest sensors and processors, which would snap onto a device’s internal chip – like LEGO bricks.
Such reconfigurable chipware could keep devices up to date while reducing electronic waste.
“It came to our attention that conventional chips with hard-wired connections are not reconfigurable, meaning that the functionality of a system is fixed. In the internet-of-things (IoT) environment, there’s a growing interest in multi-functionality and reconfigurability combined with sensor networks, and thus we envisioned the idea of LEGO-like stackable chips without hard-wired connections,” the team told IE.
The team’s results are published in Nature Electronics.
Brick by brick
The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip’s layers to communicate optically. Other modular chip designs generally employ conventional wiring to relay signals between layers.
Such intricate wiring is difficult or impossible to sever and rewire, which leaves those stacked designs fixed rather than reconfigurable.
Instead of physical wires, the MIT design uses light to transmit information through the chip. Layers can be swapped or stacked to add new sensors or updated processors.
“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” MIT postdoc Jihoon Kang said in a press release. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”
Since data is transferred by light through free space, the chips aren’t hard-wired, making them easy to replace with chips that have different functionalities, the team said.
An optical system enables communication
Currently, the design is configured to carry out basic image-recognition tasks. It does so through a stack of image sensors, LEDs, and processors made from artificial synapses – arrays of memory resistors, or “memristors,” that the team previously developed and that together function as a physical neural network, or “brain-on-a-chip.”
The arrays can be trained to process and classify signals directly on the chip, without any external software or internet connection.
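To make the on-chip classification idea concrete, here is a minimal numerical sketch – a toy model of my own, not the MIT hardware. A crossbar stores one conductance pattern per letter; applying an image’s pixels as input voltages yields, by Ohm’s and Kirchhoff’s laws, one summed output current per column, and the largest current names the class. The 5×5 letter templates and the bipolar (+1/−1) encoding – standing in for the differential memristor pairs often used to represent signed weights – are illustrative assumptions.

```python
import numpy as np

# Toy crossbar classifier: one signed-conductance column per letter.
# Output current per column = dot product of the input "voltages"
# with that column's stored conductances.

templates = {
    "M": np.array([[1,0,0,0,1],
                   [1,1,0,1,1],
                   [1,0,1,0,1],
                   [1,0,0,0,1],
                   [1,0,0,0,1]]),
    "I": np.array([[0,0,1,0,0]] * 5),
    "T": np.array([[1,1,1,1,1],
                   [0,0,1,0,0],
                   [0,0,1,0,0],
                   [0,0,1,0,0],
                   [0,0,1,0,0]]),
}
letters = list(templates)

def bipolar(img):
    """Map pixel values {0, 1} to signed weights {-1, +1}."""
    return 2.0 * np.asarray(img).ravel() - 1.0

# "Programmed" weight matrix: one column per letter
G = np.stack([bipolar(templates[l]) for l in letters], axis=1)

def classify(image):
    """Largest summed output current wins."""
    currents = bipolar(image) @ G
    return letters[int(np.argmax(currents))]

# A clean letter matches its own column best
print(classify(templates["T"]))   # -> "T"

# A mildly corrupted "M" (two flipped pixels) is still recognized
noisy_m = templates["M"].copy()
noisy_m[0, 1] ^= 1
noisy_m[4, 4] ^= 1
print(classify(noisy_m))          # -> "M"
```

The point of the sketch is that the “training” lives entirely in the stored conductances, so classification needs no external software – each read-out is a single analog matrix-vector multiply.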
The image sensors were paired with artificial synapse arrays trained to recognize the letters M, I, and T. In a conventional approach, the sensor’s signals would be relayed to a processor via physical wires. Here, the team instead fabricated an optical system between each sensor and artificial synapse array, enabling the layers to communicate without a physical connection.
“Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you’d need to make a new chip if you wanted to add any new function,” said MIT postdoc Hyunseok Kim.
How does it work?
The team’s optical communication system comprises paired photodetectors and LEDs patterned with tiny pixels. The photodetectors form an image sensor that receives data, while the LEDs transmit that data to the next layer.
For the chip to be reconfigurable like LEGO, these transmitting and receiving components must be compatible across every layer, so that any layer can be swapped in or out.
“The sensory chip at the bottom receives signals from the outside environment and sends the information to the next chip above by light signals. The next chip, which is a processor layer, receives the light information and then processes the pre-programmed function. Such light-based data transfer continues to other chips above, thus performing multi-functional tasks as a whole,” the team explained.
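As a software analogy – a hedged sketch of the concept, not the team’s actual design – the stack behaves like a pipeline: each layer consumes the signal emitted by the layer below, transforms it, and emits it upward, and any stage can be replaced without rewiring the rest. The layer functions below are hypothetical stand-ins.

```python
from typing import Callable, List

Signal = list  # stand-in for the optical signal passed between layers

def sensor_layer(raw: Signal) -> Signal:
    """Bottom layer: digitize the outside-world input."""
    return [1 if v > 0.5 else 0 for v in raw]

def basic_processor(bits: Signal) -> Signal:
    """A first-generation processing layer (pass-through here)."""
    return bits

def denoising_processor(bits: Signal) -> Signal:
    """An upgraded layer: majority vote over each bit and its
    neighbors (ties round up)."""
    out = []
    for i in range(len(bits)):
        window = bits[max(0, i - 1):i + 2]
        out.append(1 if 2 * sum(window) >= len(window) else 0)
    return out

def run_stack(layers: List[Callable[[Signal], Signal]],
              raw: Signal) -> Signal:
    """'Light' travels upward through the stack, layer by layer."""
    signal = raw
    for layer in layers:
        signal = layer(signal)
    return signal

stack = [sensor_layer, basic_processor]
print(run_stack(stack, [0.9, 0.1, 0.8]))   # -> [1, 0, 1]

# "Snap in" a better processor without touching the sensor layer:
stack[1] = denoising_processor
print(run_stack(stack, [0.9, 0.1, 0.8]))   # -> [1, 1, 1]
```

Because the stages only agree on the shape of the signal passed between them – the software equivalent of compatible LEDs and photodetectors – upgrading one layer never requires rebuilding the rest of the stack.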
The team fabricated a single chip with a computing core measuring about four square millimeters. The chip is stacked with three image-recognition “blocks,” each comprising an image sensor, an optical communication layer, and an artificial synapse array for classifying one of three letters: M, I, or T.
They then shone a pixelated image of random letters onto the chip and measured the electrical current that each neural network array produced in response.
The team noted that the chip correctly classified clear images of each letter but struggled to distinguish blurry ones. Because the design is reconfigurable, the researchers simply swapped out the chip’s processing layer for a better processor, after which the chip accurately identified the images.
‘Each layer could be sold separately like a video game’
The researchers’ biggest challenge was realizing a reliable light-based data transfer between chips.
“We designed a unique device architecture comprised of thru-holes in silicon wafers, which work as a path for efficient light-based communications without cross-talk between nearby pixels,” they told us.
So, what’s next?
“So far, we only showed letter recognition and denoising function from light input as a proof-of-concept demonstration. The reconfigurability of our system allows changing the sensory layer (which was the image sensor) to other sensory layers. By this, we aim to implement our system to processing other sensory inputs, such as haptic, auditory, and other biomedical signals,” they said.
Jeehwan Kim, associate professor of mechanical engineering at MIT, said that a general chip platform could be made, and each layer could be sold separately like a video game. “We could make different types of neural networks, like for image or voice recognition, and let the customer choose what they want, and add to an existing chip like a LEGO,” he said.
Though making commercial products is a ‘totally different story’, the researchers hope to see such LEGO-like stackable and reconfigurable AI chips within five years.