Microchip Advance Pushes AI Toward Edge Computing
Researchers at Princeton University have created software to increase the speed and efficiency of specialized artificial intelligence (AI) systems.
The software reduces power demand and the amount of data that must be exchanged with remote servers. As a result, AI applications such as piloting software for drones can run at the edge of the computing infrastructure rather than in the cloud.
Princeton has been active in this area for some time. Two years ago, its researchers built a new chip to improve the performance of neural networks; it performed tens to hundreds of times better than other microchips on the market.
Over the following two years, the team continued to refine the chip and developed a software system that lets AI systems use the new hardware efficiently. The aim was to make systems built on the new chips scalable in both hardware and software execution.
One drawback remains: such systems require massive amounts of power and memory. To overcome this, the researchers designed a chip that performs computation and stores data in the same place, a technique called in-memory computing. This cuts the energy and time needed to exchange information with dedicated memory.
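To see why keeping computation next to the data matters, here is a toy cost model, not Princeton's design, that compares a conventional matrix-vector multiply (every weight fetched from dedicated memory) with an in-memory one. The per-operation energy figures are illustrative assumptions only, chosen to reflect the common observation that moving data costs far more than arithmetic.

```python
# Toy energy model: data movement vs. arithmetic in a matrix-vector
# multiply. The energy constants below are hypothetical, picked only
# to illustrate the in-memory computing argument.

E_MAC = 1.0          # energy per multiply-accumulate (arbitrary units)
E_MEM_FETCH = 100.0  # energy to fetch one weight from dedicated memory

def conventional_energy(rows: int, cols: int) -> float:
    """Each weight is fetched from separate memory before every MAC."""
    macs = rows * cols
    fetches = rows * cols
    return macs * E_MAC + fetches * E_MEM_FETCH

def in_memory_energy(rows: int, cols: int) -> float:
    """Weights live where the computation happens: no per-MAC fetch."""
    macs = rows * cols
    return macs * E_MAC

if __name__ == "__main__":
    conv = conventional_energy(256, 256)
    imc = in_memory_energy(256, 256)
    print(f"conventional: {conv:.0f}, in-memory: {imc:.0f}, "
          f"saving: {conv / imc:.0f}x")
```

Under these assumed constants the fetch cost dominates, so eliminating it shrinks the total energy by roughly two orders of magnitude; the real savings depend on the actual memory hierarchy.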
In-memory computing requires analog operation, which is sensitive to signal corruption. To work around this, the team relied on capacitors rather than transistors in the chip design. Capacitors are not affected by shifts in voltage in the same way, and they can be made more precisely.
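The core analog operation can be pictured as a charge-domain multiply-accumulate: an input voltage is applied across a capacitor whose size encodes a weight, and the accumulated charge represents the dot product. The following is a minimal numerical sketch of that idea, not the actual circuit; all values are illustrative.

```python
# Toy model of a charge-domain multiply-accumulate (MAC).
# Each capacitor contributes charge Q = C * V, where the capacitance C
# encodes a weight and the voltage V encodes an input activation.
# Summing the charge yields the dot product w . x.

def charge_domain_mac(capacitances, voltages):
    """Total accumulated charge: sum of C * V over all capacitors."""
    assert len(capacitances) == len(voltages)
    return sum(c * v for c, v in zip(capacitances, voltages))

weights = [0.5, 1.0, 2.0]    # capacitances (weights), arbitrary units
activations = [1.0, -1.0, 0.5]  # input voltages (activations)

print(charge_domain_mac(weights, activations))  # 0.5 - 1.0 + 1.0 = 0.5
```

Because capacitance is set by physical geometry rather than transistor behavior, the effective weight in this scheme stays stable as supply voltage drifts, which is the precision advantage the paragraph describes.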
Despite the challenges surrounding analog systems, they carry many advantages for applications like neural networks. The researchers are now looking to combine the two approaches: digital systems would remain central, while neural networks running on analog chips handle fast, efficient specialized operations.