
Advancing Power Efficiency in AI: shared by Yevgeni Yermolin


By 2030, most devices are expected to be internet-connected, a transition that will demand efficient networking and hardware and that computer engineers will help deliver as more aspects of daily life move online. Among those working on energy efficiency is Yevgeni Yermolin, a distinguished researcher in visual sensing and technology, known for his pivotal work at the Technion, an institution ranked among the top 100 universities and engineering schools in the world.

Yevgeni Yermolin has turned theoretical concepts into practical hardware solutions, working to advance the power efficiency that is essential for overall system performance and sustainability. He is also actively involved with the IEEE, a prestigious organization whose standards and publications serve the entire engineering community. Here, Yevgeni shares emerging trends poised to revolutionize computing, enhance performance, and expand the capabilities of a wide range of applications, exploring several key trends in power efficiency for AI computing systems and the ways researchers share the latest insights and developments with peers in their field.

AI-Optimized Hardware

The increasing demand for AI-specific hardware is driven by the growing complexity and diversity of AI applications, particularly in deep learning and neural network training. It underscores the importance of innovative research environments such as the Technion’s Visual Sensing Theory & Applications Laboratory (VISTA), part of the Computer Science faculty, where Yevgeni Yermolin contributed significantly to advancing the field. The Council for Higher Education reports that, according to Computer Science Rankings 2024, the Technion has been ranked 32nd globally among the most cited institutions in computer science over the past 25 years. Its research spans the entire spectrum of computer science, often intersecting with mathematics, physics, engineering, and medicine, and its work is at the forefront of technological and scientific advancement. Yevgeni shares: “The lab’s mission was to delve into theoretical concepts and convert these ideas into practical hardware solutions. It was exhilarating to transform abstract theories into hardware implementations, which demanded creativity and teamwork. Each project required careful consideration of logical element requirements, power consumption, and chip area allocation. The collaborative spirit in the lab encouraged open discussions and brainstorming sessions, often leading to unexpected breakthroughs. I learned a lot about balancing theory with practical constraints, and seeing our ideas come to life as functional prototypes was incredibly rewarding. This comprehensive approach was essential for assessing the feasibility and profitability of our designs, ensuring that each implementation was both efficient and effective.”

Energy Efficiency and Sustainability

As AI systems become more pervasive, the energy consumed in running AI models is a growing concern, and hardware design has increasingly focused on energy efficiency and sustainability. Yevgeni Yermolin states: “Designers are developing low-power AI chips and exploring new materials and architectures to reduce the environmental impact of AI computing.” Yevgeni described the results of his own work in an article titled “Feature Map Transform Coding for Energy-Efficient CNN Inference,” which focuses on implementing neural networks in hardware and explores techniques to optimize convolutional neural networks (CNNs) for reduced energy consumption, enhancing their efficiency and performance in hardware applications while maintaining inference accuracy.

Yevgeni adds: “Tech companies are also investing in research to create AI models that require less computational power without sacrificing performance. Techniques such as model compression, pruning, and quantization are being used to optimize AI models for energy-efficient hardware, making AI more sustainable and accessible.”
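The quantization mentioned above can be illustrated with a minimal sketch, without assuming anything about the specific tools these companies use: symmetric 8-bit quantization maps each float weight to a small integer code plus one shared scale factor, making storage and arithmetic cheaper at the cost of a little precision. The function names and toy weight values below are purely illustrative.

```python
def quantize_int8(weights):
    """Map float weights to int8 codes sharing one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0  # largest magnitude -> 127
    codes = [round(w / scale) for w in weights]   # each code fits in 8 bits
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from the int8 codes."""
    return [c * scale for c in codes]

weights = [0.12, -0.5, 0.33, 1.0, -0.07]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

Dequantizing recovers the weights only approximately, with error bounded by half a scale step; that small accuracy loss is the price paid for large memory and energy savings.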

Energy-Efficient Algorithms

The development of energy-efficient algorithms is another trend focused on reducing the power consumption of AI systems. These algorithms are designed to optimize computational processes, ensuring that AI tasks are completed with minimal energy expenditure. This includes advancements in machine learning techniques that require fewer resources to achieve high performance.
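One such resource-reducing technique, magnitude pruning (named earlier among the model-optimization methods), can be sketched in a few lines: the weights closest to zero are zeroed out so the corresponding multiply-accumulate operations can be skipped at inference time. The helper name, sparsity level, and toy weights below are illustrative assumptions, not details from the article.

```python
def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights so that
    their multiply-accumulate operations can be skipped entirely."""
    k = int(len(weights) * sparsity)           # how many weights to drop
    if k == 0:
        return list(weights)
    survivors = sorted(weights, key=abs)[k:]   # largest |w| survive
    cutoff = min(abs(w) for w in survivors)
    return [w if abs(w) >= cutoff else 0.0 for w in weights]

weights = [0.9, -0.05, 0.4, 0.01]
pruned = magnitude_prune(weights, sparsity=0.5)  # half the weights become zero
```

With half the weights set to zero, a sparsity-aware accelerator can skip half the arithmetic, which is exactly the kind of saving energy-efficient algorithms aim for.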

Yevgeni Yermolin conducted research at the Technion on improving power efficiency in AI computing systems, specifically tackling issues related to memory access and processing demands. He shares: “In AI image processing, power efficiency is crucial, especially as the demand for battery-powered autonomous devices continues to rise. Processing video images for movement necessitates constant reading and writing to memory, which can be a significant drain on energy resources. My research at the Technion specifically targeted this issue through innovative image encoding techniques that effectively reduce the volume of data written to memory. By minimizing memory access, one of the largest consumers of power, we can significantly enhance energy efficiency in these systems. This approach not only supports longer battery life for devices but also contributes to more sustainable AI applications overall.”
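The article does not detail the encoding scheme used in this research, but the general idea of cutting memory writes can be sketched with a simple frame-delta encoder: instead of writing every pixel of each new video frame back to memory, only the pixels that changed since the previous frame are written. All names and the toy frames below are illustrative, not the actual technique.

```python
def delta_encode(prev_frame, curr_frame, threshold=0):
    """Return (index, value) pairs only for pixels that changed by more
    than `threshold`, so far fewer words are written to memory."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev_frame, curr_frame))
            if abs(c - p) > threshold]

def delta_decode(prev_frame, writes):
    """Rebuild the current frame by applying only the recorded writes."""
    frame = list(prev_frame)
    for i, value in writes:
        frame[i] = value
    return frame

prev = [10, 10, 10, 10]           # previous frame, already in memory
curr = [10, 12, 10, 9]            # new frame arriving from the sensor
writes = delta_encode(prev, curr)  # only 2 memory writes instead of 4
```

Raising `threshold` trades a little fidelity for even fewer writes, mirroring the accuracy-versus-energy trade-off this line of research navigates.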

The Role of Specialized Hardware in Advancing Neural Networks

Advances in neural networks are increasingly driving innovation in hardware implementation, with a focus on specialized processors. These hardware solutions are designed to handle the intensive parallel processing demands of neural networks, improving efficiency and speed, and the integration of AI-specific hardware accelerates the deployment of neural networks in applications ranging from real-time data analysis to autonomous systems. Yevgeni Yermolin, who had the opportunity to discuss this topic at the IJCNN 2020 Conference, shares: “I focused on discussions surrounding various hardware implementations, which were particularly relevant to my work as a logic designer. One of the key insights I gained was the critical importance of scalability in neural networks, especially when integrating them into hardware solutions. As applications for neural networks continue to evolve rapidly, there’s a clear trend towards the development of products based on Application-Specific Integrated Circuits (ASICs) and Field-Programmable Gate Arrays (FPGAs).”

These technologies hold the potential to significantly enhance the efficiency and performance of neural network systems, as they allow for customized hardware configurations tailored to specific tasks. This shift towards specialized hardware is likely to facilitate faster processing speeds, lower power consumption, and improved overall system performance. Moreover, the insights shared at the conference emphasized the need for research that not only focuses on algorithmic advancements but also addresses the hardware capabilities required to support these next-generation applications. This holistic approach will be crucial in driving the future of neural networks and ensuring they can meet the demands of increasingly complex tasks.

Yevgeni adds: “Sharing knowledge about these technologies through such conferences is essential, as they have the potential to significantly boost the efficiency and performance of neural network systems through customized hardware configurations tailored to specific tasks. This move towards specialized hardware promises faster processing speeds, reduced power consumption, and enhanced overall system performance.”

Bridging the Gap: The Essential Synergy Between Academic Research and Industry in AI

To become a professional in AI, understanding the relationship between academic research and industry needs is crucial. There’s a common misconception that academic research is mostly theoretical and slow to impact practical applications. However, this view is misleading. Much of the research conducted in universities is driven by industry needs and often funded by companies eager to stay at the cutting edge of technology. These companies sponsor research to leverage academic breakthroughs for their competitive advantage and innovation. 

This collaboration results in projects that address real-world problems, ensuring research is immediately relevant. Yevgeni shares, “The strong connections between academia and industry also create a smooth transition for students entering the workforce. Many graduates, after completing their master’s or doctoral studies, move directly into research and development roles in leading companies, illustrating how academic research fuels industry innovation.

Ultimately, the synergy between academia and industry is essential for advancing technology and tackling complex challenges efficiently.”

To share his expertise with colleagues, Yevgeni Yermolin authored three articles related to the hardware implementation of neural networks. These include “Feature Map Transform Coding for Energy-Efficient CNN Inference,” which explores optimizing energy consumption in convolutional neural networks, “HCM: Hardware-Aware Complexity Metric for Neural Network Architectures,” focusing on evaluating neural network designs, and “Early-Stage Neural Network Hardware Performance Analysis,” which examines performance metrics in the initial phases of hardware development.

Speaking about continuous improvement, Yevgeni Yermolin adds that his involvement with the IEEE has significantly influenced his career and research, providing access to major conferences and top-ranking journals, which are essential in the field. He says: “IEEE’s commitment to education and global collaboration was evident to me even as a student, highlighting its role in supporting learners with valuable resources. As a senior member, I find it an honor to be part of an organization that sets crucial engineering standards, benefiting the entire community. This affiliation connects everyone with a vast network of professionals, fostering collaboration and knowledge exchange, which enriches our work and drives innovation. As we move towards an increasingly interconnected world, the insights and developments shared by experts will be essential in ensuring that technological advancements are both efficient and impactful.”

Vidushi Saxena

Passionate journalist with a Bachelors in Journalism and Mass Communication, dedicated to crafting compelling news articles and avidly exploring the dynamic world of current affairs through insightful blog readings. Embracing the power of words to inform and inspire.
