IIT Guwahati Researchers Develop New Memory Architecture Methods to Boost Processor Speeds

IIT Guwahati researchers have developed methods to address key problems in the computer systems domain. Their specific contributions lie in multi-core processor-based systems, which need a correspondingly large on-chip memory to meet the data demands of applications while keeping energy consumption low enough to stay within the thermal design power (TDP) budget.

Indian Institute of Technology Guwahati researchers have made fundamental contributions to memory architectures by avoiding writes of redundant data values and by mitigating slow, frequent writes in multi-core processor systems.
Explaining the challenges of multi-core processor-based systems, Prof. Hemangee K. Kapoor, Department of CSE, IIT Guwahati, said, “Application data access patterns are not uniformly distributed, which leads to several orders of magnitude more writes to certain memory locations than to others. Such heavily written locations become prone to wear-out and thus prevent the use of complete memory devices without error correction.”

To handle this non-uniformity, the IIT Guwahati researchers developed methods that evenly distribute accesses across the overall memory capacity, reducing the wear-out pressure on heavily written locations. They also worked on avoiding the writing of redundant values, further prolonging the memory's lifetime.
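Both ideas can be illustrated with a toy Python model. This is a simplified sketch, not the researchers' actual design: the mapping rotation here is loosely in the spirit of known Start-Gap-style wear-levelling schemes, and the value comparison before writing implements redundant-write (silent-store) elimination.

```python
class WearLevelledNVM:
    """Toy non-volatile memory with rotating-map wear-levelling and
    redundant-write elimination. Illustrative sketch only."""

    def __init__(self, size):
        self.size = size
        self.cells = [0] * size      # stored values, per physical cell
        self.writes = [0] * size     # write counter, per physical cell
        self.rotation = 0            # logical-to-physical mapping offset
        self.total_writes = 0

    def _physical(self, logical):
        # Rotating mapping: a hot logical address lands on different
        # physical cells over time, spreading the wear.
        return (logical + self.rotation) % self.size

    def _rotate(self):
        # Shift the mapping by one and migrate data so logical contents
        # are preserved (a real design would move one line at a time
        # through a spare "gap" cell rather than all at once).
        self.cells = [self.cells[-1]] + self.cells[:-1]
        self.rotation = (self.rotation + 1) % self.size

    def read(self, logical):
        return self.cells[self._physical(logical)]

    def write(self, logical, value):
        phys = self._physical(logical)
        if self.cells[phys] == value:
            return False             # redundant value: skip the write
        self.cells[phys] = value
        self.writes[phys] += 1
        self.total_writes += 1
        if self.total_writes % 100 == 0:
            self._rotate()           # periodically spread future writes
        return True
```

Driving this model with a workload that hammers one logical address shows the writes spreading across several physical cells instead of exhausting a single one, while repeated writes of an unchanged value never reach the cells at all.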

“Slow and frequent writes can be re-directed to temporary SRAM partitions, sparing the NVM from such frequent accesses. Such structures are called hybrid memories,” added Kapoor.
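The hybrid-memory idea can be sketched as follows. The names, thresholds, and eviction policy here are hypothetical simplifications for illustration, not the policies used in the actual research: an address whose write count crosses a threshold is redirected to a small SRAM partition, and only an eventual eviction writes the final value back to the NVM once.

```python
class HybridMemory:
    """Toy hybrid SRAM/NVM memory: write-hot addresses are absorbed
    by a small SRAM partition. Illustrative sketch only."""

    def __init__(self, sram_slots=4, hot_threshold=3):
        self.nvm = {}                  # backing non-volatile store
        self.sram = {}                 # small fast partition for hot data
        self.write_count = {}          # per-address write frequency
        self.sram_slots = sram_slots
        self.hot_threshold = hot_threshold
        self.nvm_writes = 0            # how often the NVM was written

    def write(self, addr, value):
        self.write_count[addr] = self.write_count.get(addr, 0) + 1
        if addr in self.sram:
            self.sram[addr] = value    # already redirected: SRAM absorbs it
            return "sram"
        if (self.write_count[addr] >= self.hot_threshold
                and len(self.sram) < self.sram_slots):
            self.sram[addr] = value    # address became write-hot: redirect
            return "sram"
        self.nvm[addr] = value         # cold address: write NVM directly
        self.nvm_writes += 1
        return "nvm"

    def read(self, addr):
        if addr in self.sram:
            return self.sram[addr]
        return self.nvm.get(addr, 0)

    def evict(self, addr):
        # On eviction, the dirty SRAM copy is written back to NVM once,
        # replacing what could have been many individual NVM writes.
        if addr in self.sram:
            self.nvm[addr] = self.sram.pop(addr)
            self.nvm_writes += 1
```

With ten writes to one address, only the first two (below the hot threshold) and the final write-back touch the NVM; the remaining writes are absorbed by the SRAM partition.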

The researchers’ current and future contributions will help mitigate the drawbacks of promising emerging memories and ease their adoption. Once these drawbacks are removed, scientists can find newer avenues for using such technologies without worrying about their limitations.

Artificial Intelligence (AI) and Machine Learning (ML) are used as tools to solve many real-time problems, but they involve enormous computations on huge datasets. Building near-memory accelerators to process this data is efficient in both performance and energy. The research team is also working on customised parallel architecture designs that deliver better FLOPS.