Meta’s new LLM Compiler could transform the way software is compiled and optimized
Meta’s Artificial Intelligence Research Team Said Today It’s Open-Sourcing a Suite of Robust AI Models
In a blog post, Meta’s systems research team explains that training LLMs is a resource-intensive and hugely expensive task that involves extensive data collection and large numbers of graphics processing units. As such, the process is prohibitive for many organizations and researchers.
However, the team believes LLMs can help offset those costs through their application to code and compiler optimization, which refers to the process of modifying software so that it runs more efficiently or uses fewer resources.
The researchers said the use of LLMs for code and compiler optimization remains underexplored. So they set about training LLM Compiler on a massive corpus of 546 billion tokens of LLVM intermediate representation and assembly code, with the aim of making it able to “comprehend compiler intermediate representations, assembly language and optimization techniques.”