MIT researchers have invented a machine-learning tool that predicts how fast computer chips will execute code from various applications.
To get code to run as fast as possible, developers and compilers (programs that translate programming language into machine-readable code) typically use performance models that run the code through a simulation of given chip architectures.
Compilers use that information to automatically optimize code, and developers use it to tackle performance bottlenecks on the microprocessors that will run it.
But performance models for machine code are handwritten by a relatively small group of experts and are not properly validated. As a consequence, the simulated performance measurements often deviate from real-life results.
In a series of conference papers, the researchers describe a novel machine-learning pipeline that automates this process, making it easier, faster, and more accurate.
The researchers presented a benchmark suite of basic blocks from a variety of domains, including machine learning, compilers, cryptography, and graphics, that can be used to validate performance models.
They pooled more than 300,000 of the profiled blocks into an open-source dataset called BHive. During their evaluations, the researchers’ tool, called Ithemal, predicted how fast Intel chips would run code even better than a performance model built by Intel itself.
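The article doesn’t spell out the dataset’s format, but a corpus of profiled blocks like BHive can be pictured as pairs of machine code and measured timing. The Python sketch below loads such pairs from a hypothetical CSV file; the column layout and file name are assumptions for illustration, not BHive’s documented format.

# Minimal sketch: load a BHive-style corpus of profiled basic blocks.
# Assumes each CSV row is "<hex-encoded machine code>,<measured cycles>";
# the real BHive layout may differ, so treat these columns as placeholders.
import csv

def load_blocks(path):
    """Return a list of (machine_code_bytes, measured_cycles) pairs."""
    blocks = []
    with open(path, newline="") as f:
        for hex_code, cycles in csv.reader(f):
            blocks.append((bytes.fromhex(hex_code), float(cycles)))
    return blocks

if __name__ == "__main__":
    blocks = load_blocks("bhive_sample.csv")  # hypothetical file name
    print(f"loaded {len(blocks)} basic blocks")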
Ultimately, developers and compilers can use the tool to generate code that runs faster and more efficiently on an ever-growing number of diverse and “black box” chip designs. “Modern computer processors are opaque, horrendously complicated, and difficult to understand,” Carbin says.
Designing performance models by hand can be “a black art,” he adds. Intel provides extensive documentation of more than 3,000 pages describing its chips’ architectures. But only a small group of experts currently builds the performance models that simulate the execution of code on those architectures.
“Intel’s documents are neither error-free nor complete, and Intel will omit certain things because it’s proprietary,” Mendis says. “When you use data, however, you don’t need to know the documentation. If there’s something hidden, you can learn it directly from the data.”
To do so, the researchers clocked the average number of cycles a given microprocessor takes to compute basic block instructions (basically, the sequence of commands to boot up, execute, and shut down), without human intervention. Automating the process enables rapid profiling of hundreds of thousands or millions of blocks.
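As a rough illustration of that profiling loop, the Python sketch below runs a stand-in block many times, averages the elapsed time, and converts it to an approximate cycle count using an assumed clock frequency. The real profiler reads hardware cycle counters on actual machine code rather than timing Python calls.

# Illustration only: average many runs of a stand-in "block" and convert
# wall-clock time to rough cycles. CPU_HZ is an assumed nominal frequency.
import time

CPU_HZ = 3.0e9  # assumed 3 GHz clock

def profile(block, iterations=1_000_000):
    """Return the approximate average cycles per execution of `block`."""
    start = time.perf_counter_ns()
    for _ in range(iterations):
        block()
    elapsed_s = (time.perf_counter_ns() - start) / 1e9
    return elapsed_s * CPU_HZ / iterations

# Example: profile a trivial arithmetic stand-in for a basic block.
print(f"~{profile(lambda: 3 * 7 + 1):.1f} cycles per call")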
In training, the Ithemal model analyzes millions of automatically profiled basic blocks to learn exactly how different chip architectures will execute computation.
Importantly, Ithemal takes raw text as input and does not require manually adding features to the input data. In testing, Ithemal can be fed previously unseen basic blocks and a given chip, and will generate a single number indicating how fast the chip will execute that code.
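To make that concrete, one way to map raw instruction text to a single speed number is a small recurrent network over tokenized instructions. The Python/PyTorch sketch below is a toy version of that idea, not Ithemal’s actual architecture; the vocabulary size, layer sizes, and tokenization are invented for illustration.

# Toy sketch: tokenize a basic block's instruction text, run it through a
# recurrent layer, and regress one throughput number. Ithemal's real model
# is more elaborate; every size here is a placeholder assumption.
import torch
import torch.nn as nn

class ThroughputPredictor(nn.Module):
    def __init__(self, vocab_size=1024, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # one number: predicted cycles

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        _, (h, _) = self.lstm(self.embed(token_ids))
        return self.head(h[-1]).squeeze(-1)  # (batch,) predicted cycle counts

# Untrained usage example: two "blocks" already mapped to integer token ids.
model = ThroughputPredictor()
tokens = torch.randint(0, 1024, (2, 12))
print(model(tokens))  # meaningless until trained on profiled blocks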
The researchers found Ithemal cut error rates in accuracy (meaning the difference between the predicted speed and the real-world speed) by 50 percent over traditional hand-crafted models. Further, in their next paper, they showed that Ithemal’s error rate was 10 percent, while the Intel performance-prediction model’s error rate was 20 percent, on a variety of basic blocks across multiple domains.
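The error-rate figures can be read as the average relative gap between predicted and measured speed. Below is a minimal Python sketch of that computation, assuming a mean-absolute-percentage-style metric (the papers’ exact metric may differ).

# Average relative error between predicted and measured cycle counts.
def error_rate(predicted, measured):
    return sum(abs(p - m) / m for p, m in zip(predicted, measured)) / len(measured)

predicted = [95.0, 210.0, 52.0]   # model's predicted cycles (made-up numbers)
measured = [100.0, 200.0, 50.0]   # cycles measured by the profiler
print(f"{error_rate(predicted, measured):.1%}")  # prints 4.7%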
The tool now makes it easier to quickly learn performance speeds for any new chip architectures, Mendis says. For instance, domain-specific architectures, such as Google’s new Tensor Processing Unit used specifically for neural networks, are now being built but aren’t widely understood.
“If you want to train a model on some new architecture, you just collect more data from that architecture, run it through our profiler, use that information to train Ithemal, and now you have a model that predicts performance,” Mendis says.
Next, the researchers are studying methods to make models interpretable. Much of machine learning is a black box, so it’s not really clear why a particular model made its predictions.
“Our model is saying it takes a processor, say, 10 cycles to execute a basic block. Now, we’re trying to figure out why,” Carbin says. “That’s a fine level of granularity that would be amazing for these types of tools.”