Computer scientists at Rice University have overcome a major obstacle in the burgeoning artificial intelligence industry, showing that it is possible to accelerate deep learning technology without specialized GPU acceleration hardware.

Many companies invest heavily in GPUs and other specialized hardware to carry out deep learning, a powerful form of artificial intelligence behind digital assistants.

Examples include Alexa and Siri, facial recognition, product recommendation systems and other technologies.

For example, Nvidia, maker of the Tesla V100 Tensor Core GPU, the industry's gold standard for deep learning hardware, recently reported a 41% year-over-year increase in fourth-quarter revenue.

The Rice researchers have developed a cost-saving alternative to GPUs: an algorithm called the "sub-linear deep learning engine" (SLIDE) that runs on a general-purpose central processing unit (CPU) without any specialized acceleration hardware.

According to the researchers, their tests show that SLIDE is the first smart algorithmic implementation of deep learning on CPUs that can outperform GPU hardware acceleration on industry-scale recommendation datasets with large, fully connected architectures.

Standard backpropagation training for deep neural networks requires matrix multiplication, an ideal workload for GPUs. SLIDE avoids that step by converting neural network training into a search problem that can be solved with hash tables.

This drastically reduces SLIDE's computational cost compared with backpropagation training.
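To make the idea concrete, here is a minimal Python sketch of the kind of hashing-based neuron selection this description suggests. The SimHash-style random projections, names and sizes are illustrative assumptions, not taken from the SLIDE codebase: instead of multiplying an input against every neuron's weights, neurons are indexed in a hash table and only those in the input's bucket are activated.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
d, n_neurons, n_bits = 128, 10_000, 12      # input dim, layer width, hash bits
weights = rng.normal(size=(n_neurons, d))   # one weight vector per neuron
planes = rng.normal(size=(n_bits, d))       # random hyperplanes for SimHash

def simhash(v):
    """Map a vector to an integer bucket id from the signs of random projections."""
    bits = (planes @ v) > 0
    return int(bits.dot(1 << np.arange(n_bits)))

# Build the hash table once: bucket id -> list of neuron ids.
table = defaultdict(list)
for j in range(n_neurons):
    table[simhash(weights[j])].append(j)

def active_neurons(x):
    """Constant-time lookup of the small set of neurons likely to fire for input x."""
    return table[simhash(x)]

x = rng.normal(size=d)
hits = active_neurons(x)
print(f"activating {len(hits)} of {n_neurons} neurons")  # a tiny fraction of the layer
```

Only the handful of neurons returned by the lookup need to be evaluated and updated, which is what replaces the dense matrix multiplication in this sketch.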

For example, the top GPU platforms offered by Amazon, Google and others for deep learning cloud services use eight Tesla V100s and cost around $100,000, Shrivastava said.

The researchers say: "We have one in the lab, and in our test case we took a workload that is perfect for the V100, one with more than 100 million parameters in large, fully connected networks that fit in GPU memory."

Deep learning networks can contain millions or even billions of artificial neurons. Working together, they can learn to make humanlike expert decisions simply by studying large amounts of data.

For example, when a deep neural network is trained to identify objects in photos, different neurons learn to recognize pictures of cats than the ones that learn to recognize school buses.

SLIDE uses numerical methods to encode large amounts of information, such as an entire website or a chapter of a book, as a series of numbers called hashes. A hash table is a list of hashes that can be searched very quickly.
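As a rough illustration of what that encoding looks like in practice, the toy Python snippet below hashes overlapping word shingles into integers and stores them in a hash table for near-instant lookup. The MD5-based scheme and the tiny example documents are assumptions made purely for illustration, not the hashing used by SLIDE.

```python
import hashlib

def text_to_hashes(text, shingle_size=3):
    """Reduce a chunk of text to a short series of integer hashes."""
    words = text.split()
    shingles = [" ".join(words[i:i + shingle_size])
                for i in range(max(1, len(words) - shingle_size + 1))]
    return [int(hashlib.md5(s.encode()).hexdigest()[:8], 16) for s in shingles]

# Hash table: hash value -> ids of the documents that contain it.
index = {}
docs = {"doc1": "deep learning on commodity cpus",
        "doc2": "training neural networks with gpu acceleration"}
for doc_id, text in docs.items():
    for h in text_to_hashes(text):
        index.setdefault(h, set()).add(doc_id)

# Lookup is a constant-time table probe per hash, not a scan of the corpus.
query_hashes = text_to_hashes("deep learning on commodity cpus")
matches = set().union(*(index.get(h, set()) for h in query_hashes))
print(matches)  # {'doc1'}
```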

It would make no sense to implement our algorithm in TensorFlow or PyTorch, the researchers say, because the first thing those frameworks want to do is convert whatever you are doing into a matrix multiplication problem.

"What I mean by data parallelism is that if I have two data examples I want to train on, say one is a picture of a cat and the other is a picture of a bus, they will likely activate different neurons, and SLIDE can update, or train on, them independently of each other."
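The sketch below illustrates that idea, again as an assumption-laden toy in Python rather than SLIDE's implementation: two examples fire on mostly disjoint sets of neurons, so their sparse updates touch different rows of the weight matrix and could be applied by different CPU threads with little coordination.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, d = 1_000, 64
weights = rng.normal(size=(n_neurons, d))

def sparse_update(x, active, lr=0.01):
    """Update only the rows (neurons) this example activated.

    The input vector x stands in for the per-neuron gradient here; real
    training would backpropagate through just the active neurons.
    """
    for j in active:
        weights[j] -= lr * x

# "Cat" and "bus" examples firing on different neuron subsets.
cat_active = rng.choice(n_neurons, size=20, replace=False)
bus_active = rng.choice(n_neurons, size=20, replace=False)

sparse_update(rng.normal(size=d), cat_active)
sparse_update(rng.normal(size=d), bus_active)

overlap = set(cat_active) & set(bus_active)
print(f"rows touched by both examples: {len(overlap)}")  # usually zero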

The SLIDE team's first experiments produced significant cache thrashing, yet their training times were still comparable to or faster than GPU training times.

The whole message is: "Don't get stuck with matrix multiplication and GPU memory."

The researchers say: "Ours may be the first algorithmic approach to beat GPUs, but I hope it is not the last. The field needs new ideas, and that is a big part of what MLSys is about."