Raytheon Intelligence & Space Helping DARPA Make Applications Run 100x Faster

Image credit: DARPA

Raytheon Intelligence & Space, a Raytheon Technologies business, is developing a state-of-the-art framework for training artificial neural networks under a $12 million contract from the Defense Advanced Research Projects Agency (DARPA) for the Fast Network Interface Cards (FastNICs) program. Although network performance has accelerated, network interface hardware has not kept pace, preventing applications from utilizing the full speed of the network. The FastNICs program aims to improve network interface card performance 100x through clean-slate networking approaches.


Artificial neural networks power many of today's computer vision and natural language processing tasks, such as automated target recognition and tracking in Earth-observing satellite imagery, object recognition in self-driving cars, and language translation. In many cases, training these networks takes several weeks and costs tens of thousands of dollars per training run.


“Being able to train a neural network 100x faster means application developers can explore a hundred times more ideas, and exploit currently abundant but untapped datasets, which translates to direct value to customers and to the research community,” said Brad Tousley, president at Raytheon BBN Technologies.


The team at Raytheon BBN Technologies, together with collaborators from MIT and the University of Washington, is developing a general framework called BulletTrain. BulletTrain will automatically determine the best parallelization strategy for training a given workload on the FastNICs hardware to achieve the targeted speedups. The team aims to demonstrate these speedups on a range of computer vision and natural language processing workloads.
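
BulletTrain's internals have not been published, so the sketch below is only a minimal illustration of the general idea: score a few candidate parallelization strategies with a simple cost model and pick the cheapest. The Workload and Cluster fields, the two candidate strategies, and the timing formulas are hypothetical assumptions for illustration, not BulletTrain's actual design.

    # Hypothetical sketch of cost-model-driven strategy selection.
    # All names and formulas here are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        params_bytes: float      # total model parameters, in bytes
        flops_per_step: float    # compute cost of one training step
        activ_bytes: float       # activation traffic crossing a model split

    @dataclass
    class Cluster:
        workers: int
        flops_per_sec: float       # per-worker compute throughput
        link_bytes_per_sec: float  # per-worker network bandwidth

    def step_time(w: Workload, c: Cluster, strategy: str) -> float:
        """Estimate seconds per training step under a given strategy."""
        compute = w.flops_per_step / (c.workers * c.flops_per_sec)
        if strategy == "data_parallel":
            # Data parallelism pays for a gradient all-reduce each step.
            comm = 2 * w.params_bytes / c.link_bytes_per_sec
        elif strategy == "model_parallel":
            # Model parallelism ships activations across the split instead.
            comm = w.activ_bytes / c.link_bytes_per_sec
        else:
            raise ValueError(f"unknown strategy: {strategy}")
        return compute + comm  # pessimistic: no compute/comm overlap

    def best_strategy(w: Workload, c: Cluster) -> str:
        return min(("data_parallel", "model_parallel"),
                   key=lambda s: step_time(w, c, s))

A real framework would search a far richer space (hybrid splits, pipeline stages, interconnect topology), but the structure is the same: enumerate candidate strategies, estimate their cost, and choose the minimum.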


Joud Khoury, principal investigator on the program at Raytheon BBN Technologies, explained, “Training neural networks is a massively parallelizable, compute-intensive workload that needs to move large volumes of data around very quickly. To take full advantage of the 100x faster network to achieve 100x faster training, one has to divide the computation into very granular communication and computation tasks, and optimally schedule these on the underlying processors and network interconnect.”
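
To make the scheduling point concrete, here is a minimal, hypothetical Python sketch of overlapping computation with communication during the backward pass: each layer's gradients are sent over the network while the next layer's backward computation proceeds. The layer names, timings, and greedy schedule are illustrative assumptions, not the program's actual scheduler.

    # Illustrative sketch: overlap gradient communication with backward compute.
    # Layer names and timings are made up for demonstration.
    def overlapped_step(layers, compute_s, comm_s):
        """Greedy schedule: a layer's gradients ship as soon as they are
        computed, while the next layer's backward pass runs in parallel."""
        compute_done = 0.0  # when the compute unit is next free
        network_done = 0.0  # when the network link is next free
        for layer in layers:
            compute_done += compute_s[layer]         # backward pass for this layer
            start = max(compute_done, network_done)  # grads ship once ready
            network_done = start + comm_s[layer]     # all-reduce this layer's grads
        return network_done  # step ends when the last gradient lands

    layers = ["fc2", "fc1", "conv2", "conv1"]  # backward (last-to-first) order
    compute = {"fc2": 1.0, "fc1": 1.0, "conv2": 2.0, "conv1": 2.0}
    comm = {"fc2": 1.5, "fc1": 1.5, "conv2": 0.5, "conv1": 0.5}

    serial = sum(compute.values()) + sum(comm.values())
    print(f"serial: {serial:.1f}  overlapped: {overlapped_step(layers, compute, comm):.1f}")

With these made-up numbers the overlapped schedule finishes in 6.5 time units against 10.0 for a serial compute-then-communicate step; the finer-grained the tasks, the more of the communication can hide behind computation.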


Source: Raytheon Intelligence & Space