Google has earned a reputation for pushing out new AI technologies and upgrades at a remarkable pace, and its announcement of EfficientNet is the latest example. Leveraging its work on AutoML, Google's researchers developed a scaling method that offers up to a tenfold increase in network efficiency.
The company writes: “The conventional practice for model scaling is to arbitrarily increase the CNN depth or width, or to use larger input image resolution for training and evaluation. While these methods do improve accuracy, they usually require tedious manual tuning, and still often yield suboptimal performance. What if, instead, we could find a more principled method to scale up a CNN to obtain better accuracy and efficiency?”
Google software engineer Mingxing Tan explains the new development:
Unlike conventional approaches that arbitrarily scale network dimensions, such as width, depth and resolution, our method uniformly scales each dimension with a fixed set of scaling coefficients. Powered by this novel scaling method and recent progress on AutoML, we have developed a family of models, called EfficientNets, which superpass [sic] state-of-the-art accuracy with up to 10x better efficiency (smaller and faster).
These networks are well suited to tasks such as image classification and facial recognition, and their smaller size and higher accuracy make them attractive both for high-usage scenarios and for deployment on mobile devices. Like many computer vision systems, EfficientNet builds on a pre-trained convolutional neural network (CNN) as its base network. Such base networks learn from a range of large, general-purpose visual datasets, allowing more specialized models to be created quickly with limited training data.
While the standard arbitrary scaling process still yields functional results, EfficientNet instead begins with a grid search over the base network to determine the relationships between the network's different scaling dimensions (width, depth, and input resolution) while accounting for both model size and available computational resources. EfficientNet then scales up the base network according to this assessment. Results from initial testing indicate higher accuracy and speed in the majority of circumstances.
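The compound-scaling idea can be sketched in a few lines of Python. This is an illustrative sketch, not Google's implementation: a single compound coefficient `phi` raises per-dimension base coefficients `alpha`, `beta`, and `gamma` (found by the grid search) to scale depth, width, and resolution together. The default coefficient values below are the ones reported for EfficientNet's base model, chosen so that `alpha * beta**2 * gamma**2` is roughly 2, i.e. each increment of `phi` roughly doubles the compute budget; the base-network numbers in the usage example are made up for illustration.

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """Scale depth, width, and resolution with one compound coefficient phi.

    alpha, beta, gamma come from a small grid search on the base network;
    the defaults are the values reported in the EfficientNet paper, which
    satisfy alpha * beta**2 * gamma**2 ~= 2 (so each step of phi roughly
    doubles FLOPS). This is a sketch of the idea, not Google's code.
    """
    depth_mult = alpha ** phi    # more layers
    width_mult = beta ** phi     # more channels per layer
    res_mult = gamma ** phi      # larger input images
    return depth_mult, width_mult, res_mult


# Scale a hypothetical base network: 18 layers, 32 channels, 224px input.
d, w, r = compound_scale(phi=2)
layers = round(18 * d)        # deeper network
channels = round(32 * w)      # wider layers
resolution = round(224 * r)   # higher-resolution input
```

The point of scaling all three dimensions uniformly, rather than just one, is that depth, width, and resolution interact: a deeper network benefits from larger inputs, and wider layers need more depth to combine their features.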
EfficientNet also performed exceptionally well on over half of the eight most commonly used image datasets, including CIFAR-100 (91.7% accuracy) and Flowers (98.8%). Because this new method may significantly improve computer vision tasks across the board, Google has open-sourced EfficientNet on GitHub.
Given that image recognition models have a reputation for making strange mistakes, EfficientNet may help mitigate that problem as AI developers build on Google's recent efforts.