Google’s EfficientNet Offers up to a 10x Boost in Image Analysis Efficiency

Google has earned a reputation for pushing out new AI technologies and upgrades at a remarkable pace, and its announcement of EfficientNet is the latest example. Leveraging its work with AutoML, Google's scientists employed a scaling method that offers up to a tenfold increase in network efficiency.
The company writes: “The conventional practice for model scaling is to arbitrarily increase the CNN depth or width, or to use larger input image resolution for training and evaluation. While these methods do improve accuracy, they usually require tedious manual tuning, and still often yield suboptimal performance. What if, instead, we could find a more principled method to scale up a CNN to obtain better accuracy and efficiency?”

Google software engineer Mingxing Tan explains the new development: "Unlike conventional approaches that arbitrarily scale network dimensions, such as width, depth and resolution, our method uniformly scales each dimension with a fixed set of scaling coefficients. Powered by this novel scaling method and recent progress on AutoML, we have developed a family of models, called EfficientNets, which superpass [sic] state-of-the-art accuracy with up to 10x better efficiency (smaller and faster)."
These networks are well-suited for tasks like image classification and facial recognition, which benefits both high-usage scenarios and the deployment of more accurate, efficient models on mobile devices. Like most AI of its kind, EfficientNet uses a pre-trained CNN (convolutional neural network) designed for image-related tasks as its base network. The base network can learn from a range of more generalized visual datasets, allowing faster creation of more specific models with limited training data.
While the standard arbitrary scaling process still yields functional results, EfficientNet first conducts a grid search on the base network to determine the relationships between the network's different scaling dimensions (width, depth, and input resolution) while accounting for both model size and available computational resources. EfficientNet then scales up the base network based on this assessment. Results from initial testing indicate higher accuracy and speed in the majority of circumstances.
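The idea can be sketched in a few lines. In the EfficientNet paper, the grid search yields per-dimension coefficients (roughly α = 1.2 for depth, β = 1.1 for width, γ = 1.15 for resolution, chosen so α·β²·γ² ≈ 2), and a single compound coefficient φ then grows all three dimensions together. The base depth/width/resolution values below are illustrative placeholders, not the actual EfficientNet-B0 configuration:

```python
# Sketch of EfficientNet-style compound scaling.
# The paper scales depth ~ alpha^phi, width ~ beta^phi, resolution ~ gamma^phi,
# with alpha * beta^2 * gamma^2 ≈ 2, so each increment of phi roughly
# doubles the FLOPS budget.

ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # coefficients reported in the paper

def compound_scale(phi, base_depth=18, base_width=32, base_resolution=224):
    """Uniformly scale depth, width, and input resolution by one
    compound coefficient phi. Base values here are hypothetical."""
    return {
        "depth": round(base_depth * ALPHA ** phi),
        "width": round(base_width * BETA ** phi),
        "resolution": round(base_resolution * GAMMA ** phi),
    }

# Each increment of phi grows all three dimensions in lockstep,
# rather than arbitrarily enlarging just one of them.
for phi in range(4):
    print(phi, compound_scale(phi))
```

The contrast with conventional scaling is that a practitioner no longer hand-tunes each dimension separately; one knob (φ) trades compute for accuracy along the relationship the grid search discovered.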

EfficientNet also performed exceptionally well on more than half of the eight most commonly used image datasets, including CIFAR-100 (91.7% accuracy) and Flowers (98.8%). Because this new method may significantly improve computer vision tasks across the board, Google has open-sourced EfficientNet, with the code available on GitHub.
Given that image recognition models have a reputation for making strange mistakes, EfficientNet may help mitigate that problem as AI developers build on Google's recent efforts.