New AI training technique is dramatically faster, says Google

Google DeepMind researchers have unveiled a new method of accelerating AI training, one that significantly reduces the computational resources and time required. This new approach to a traditionally energy-intensive process could make AI development faster and cheaper, according to a recent research paper, and that could be good news for the environment.

“Our approach—multimodal contrastive learning with joint example selection (JEST)—surpasses state-of-the-art models with up to 13 times fewer iterations and 10 times less computation,” the study said.

The AI industry is known for its high energy consumption. Large-scale AI systems like ChatGPT require serious processing power, which in turn demands large amounts of electricity, as well as water to cool the hardware. Microsoft's water consumption, for example, reportedly rose 34% from 2021 to 2022 due to increased AI computing demands, while ChatGPT has been accused of consuming nearly half a liter of water for every 5 to 50 prompts.

The International Energy Agency (IEA) projects that data center electricity consumption will double between 2022 and 2026, drawing comparisons between AI's energy demands and the oft-criticized power profile of the cryptocurrency mining industry.

However, approaches such as JEST may provide a solution. By optimizing the selection of data for AI training, Google says, JEST can significantly reduce the number of training iterations and the computational power required, which would lower overall energy consumption. The approach aligns with broader efforts to improve the efficiency of AI technologies and lessen their environmental impact.

If the technique proves effective at scale, AI trainers would need only a fraction of the power they currently use to train their models. That means they could build more powerful AI tools with the resources they have today, or develop new models while consuming fewer resources.

How JEST works

JEST works by selecting complementary batches of data to maximize the AI model's learnability. Unlike traditional methods that select individual examples, the algorithm considers the composition of the batch as a whole.

For example, imagine that you are learning several languages. Rather than studying English, German, and Norwegian separately, perhaps in order of difficulty, you might find it more effective to study them together, in a way that lets knowledge of one support learning of the others.

Google has taken a similar approach, and it's been successful.

“We demonstrate that jointly selecting batches of data is more effective for learning than selecting examples independently,” the researchers wrote in their paper.
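
The difference between per-example and joint selection is easy to sketch. The following Python snippet is an illustration rather than DeepMind's code: the per-example scores and the pairwise "interaction" bonus are synthetic stand-ins for the batch-composition effects that JEST measures through its contrastive loss. It contrasts picking the top-scoring examples one by one with greedily growing a batch whose combined score is highest:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: a per-example score, plus a symmetric pairwise
    # "interaction" bonus rewarding examples that complement each other.
    pool_size, batch_size = 1000, 32
    scores = rng.normal(size=pool_size)
    interaction = rng.normal(scale=0.1, size=(pool_size, pool_size))
    interaction = (interaction + interaction.T) / 2
    np.fill_diagonal(interaction, 0.0)

    def batch_score(idx):
        # Total batch value: individual scores plus each pair's interaction.
        return scores[idx].sum() + interaction[np.ix_(idx, idx)].sum() / 2

    # Independent selection: take the top-k examples by individual score.
    independent = np.argsort(scores)[-batch_size:]

    # Joint (greedy) selection: add whichever candidate most improves the
    # score of the batch as a whole, one example at a time.
    joint = [int(np.argmax(scores))]
    for _ in range(batch_size - 1):
        remaining = np.setdiff1d(np.arange(pool_size), joint)
        gains = scores[remaining] + interaction[remaining][:, joint].sum(axis=1)
        joint.append(int(remaining[np.argmax(gains)]))

    print("independent batch score:", batch_score(independent))
    print("joint batch score:", batch_score(np.array(joint)))

Whenever examples interact, the jointly chosen batch typically beats the top-k batch, which is the intuition behind selecting batches rather than examples.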

To do this, the Google researchers used “multimodal contrastive learning,” in which the JEST process identifies dependencies between data points. This method improves the speed and efficiency of AI training while requiring far less computation.
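
In contrastive learning, a matched pair, say an image and its caption, is pulled together while every other pairing in the batch is pushed apart, so each example's loss depends on every other example present. A minimal CLIP-style sketch in NumPy (illustrative only, not the paper's implementation) makes that batch-level dependency visible:

    import numpy as np

    def contrastive_loss(image_emb, text_emb, temperature=0.07):
        # L2-normalize so dot products are cosine similarities.
        image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
        text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

        # Similarity of every image to every text: matching pairs sit on
        # the diagonal; every off-diagonal entry acts as a negative, so
        # each example's loss depends on the whole batch at once.
        logits = image_emb @ text_emb.T / temperature
        n = len(logits)

        def cross_entropy(l):
            l = l - l.max(axis=1, keepdims=True)  # numerical stability
            log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
            return -log_probs[np.arange(n), np.arange(n)].mean()

        # Symmetric loss: image-to-text and text-to-image directions.
        return (cross_entropy(logits) + cross_entropy(logits.T)) / 2

    rng = np.random.default_rng(0)
    print(contrastive_loss(rng.normal(size=(8, 64)), rng.normal(size=(8, 64))))

Because the negatives come from the rest of the batch, which examples are batched together directly shapes the training signal, exactly the dependency JEST exploits.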

The key to the approach, Google explained, was to start with pretrained reference models to steer the data selection process. This allowed the model to focus on high-quality, well-curated datasets, further improving training efficiency.

“The quality of a batch is also a function of its composition, in addition to the summed quality of its data points considered independently,” the paper explained.
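
This kind of reference-guided selection is often summarized as a “learnability” score: data the in-training learner still finds hard, but the pretrained reference model finds easy, is both unlearned and well-structured, so it is worth prioritizing. Here is a simplified sketch, with the caveat that the function names and toy losses are hypothetical and that JEST itself scores whole batches jointly rather than examples one at a time:

    import numpy as np

    def select_batch(candidates, learner_loss, reference_loss, k):
        # Learnability: high loss under the learner, low loss under the
        # reference, marks data worth training on. Keep the top k.
        learnability = learner_loss(candidates) - reference_loss(candidates)
        return candidates[np.argsort(learnability)[-k:]]

    # Toy usage with hypothetical loss functions (stand-ins, not JEST's).
    rng = np.random.default_rng(1)
    pool = rng.normal(size=(1000, 16))  # candidate examples
    learner = lambda x: np.abs(x).mean(axis=1) + rng.normal(scale=0.1, size=len(x))
    reference = lambda x: 0.5 * np.abs(x).mean(axis=1)

    batch = select_batch(pool, learner, reference, k=32)
    print(batch.shape)  # (32, 16)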

The study's tests showed strong performance gains across multiple benchmarks. For example, training on the widely used WebLI dataset with JEST showed dramatic improvements in learning speed and resource efficiency.

The researchers also found that the algorithm quickly discovered highly learnable subsets, accelerating training by focusing on data points that “fit” together. This technique, called “data quality bootstrapping,” values quality over quantity and proved better suited to AI training.

“A reference model trained on a small curated dataset can effectively guide the curation of a much larger dataset, allowing the training of a model which strongly surpasses the quality of the reference model on many downstream tasks,” the paper stated.
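
Read end to end, the bootstrapping loop is small: curate a little, train a reference, then let the reference curate a lot. A toy end-to-end sketch follows; every model and loss here is a deliberately trivial stand-in, whereas the paper's models are full multimodal contrastive learners:

    import numpy as np

    def train_model(data):
        # "Train" a toy scorer: here, just the mean feature vector.
        return data.mean(axis=0)

    def loss_under(model, data):
        # Toy per-example loss: distance from the model's mean vector.
        return np.linalg.norm(data - model, axis=1)

    rng = np.random.default_rng(2)
    small_curated = rng.normal(size=(200, 16))  # small, high-quality set
    large_raw = rng.normal(size=(20000, 16))    # large, uncurated pool

    # Step 1: train a reference model on the small curated dataset.
    reference = train_model(small_curated)

    # Step 2: let the reference curate the large pool, keeping the
    # examples it finds easiest (lowest loss), i.e. the best-structured.
    keep = np.argsort(loss_under(reference, large_raw))[: len(large_raw) // 10]
    curated_large = large_raw[keep]

    # Step 3: train the final learner on the bootstrapped dataset.
    learner = train_model(curated_large)
    print(curated_large.shape, learner.shape)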

Edited by Ryan Ozawa.
