Apple released 8 small AI language models to compete with Microsoft’s Phi-3

Sensing strength in numbers, Apple has made a strategic move into the competitive artificial intelligence marketplace by launching eight small AI models. The compact tools, collectively called OpenELM, are designed to run on-device and offline, making them well suited to smartphones.

Published on the open-source AI community Hugging Face, the models come in 270 million, 450 million, 1.1 billion, and 3 billion parameter versions. Users can download Apple's OpenELM in either pretrained or instruction-tuned variants.

The pretrained models provide a base that users can fine-tune and build upon. The instruction-tuned models are already trained to follow instructions, making them better suited for conversations and interactions with end users.
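For readers who want to experiment, here is a minimal sketch of loading one of the checkpoints with the Hugging Face transformers library. The apple/OpenELM-* model IDs match Apple's Hugging Face listing; pairing OpenELM with the Llama 2 tokenizer follows Apple's model card, and the exact calls may vary with the library version:

```python
# Minimal sketch: loading an OpenELM checkpoint from Hugging Face.
# Pretrained bases are named apple/OpenELM-{270M,450M,1_1B,3B};
# instruction-tuned variants add an -Instruct suffix.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M-Instruct"

# OpenELM ships custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Apple's model card pairs OpenELM with the Llama 2 tokenizer
# (a gated repo: you must accept Meta's license on Hugging Face first).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Small on-device language models can", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```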

While Apple doesn't suggest specific use cases for these models, they could power assistants that analyze emails and texts or make intelligent suggestions based on that data. This is similar to the approach taken by Google, which deploys its Gemini AI models on the Pixel smartphone line.

The models are trained on publicly available datasets, and Apple is sharing both the code for CoreNet (the library used to train OpenELM) and the “recipes” for the models. In other words, users can explore how Apple built them.

Apple's release comes shortly after Microsoft announced Phi-3, a family of small language models capable of running locally. Phi-3 Mini, a 3.8 billion parameter model trained on 3.3 trillion tokens, can still handle 128K-token contexts, making it comparable to GPT-4 and beating Llama 3 and Mistral Large in token capacity.

Open source and lightweight, Phi-3 Mini could replace traditional assistants like Apple's Siri or Google's Gemini for some tasks, and Microsoft has already tested Phi-3 on an iPhone, reporting satisfactory results and fast token generation.
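As a rough illustration (not Microsoft's own on-device setup), Phi-3 Mini can also be run locally through transformers. The microsoft/Phi-3-mini-128k-instruct ID below is the one Microsoft published on Hugging Face; loading details may differ across library versions:

```python
# Minimal sketch: running Phi-3 Mini locally with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

# The 128K-context instruct variant; a 4K-context sibling
# (microsoft/Phi-3-mini-4k-instruct) is also available.
model_id = "microsoft/Phi-3-mini-128k-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Phi-3 shipped with custom modeling code at release, hence trust_remote_code.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

messages = [{"role": "user",
             "content": "In one sentence, what can a small on-device model do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=80)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```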

Although Apple has not yet brought these new AI language model capabilities to its consumer devices, the upcoming iOS 18 update is rumored to include new AI features that use on-device processing to ensure user privacy.

Apple's hardware has an advantage for local AI use: its unified memory architecture lets the GPU draw on the device's RAM as video RAM (VRAM). This means that a Mac with 32 GB of RAM (a common configuration on a PC) can devote that RAM to running AI models on the GPU, making it an efficient option for local AI processing.

Windows machines are at a disadvantage in this regard, because device RAM and GPU VRAM are separate. To get enough memory to run AI models, users usually need to purchase a powerful GPU with 32 GB of VRAM.
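A small sketch, using PyTorch purely as an illustrative framework, shows where this difference surfaces in practice: on Apple Silicon the Metal ("mps") backend draws on the same unified memory pool as the CPU, while a CUDA device is capped at the card's own VRAM:

```python
# Illustrative sketch: picking a device for local inference in PyTorch.
import torch

if torch.backends.mps.is_available():
    # Apple Silicon: the Metal backend shares the machine's unified RAM,
    # so a 32 GB Mac can hold a correspondingly large model on the GPU.
    device = torch.device("mps")
elif torch.cuda.is_available():
    # Discrete NVIDIA GPU: model size is capped by the card's own VRAM,
    # regardless of how much system RAM is installed.
    device = torch.device("cuda")
    vram = torch.cuda.get_device_properties(0).total_memory
    print(f"GPU VRAM available: {vram / 1e9:.1f} GB")
else:
    device = torch.device("cpu")

# Any model loaded afterward is moved to the selected device, e.g.:
# model = model.to(device)
```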

However, Apple lags behind Windows/Linux in the area of AI development. Most AI applications revolve around hardware designed by Nvidia, which Apple phased out in favor of its own chips. This means there is relatively little Apple-native AI development, and as a result, using AI on Apple products often requires translation layers or other complex processes.
