This Ultra Light AI model fits on your phone and can beat ChatGPT.

Phi-3, the third iteration of its Phi family of Small Language Models (SLMs), outperforms models of comparable size and even a few much larger ones, Microsoft announced today.

A Small Language Model (SLM) is a type of AI model designed to be extremely efficient at certain language-related tasks. Unlike large language models (LLMs), which are suited to a wide variety of general tasks, SLMs are trained on smaller datasets to be more efficient and cost-effective for specific use cases.

Microsoft said Phi-3 will come in several versions, the smallest being Phi-3 Mini, a 3.8 billion parameter model trained on 3.3 trillion tokens. Despite its relatively small size (the Llama-3 corpus weighs in at over 15 trillion tokens), Phi-3 Mini can still handle a 128,000-token context window. That makes it comparable to GPT-4 and puts it ahead of Llama-3 and Mistral Large in token capacity.

In other words, AI behemoths like Meta's Llama-3 and Mistral Large may break down after a long conversation, well before this lightweight model starts to struggle.


One of Phi-3 Mini's most significant advantages is that it can fit and run on a standard smartphone. Microsoft tested the model on an iPhone 14, where it generated 14 tokens per second without any problems. Running Phi-3 Mini requires only 1.8GB of RAM, making it a lightweight, efficient option for users with modest hardware.

While Phi-3 Mini may not suit advanced coders or users with broad requirements, it can be a viable option for users with specific needs. For example, beginners experimenting with LLMs, or users who simply need a chatbot for data analysis, can use Phi-3 Mini for tasks such as organizing data, extracting information, doing math, and building agents. And if the model is given access to the Internet, it becomes considerably more capable, compensating for its lack of built-in real-time information.
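As an illustration of that web-augmented pattern, here is a minimal sketch that fetches a page and prepends it to the prompt before querying a locally hosted Phi-3 Mini through Ollama's HTTP API. The endpoint, the `phi3` model tag, and the example URL are assumptions for the sake of the sketch, not details from Microsoft's announcement.

```python
# Minimal sketch: give a small local model fresh web context before asking a question.
# Assumes Ollama is serving Phi-3 Mini locally under the tag "phi3" (both are assumptions).
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint (assumed setup)

def ask_with_context(question: str, source_url: str) -> str:
    # Fetch a page to compensate for the small model's limited built-in knowledge.
    # Raw HTML is kept short here; a real pipeline would extract the readable text first.
    page_text = requests.get(source_url, timeout=10).text[:4000]

    prompt = (
        "Use only the context below to answer the question.\n\n"
        f"Context:\n{page_text}\n\n"
        f"Question: {question}\nAnswer:"
    )

    resp = requests.post(
        OLLAMA_URL,
        json={"model": "phi3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_with_context("What is this page about?", "https://example.com"))
```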

Thanks to Microsoft's focus on curating its training dataset to be as useful as possible, Phi-3 Mini posted high benchmark scores. The wider Phi family is, admittedly, not ideal for tasks that demand deep factual knowledge, but its strong reasoning abilities put it above major competitors. Phi-3 Medium (the 14-billion parameter model) consistently beats powerful LLMs like GPT-3.5, the LLM that powers the free version of ChatGPT, and the Mini version beats strong models like Mixtral-8x7B on most synthetic benchmarks.


It's worth noting that Phi-3 is not open source like its predecessor, Phi-2. Instead, it is an open model: accessible and ready to use, but without the open-source license that Phi-2 carried, which allowed for wider modification and commercial applications.

In the coming weeks, Microsoft says it will release additional models in the Phi-3 family, including Phi-3 Small (7 billion parameters) and the aforementioned Phi-3 Medium.

Microsoft has made Phi-3 Mini available on Azure AI Studio, Hugging Face, and Ollama. The model is instruction-tuned and optimized for ONNX Runtime with Windows DirectML support, along with cross-platform support across a variety of GPUs, CPUs, and mobile hardware.
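For readers who want to try it, a minimal Hugging Face Transformers sketch might look like the following. The repository id `microsoft/Phi-3-mini-128k-instruct` and the generation settings are assumptions based on the availability described above, not an official quickstart.

```python
# Minimal sketch: run Phi-3 Mini through Hugging Face Transformers.
# The model id below is an assumption based on the availability described in the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-128k-instruct"  # assumed Hugging Face repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Phi-3 is instruction-tuned, so format the request as a chat turn.
messages = [{"role": "user", "content": "Summarize why small language models are useful."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For phone- or laptop-class hardware, pulling the model through Ollama instead (as mentioned above) is the lighter-weight route, since it handles quantization and local serving.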
