# Small language models based on Homo sapiens could explain AI efficiency

Tech companies are increasingly focused on small language models (SLMs) that can match, and in some cases outperform, much larger models such as Meta’s Llama 3 and OpenAI’s GPT-3.5 and GPT-4. Microsoft’s Phi-3 family and the models behind Apple Intelligence have far fewer parameters, yet they are gaining popularity because they are energy efficient and can run directly on devices such as smartphones and laptops. As the performance gap between large and small models narrows, companies are looking beyond raw parameter count for their next performance gains.

In Microsoft’s own tests, the 3.8-billion-parameter Phi-3-mini rivaled much larger models in some areas, showing what SLMs can do. Although SLMs can approach larger models in language understanding and reasoning, their size limits them on certain tasks; in particular, they cannot store the vast amounts of factual knowledge that larger models absorb. Pairing an SLM with an online search engine can offset this limitation: the model retrieves facts on demand rather than memorizing them in its weights.
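One way to read "combining SLMs with search engines" is as a retrieval-augmented loop: fetch relevant snippets, prepend them to the prompt, and let the small model reason over them. The minimal Python sketch below illustrates that flow under stated assumptions; `web_search` and `slm_generate` are hypothetical stubs standing in for a real search API and a real on-device model, neither of which the article specifies.

```python
# Minimal retrieval-augmented sketch: the SLM answers from retrieved
# context instead of relying on facts memorized in its weights.
# Both helper functions are hypothetical stubs, not real APIs.

def web_search(query: str, top_k: int = 3) -> list[str]:
    """Hypothetical stand-in for a search-engine API call."""
    return [f"[snippet {i} relevant to: {query}]" for i in range(1, top_k + 1)]

def slm_generate(prompt: str) -> str:
    """Hypothetical stand-in for a small on-device model call."""
    return f"(SLM answer conditioned on {len(prompt)} prompt characters)"

def answer_with_retrieval(question: str) -> str:
    # 1. Pull fresh facts from the web so the small model need not store them.
    snippets = web_search(question)
    context = "\n".join(snippets)
    # 2. Ask the SLM to reason over the retrieved context only.
    prompt = (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return slm_generate(prompt)

if __name__ == "__main__":
    print(answer_with_retrieval("How many parameters does Phi-3-mini have?"))
```

The design point is that factual recall moves out of the model's parameters and into the retrieval step, which is what lets a 3.8-billion-parameter model stay useful on knowledge-heavy questions.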

Researchers believe that studying how children learn language so efficiently could yield techniques that carry over as SLMs are scaled up into larger models. Despite the remaining challenges, the rise of SLMs gives smaller businesses and labs access to advanced language models without expensive hardware setups.

Source: https://www.techradar.com/pro/no-one-knows-what-makes-humans-so-much-more-efficient-small-language-models-based-on-homo-sapiens-could-help-explain-how-we-learn-and-improve-ai-efficiency-for-better-or-for-worse
