
Meta launches LIMA to compete with ChatGPT and Bard in the race for the most advanced AI

Meta has presented LIMA, a large language model with which it demonstrates that high-quality responses can be obtained from a small set of curated examples applied to a previously trained model.

LIMA (Less Is More for Alignment) is based on LLaMA, a 65-billion-parameter model that the technology company made available for research purposes in the first quarter of the year.


Meta explains that large language models are usually trained in two phases: unsupervised pre-training on raw text, so that the model learns general representations, followed by large-scale fine-tuning and reinforcement learning, intended to align the AI with end tasks and user preferences.

With LIMA, Meta aims to demonstrate that quality results can be obtained from just a few examples when the model has already been extensively pre-trained. To do this, it used a thousand carefully curated examples of real instructions: 750 drawn from forums such as Stack Exchange and wikiHow, and another 250 written by the researchers themselves.
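For readers curious what this recipe looks like in practice, below is a minimal, illustrative sketch of supervised fine-tuning a pretrained causal language model on a handful of prompt–response pairs. It is not Meta's code: the model name (`gpt2`) is a small stand-in for the 65B LLaMA used by LIMA, the example data is invented, and the training loop omits batching, validation, and other production details.

```python
# Minimal sketch of fine-tuning a pretrained causal LM on a tiny curated
# instruction dataset. Placeholder model and data, not Meta's actual setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; LIMA itself fine-tunes a 65B LLaMA model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny stand-in for the ~1,000 curated instruction/response pairs described above.
examples = [
    {"prompt": "Explain why the sky is blue.",
     "response": "Sunlight is scattered by air molecules, and blue light scatters the most."},
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()

for epoch in range(3):
    for ex in examples:
        # Concatenate prompt and response into one training sequence.
        text = ex["prompt"] + "\n" + ex["response"] + tokenizer.eos_token
        batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        # Standard causal-language-modeling objective: predict each next token.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

The point of the sketch is the scale, not the mechanics: the entire "alignment" step here is ordinary supervised fine-tuning on a very small, carefully chosen dataset, rather than large-scale reinforcement learning from human feedback.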

To analyze its performance, the researchers compared it with OpenAI's GPT-4, Anthropic's Claude, and Google's Bard in a controlled test of 300 prompts. The results show that LIMA produces responses that are "equal or preferable" in 43%, 46%, and 58% of cases, respectively.

As reflected in the study published on arXiv.org, on an absolute scale, 88% of LIMA's responses "meet the prompt requirements, and 50% are considered excellent," the researchers point out.

Source: El Comercio
