Meta is opening access to a large language model for Artificial Intelligence (AI) research.
The company, which dominates the social media market, says the model is the first 175-billion-parameter language model made available to the wider AI research community.
As reported by Reuters on Wednesday (4/5), the language model is a natural language processing system trained on very large volumes of text, capable of answering reading-comprehension questions and generating new text.
The released model, the “Open Pretrained Transformer (OPT-175B)”, will enhance the ability of AI researchers to understand how large language models work.
The company, founded by Mark Zuckerberg, said that restricting access to such advanced models would amount to hindering progress in efforts to improve and maximize the technology.
According to Meta, limited access would entrench bias and hamper the resolution of existing problems.
Artificial intelligence, a key area of research and development for several major online platforms, can perpetuate human social biases around issues such as race and gender.
Some researchers have raised concerns about the harms that large language models can spread.
Meta therefore hopes that opening access to its technology for research will bring a diversity of voices to bear on the ethical considerations surrounding it.
For Meta, this is a way to prevent abuse and keep the company supporting science.
Meta said access to the large language model would be granted to academic researchers and people affiliated with government, civil society, and academic organizations, as well as industrial research laboratories.
This access will include the pre-trained model and the code to train and use it. (Ant/OL-12)