Meta has outlined a new approach to artificial intelligence. Although the company's long-term goal is to build AI capable of performing any task a human can, it has decided that in certain cases it will not release highly capable AI systems it considers potentially dangerous to the public.
In its "Frontier AI Framework" document, Meta has identified two types of risky AI systems: "high-risk" and "critical-risk." These systems could potentially contribute to cyberattacks or assist in the creation of chemical and biological weapons. "Critical-risk" systems could cause massive, irreversible harm.
Meta does not measure these risks with a fixed set of empirical tests. Instead, the company bases its assessments on the judgment of its own employees and of outside experts in the field. If Meta deems a system risky, it will restrict the system to internal use or withhold it from the public until adequate security measures are in place.
With this approach, Meta aims to position itself as a more cautious and security-minded developer, in contrast to companies that openly release powerful AI systems.