Aligned with whom? Direct and social goals of AI systems

As artificial intelligence (AI) becomes more powerful and pervasive, the problem of AI alignment – how to ensure that AI systems pursue the goals we want them to pursue – has drawn growing attention. This article distinguishes two types of alignment problems according to the objectives considered and analyzes the different solutions each requires. The direct alignment problem asks whether an AI system achieves the goals of the entity operating it. In contrast, the social alignment problem considers the effects of an AI system on larger groups or on society more broadly; in particular, it asks whether the system imposes externalities on others. While solutions to the direct alignment problem typically focus on more robust implementation, social alignment problems typically arise from conflicts between individual and group goals, which heightens the importance of AI governance to arbitrate these conflicts. Solving the social alignment problem requires both applying existing standards to AI developers and operators and designing new standards that apply directly to AI systems.

Download the full working document here.

The Brookings Institution is funded through support from a wide range of foundations, corporations, governments, and individuals, as well as an endowment. The list of donors can be found in our annual reports published online here. The findings, interpretations, and conclusions of this report are the sole responsibility of its author(s) and are not influenced by any donation.