Upcoming Biden Executive Order to Regulate AI Spurs Praise, Doubt and New Research on AI Accountability
Biden's impending executive order on AI regulation draws mixed reactions as new research suggests AI models could self-regulate through a constitution aligned with public policy preferences; experts argue for balanced, broad-based input in policy formulation and caution against stifling innovation or favoring large tech firms.
- President Joe Biden's impending executive order on AI regulation has spurred mixed reactions; some see government involvement as playing 'catch-up' to AI innovators and ethicists, while others praise it as a necessary first step in safeguarding freedoms and urge a broader regulatory framework.
- Anthropic, an Amazon-backed AI startup, has published research on 'Constitutional AI' that treats tackling bias as a transparency problem. The approach trains AI to follow a written list of ethical principles and moral commitments, effectively turning the model into a self-contained experiment in liberalism (see the sketch after this list).
- Public opinion heavily influences Anthropic's approach to AI development. Its survey of 1,094 respondents shows general agreement that AI should not perpetuate harmful, aggressive, or discriminatory behavior; however, precise definitions of what constitutes such behavior differ across segments of the public.
- 15 major AI developers have signed a voluntary agreement with the White House committing to share data about AI safety with the government. This executive order is expected to expand on those commitments.
- Critics of the executive order argue that it may disproportionately favor large tech companies, potentially making entry into the AI industry difficult for innovators and small firms. They also express concern that politicians could use AI regulation as a pretext for stifling dissent and opposing viewpoints.
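For readers unfamiliar with how a 'constitutional' approach works in practice, the sketch below illustrates the general pattern described in the research: a model drafts a response, then critiques and rewrites it against each principle in its written constitution. This is a simplified, hypothetical illustration; the functions `generate_draft`, `critique`, and `revise` are placeholder stubs, not Anthropic's actual code or API, and the principles shown are illustrative stand-ins for a real constitution. In the published method, the revised outputs are then used to further train the model itself.

```python
# Conceptual sketch of a constitution-style critique-and-revise loop.
# All model calls are placeholder stubs, not a real AI system or API.

CONSTITUTION = [
    "Do not produce content that is harmful or aggressive.",
    "Do not perpetuate discriminatory stereotypes.",
    "Be transparent about uncertainty and limitations.",
]

def generate_draft(prompt: str) -> str:
    """Placeholder for a language-model call that drafts an initial response."""
    return f"Draft response to: {prompt}"

def critique(draft: str, principle: str) -> str:
    """Placeholder for a model call that critiques the draft against one principle."""
    return f"Reviewed draft against principle: {principle}"

def revise(draft: str, feedback: str) -> str:
    """Placeholder for a model call that rewrites the draft to address the critique."""
    return f"{draft} [revised per: {feedback}]"

def constitutional_pass(prompt: str) -> str:
    """Draft a response, then critique and revise it once per constitutional principle."""
    response = generate_draft(prompt)
    for principle in CONSTITUTION:
        feedback = critique(response, principle)
        response = revise(response, feedback)
    return response

if __name__ == "__main__":
    print(constitutional_pass("Summarize the debate over AI regulation."))
```

The design choice the research highlights is that the constraints live in a human-readable document rather than in opaque model weights, which is what makes bias a transparency question: anyone can inspect, debate, or (as with Anthropic's public survey) help rewrite the list of principles.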