Tag: governance

  • Governing AI: The Challenge of Regulating Transformative Technology

    Artificial intelligence is advancing at a pace that outstrips our capacity to govern it effectively—rather like attempting to write traffic regulations whilst Formula One cars whizz past at breakneck speed. The transformative potential of AI to revolutionise healthcare, reshape economies, and redefine warfare demands robust regulatory frameworks that balance innovation with safety, fairness, and accountability. Yet governments worldwide struggle to craft policies that address AI’s multifaceted risks without stifling beneficial development or, perhaps more importantly, without looking hopelessly out of touch. The question of how to govern AI has become one of the defining policy challenges of our era (Calo, 2017; Yeung, 2018).

    The complexity of AI systems themselves poses fundamental regulatory hurdles. Unlike traditional technologies with clear causal mechanisms—pull lever, thing happens—modern machine learning models operate as ‘black boxes’, making decisions through opaque processes that even their creators struggle to explain (Burrell, 2016). This opacity complicates efforts to ensure AI systems comply with existing laws governing discrimination, liability, or safety. Moreover, AI development occurs primarily in the private sector, often behind proprietary walls, limiting regulators’ visibility into emerging capabilities and risks. Traditional regulatory approaches designed for slower-moving industries—think pharmaceuticals with their decade-long approval processes—prove about as adequate for AI as a horse-drawn carriage would be for modern motorway travel (Marchant et al., 2011).
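
    To make the contrast concrete, consider the sketch below: a purely illustrative toy in Python, not any real system. A hand-written rule can be audited line by line, whereas even a tiny learned model reaches its decision through weighted sums of fitted parameters that no line-by-line inspection can explain. The loan scenario, thresholds, and weights are all invented for illustration.

      # A hand-written rule: the causal chain is legible and auditable.
      def rule_based_loan_decision(income, debt):
          """Approve when debt is under 40% of income -- the 'lever' is visible."""
          return debt / income < 0.40

      # A tiny 'learned' model: the decision emerges from weighted sums of
      # parameters fitted to data rather than written by anyone. Deployed
      # models have millions of such weights; reading them does not reveal
      # *why* a particular applicant was refused.
      import random

      random.seed(0)
      # Pretend these numbers came out of a training run.
      W1 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(3)]
      W2 = [random.uniform(-1, 1) for _ in range(8)]

      def learned_loan_decision(income, debt, age):
          x = [income / 100_000, debt / 100_000, age / 100]  # normalised inputs
          hidden = [max(0.0, sum(xi * w for xi, w in zip(x, col)))  # ReLU layer
                    for col in zip(*W1)]
          score = sum(h * w for h, w in zip(hidden, W2))
          return score > 0  # no single weight 'explains' this outcome

      print(rule_based_loan_decision(50_000, 15_000))   # True, and we can say why
      print(learned_loan_decision(50_000, 15_000, 35))  # True or False, but why?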

    International coordination presents another formidable challenge, requiring diplomacy that makes herding cats look straightforward. AI development is global, with leading capabilities concentrated in a handful of nations, particularly the United States and China, whilst deployment affects populations worldwide. This creates risks of regulatory arbitrage, wherein companies relocate operations to jurisdictions with laxer rules, and geopolitical tensions as nations compete for AI supremacy (Bradford, 2020). Efforts towards international frameworks, such as the OECD’s AI principles, represent important first steps yet lack enforcement mechanisms and struggle to bridge divergent national interests and values—a challenge roughly equivalent to getting the world’s nations to agree on anything more contentious than the existence of gravity (OECD, 2019).

    Several regulatory models have emerged, each with trade-offs that make choosing between them rather like selecting the least problematic way to navigate a minefield. The European Union’s AI Act attempts comprehensive, risk-based regulation categorising AI applications by potential harm (European Commission, 2021). The United States favours sectoral approaches, with different agencies addressing AI within their domains—finance, healthcare, transport—though this creates gaps and inconsistencies that resourceful companies exploit with admirable ingenuity. China combines government oversight with strategic support for domestic AI champions, prioritising state control alongside innovation. No consensus has emerged on which approach best balances competing priorities, and regulatory experimentation continues across jurisdictions (Smuha, 2021).
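
    The risk-based logic of the EU proposal can be summarised as a lookup from risk tier to obligation, as in the Python sketch below. The four tier names follow the 2021 proposal; the obligations are paraphrased and the example applications are illustrative, not the legal text.

      # Simplified sketch of the EU AI Act proposal's risk tiers (2021 text).
      # Obligations are paraphrased; examples are illustrative, not exhaustive.
      RISK_TIERS = {
          "unacceptable": {
              "examples": ["social scoring by public authorities"],
              "obligation": "prohibited outright",
          },
          "high": {
              "examples": ["credit scoring", "CV screening", "critical infrastructure"],
              "obligation": "conformity assessment, risk management, logging, human oversight",
          },
          "limited": {
              "examples": ["chatbots", "deepfake generators"],
              "obligation": "transparency duties (disclose that users face an AI system)",
          },
          "minimal": {
              "examples": ["spam filters", "AI in video games"],
              "obligation": "no new obligations; voluntary codes of conduct",
          },
      }

      def obligations_for(tier: str) -> str:
          return RISK_TIERS[tier]["obligation"]

      print(obligations_for("high"))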

    Effective AI governance requires more than regulation alone—it demands a comprehensive approach that would make multi-tasking look simple. Investment in technical research on AI safety and interpretability, creation of standards bodies to establish best practices, mechanisms for democratic input into AI deployment decisions, and international cooperation despite geopolitical rivalries all feature prominently on the to-do list (Dafoe, 2018). Perhaps most crucially, it requires regulatory institutions staffed with the expertise needed to understand rapidly evolving technology—a challenge when the private sector can offer AI experts salaries that make government positions look like charity work. As AI capabilities continue advancing towards artificial general intelligence, the stakes of governance failures grow ever higher. Whether humanity can develop governance frameworks adequate to the challenge may well determine whether AI proves a blessing or a catastrophe (Bostrom, 2014).

    References

    Bostrom, N. (2014) Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

    Bradford, A. (2020) The Brussels Effect: How the European Union Rules the World. Oxford: Oxford University Press.

    Burrell, J. (2016) ‘How the machine “thinks”: Understanding opacity in machine learning algorithms’, Big Data & Society, 3(1), pp. 1-12.

    Calo, R. (2017) ‘Artificial intelligence policy: A primer and roadmap’, University of Bologna Law Review, 2(2), pp. 180-218.

    Dafoe, A. (2018) ‘AI governance: A research agenda’, Future of Humanity Institute Technical Report. Oxford: University of Oxford.

    European Commission (2021) Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Brussels: European Commission.

    Marchant, G.E., Allenby, B.R. and Herkert, J.R. (eds.) (2011) The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight. Dordrecht: Springer.

    OECD (2019) Recommendation of the Council on Artificial Intelligence. Paris: OECD Publishing.

    Smuha, N.A. (2021) ‘Beyond the individual: Governing AI’s societal harm’, Internet Policy Review, 10(3), pp. 1-32.

    Yeung, K. (2018) ‘Algorithmic regulation: A critical interrogation’, Regulation & Governance, 12(4), pp. 505-523.

  • AI and Democracy: Can Democratic Institutions Survive the Algorithm?

    Artificial intelligence poses profound challenges to democratic institutions that have evolved over centuries to mediate human political competition—challenges that make previous concerns about television’s impact on democracy seem quaintly provincial. AI-powered influence operations can microtarget voters with personalised disinformation, algorithmic curation shapes political discourse on social media platforms, and automated systems increasingly make consequential decisions about citizens’ lives with minimal democratic accountability. Meanwhile, the concentration of AI capabilities in a handful of technology companies creates power asymmetries that challenge democratic governance itself. These developments raise urgent questions about whether democratic systems designed for an earlier era can maintain legitimacy and effectiveness in an age of artificial intelligence (Deibert, 2019; Nemitz, 2018).

    AI-enabled election interference represents perhaps the most immediate threat to democratic processes, though calling it ‘interference’ rather underplays the sophistication involved. Micro-targeted political advertising exploits detailed psychological profiles to influence voter behaviour with messages crafted for maximum impact on specific individuals (Gorton, 2016). Automated bots amplify partisan content and suppress opposition voices on social media, creating false impressions of public opinion that influence genuine voters—a manipulation technique that makes traditional propaganda look refreshingly honest by comparison. Deepfake videos can place false words in candidates’ mouths days before elections, leaving insufficient time for debunking before polls close. Foreign actors deploy these tools to sow discord and undermine confidence in democratic institutions, whilst domestic political operatives use similar techniques in the grey areas between persuasion and manipulation (Bradshaw and Howard, 2019; Woolley and Howard, 2018).
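
    The arithmetic of automated amplification is worth making explicit. The toy model below (invented numbers, no real platform) shows how a couple of hundred bots, each reposting heavily, can make a minority position look like a majority one:

      # Toy model: 10,000 genuine users vs. 200 bots amplifying one side.
      # All figures are invented for illustration.
      genuine_users = 10_000
      genuine_support = 0.30        # 30% of real users back position X
      bots = 200
      reposts_per_bot = 50          # each bot reposts pro-X content heavily

      real_posts_for_x = genuine_users * genuine_support          # 3,000
      real_posts_against = genuine_users * (1 - genuine_support)  # 7,000
      bot_posts_for_x = bots * reposts_per_bot                    # 10,000

      visible_share = (real_posts_for_x + bot_posts_for_x) / (
          real_posts_for_x + bot_posts_for_x + real_posts_against
      )
      print(f"genuine support:  {genuine_support:.0%}")  # 30%
      print(f"apparent support: {visible_share:.0%}")    # 65%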

    The algorithmic curation of information corrodes the shared reality essential for democratic deliberation—a problem that transcends partisan divides even as it exacerbates them. Social media platforms employ AI to maximise engagement, which in practice means promoting content that triggers emotional responses, particularly outrage and fear (Vaidhyanathan, 2018). This creates filter bubbles wherein citizens encounter primarily information confirming existing beliefs, whilst contrary evidence remains invisible. The result fragments the public sphere into incompatible reality tunnels, making constructive political dialogue increasingly difficult. When citizens cannot agree on basic facts, democratic deliberation becomes impossible—a situation that authoritarians exploit with enthusiasm whilst democrats wring their hands with considerably less effect (Sunstein, 2017; Persily and Tucker, 2020).
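
    The mechanism needs no malice, only an objective. The minimal ranking sketch below (a toy, not any platform's actual code) shows how sorting purely by predicted engagement promotes high-arousal content: if outrage reliably earns more clicks, an engagement-ranked feed surfaces it without anyone having decided that it should.

      # Toy feed: predicted engagement is the only ranking signal.
      # Scores are invented; the point is the sorting rule, not the numbers.
      posts = [
          {"headline": "City council publishes budget minutes", "engagement": 0.02},
          {"headline": "Calm explainer on pension reform",      "engagement": 0.04},
          {"headline": "You won't BELIEVE what they voted for", "engagement": 0.31},
          {"headline": "THEY are coming for your savings",      "engagement": 0.27},
      ]

      # Rank purely by expected engagement -- outrage floats to the top,
      # not by decree, but because the objective rewards it.
      for post in sorted(posts, key=lambda p: p["engagement"], reverse=True):
          print(f'{post["engagement"]:.0%}  {post["headline"]}')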

    Democratic accountability struggles to keep pace with AI systems making consequential decisions about citizens—decisions that increasingly escape meaningful oversight. Algorithms determine who receives welfare benefits, which neighbourhoods receive police attention, and who gets approved for loans, operating with speed and scale that overwhelm traditional administrative oversight mechanisms (Eubanks, 2018). The opacity of these systems prevents citizens from understanding how decisions affecting them are made, let alone challenging them effectively. Moreover, the private sector develops and deploys much AI technology, creating accountability gaps where neither market discipline nor democratic oversight operates adequately. The fundamental democratic principle that power must be accountable to those it affects faces perhaps its greatest challenge since universal suffrage (Ananny and Crawford, 2018).
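
    One concrete form accountability proposals take is a machine-readable record of each consequential decision: which system decided, the main factors behind the outcome, and how to contest it. The Python sketch below is a hypothetical schema along those lines; no statute mandates these exact fields, and every name in it is illustrative.

      from dataclasses import dataclass, field
      from datetime import datetime, timezone

      @dataclass
      class AutomatedDecisionRecord:
          """Hypothetical audit record for a consequential automated decision."""
          system_id: str          # which deployed system made the call
          model_version: str      # pin the exact version for later audit
          subject_ref: str        # pseudonymous reference to the citizen
          outcome: str            # e.g. "benefit_denied"
          top_factors: list[str]  # human-readable reasons for the outcome
          appeal_route: str       # how the decision can be contested
          decided_at: datetime = field(
              default_factory=lambda: datetime.now(timezone.utc)
          )

      record = AutomatedDecisionRecord(
          system_id="welfare-eligibility",
          model_version="2.4.1",
          subject_ref="case-88f3",
          outcome="benefit_denied",
          top_factors=["reported income above threshold",
                       "missing residency document"],
          appeal_route="written appeal to the benefits tribunal within 28 days",
      )
      print(record.outcome, record.top_factors)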

    Strengthening democracy against AI-enabled threats requires institutional innovations that governments have thus far proved reluctant to implement. Transparency requirements could mandate disclosure of AI systems’ operation in consequential domains, though balancing transparency against intellectual property protection and security concerns proves contentious. Digital literacy education might help citizens recognise manipulation, though keeping pace with evolving AI tactics presents obvious difficulties. Regulatory frameworks could establish guardrails for AI deployment in democratic contexts, assuming political will sufficient to constrain powerful technology companies—an assumption that current lobbying expenditures suggest may be optimistic. International cooperation could establish norms against AI-enabled election interference, though enforcing such norms faces all the usual challenges of cyberspace governance plus several novel ones. Perhaps most fundamentally, democracies must grapple with whether AI capabilities create power concentrations incompatible with democratic equality, and if so, what structural changes might address this—questions that make previous constitutional crises look straightforward by comparison (Danaher et al., 2017; Balkin, 2018).

    References

    Ananny, M. and Crawford, K. (2018) ‘Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability’, New Media & Society, 20(3), pp. 973-989.

    Balkin, J.M. (2018) ‘Free speech in the algorithmic society: Big data, private governance, and new school speech regulation’, UC Davis Law Review, 51, pp. 1149-1210.

    Bradshaw, S. and Howard, P.N. (2019) ‘The global disinformation order: 2019 global inventory of organised social media manipulation’, Oxford Internet Institute Working Paper.

    Danaher, J. et al. (2017) ‘Algorithmic governance: Developing a research agenda through the power of collective intelligence’, Big Data & Society, 4(2), pp. 1-21.

    Deibert, R.J. (2019) ‘The road to digital unfreedom: Three painful truths about social media’, Journal of Democracy, 30(1), pp. 25-39.

    Eubanks, V. (2018) Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.

    Gorton, W.A. (2016) ‘Manipulating citizens: How political campaigns’ use of behavioral social science harms democracy’, New Political Science, 38(1), pp. 61-80.

    Nemitz, P. (2018) ‘Constitutional democracy and technology in the age of artificial intelligence’, Philosophical Transactions of the Royal Society A, 376(2133), p. 20180089.

    Persily, N. and Tucker, J.A. (eds.) (2020) Social Media and Democracy: The State of the Field, Prospects for Reform. Cambridge: Cambridge University Press.

    Sunstein, C.R. (2017) #Republic: Divided Democracy in the Age of Social Media. Princeton: Princeton University Press.

    Vaidhyanathan, S. (2018) Antisocial Media: How Facebook Disconnects Us and Undermines Democracy. Oxford: Oxford University Press.

    Woolley, S.C. and Howard, P.N. (2018) Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford: Oxford University Press.