Beyond Frameworks: Alternative Pathways for AI Governance in a Fragmented World
In this, our first blog-style article on issues in International Studies and Emerging Technologies, Dr. Alp Cenk Arslan offers some fascinating reflections on alternative mechanisms that can supplement AI governance frameworks, grounded in core principles of community engagement, support, and respect.
Dr. Alp Cenk Arslan is an assistant professor specializing in security and intelligence studies at the Turkish National Police Academy. His research focuses on the intersection of techno-politics, intelligence studies, and communication studies, exploring how emerging technologies influence both national security strategies and the dynamics of public discourse. He is the author of Intelligence Analysis: Reproduction of Information, which examines the processes and methodologies behind intelligence production and analysis. Dr. Arslan’s second book, The Age of Techno-Political: State-Corporate Relations and the Construction of Security, delves into the evolving relationship between technology and politics, offering a comprehensive framework to understand the implications of digital advancements on statecraft, governance, and national security. He is also an active speaker at international conferences.
This piece serves as a fantastic opening work for ISET's new short-form articles, and we thank Dr. Arslan for his contribution.
Image: “Artificial Intelligence Micro Chip” by Rostislav Kralik, released under a CC0 Public Domain License.

In the ongoing, often hype-driven discourse on AI governance, the spotlight falls on statutes, standards, and institutional roadmaps. These formal frameworks are crucial, yet they advance at a glacial pace while AI's societal impacts evolve with lightning speed. It's time to broaden our perspective. We must not fixate solely on codified structures. Instead, we should embrace framework-external approaches: strengthening local epistemic resilience, cultivating informal trust networks, and translating cultural or regional practices into living governance. These "outside the walls" mechanisms don't supplant global initiatives. They complement them, particularly in regions where formal capacity is limited or contested, by providing adaptive, bottom-up solutions.
Why go beyond frameworks?
Formal frameworks remain indispensable for establishing minimum thresholds, such as transparency, safety, and liability. However, they grapple with three enduring gaps. First, latency: the policy cycle of consultation, drafting, legislation, and implementation inevitably trails AI's rapid deployment cycles. Second, context: universal rules seldom account for local histories, power dynamics, or linguistic nuances. Third, trust: paper compliance doesn't inherently foster lived legitimacy; trust often arises elsewhere, in communities, professional networks, and culturally rooted protocols.
What bridges these gaps are governance practices that thrive in the wild: community-run datasets, deliberation tools embedded in civic life, data cooperatives, worker organizing, and culturally grounded data rules, to name just a few. Below, I analyze seven international cases that illustrate how these "framework-external" mechanisms generate de facto governance, often more swiftly and with greater legitimacy than statutes alone.
Local epistemic resilience: when language communities build their own AI
Consider local epistemic resilience, where language communities build their own AI. The African NLP community Masakhane organizes researchers and volunteers to develop language resources and models "for Africans, by Africans." In 2025, it launched the African Languages Hub, a platform and funding mechanism to catalyze data, models, and use cases for dozens of under-served languages. The Hub explicitly ties technical progress to social aims, countering colonial legacies in language technology and creating local expertise that policymakers can engage.
A parallel lesson emerges from Mozilla Common Voice in Rwanda, where community partnerships, including the startup Digital Umuganda, crowdsourced thousands of hours of Kinyarwanda speech, marking one of the largest single-language contributions on the platform. Beyond dataset scale, the key outcome was ownership: local contributors, events, and apps made voice tech feel native rather than imported.
The governance takeaway is clear. When communities produce and steward their own data and models, they build epistemic resilience: the capacity to question, evaluate, and redirect AI according to local needs. Regulations can mandate guardrails, but legitimacy grows when people see themselves in the dataset and in the research network.
Trust networks against misinformation: Taiwan’s citizen-driven infrastructure
Next, trust networks against misinformation, exemplified by Taiwan's citizen-driven infrastructure. Cofacts is an open-source, volunteer-run fact-checking network in Taiwan that operates inside the popular LINE messaging app. Users forward suspicious claims to a bot. Responses draw on a public, growing knowledge base maintained by volunteers and connected actors. Because the corpus and API are open, third-party bots and editors can extend the service, creating a federated trust network rather than a single authority.
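To make the mechanics concrete, here is a minimal sketch of how such a bot lookup might work. The endpoint, query parameters, and response fields are illustrative assumptions for a Cofacts-style service, not the project's actual API.

```python
import requests

# Hypothetical endpoint: Cofacts exposes an open corpus and API, but this
# URL and response schema are illustrative assumptions, not the real ones.
FACTCHECK_API = "https://example.org/factcheck/search"

def check_claim(forwarded_text: str, top_n: int = 3) -> list[dict]:
    """Look up a forwarded message against a volunteer-maintained corpus."""
    resp = requests.get(
        FACTCHECK_API,
        params={"q": forwarded_text, "limit": top_n},
        timeout=10,
    )
    resp.raise_for_status()
    # Assume each hit pairs a previously reported claim with volunteer replies
    # and a similarity score, so the bot can show its reasoning to the user.
    return [
        {
            "claim": hit["claim_text"],
            "reply": hit["reply_text"],
            "similarity": hit["score"],
        }
        for hit in resp.json().get("hits", [])
    ]

if __name__ == "__main__":
    for hit in check_claim("Drinking hot water cures the flu"):
        print(f"[{hit['similarity']:.2f}] {hit['reply']}")
```

Because the corpus sits behind an open interface like this, any third-party bot can reuse the same knowledge base, which is what turns a single service into a federated trust network.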
For policy deliberation, the vTaiwan process uses Pol.is to surface areas of rough consensus among citizens and stakeholders. Outputs inform public policy and, in several cases, legislation. The critical design choice is not technocratic filtering but visible consensus discovery, which places legitimacy in the pattern of agreement rather than the prestige of any single voice.
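The consensus-discovery step itself can be illustrated in a few lines. Pol.is clusters participants by their agree/disagree votes and highlights statements that win support across opinion groups; the sketch below, with a toy vote matrix, a fixed group count, and an arbitrary threshold, is a simplified rendering of that idea rather than Pol.is's actual algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows are participants, columns are statements:
# 1 = agree, -1 = disagree (toy data, illustrative only).
votes = np.array([
    [ 1,  1, -1,  1],
    [ 1,  1, -1,  1],
    [-1,  1,  1,  1],
    [-1,  1,  1,  1],
    [ 1,  1, -1,  1],
])

def rough_consensus(votes: np.ndarray, n_groups: int = 2, threshold: float = 0.6):
    """Return statements that win majority agreement in EVERY opinion group."""
    groups = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(votes)
    consensus = []
    for stmt in range(votes.shape[1]):
        agree_rates = [(votes[groups == g, stmt] == 1).mean() for g in range(n_groups)]
        if min(agree_rates) >= threshold:  # agreement must hold across all groups
            consensus.append(stmt)
    return consensus

print(rough_consensus(votes))  # [1, 3]: the statements that bridge both camps
```

Divisive statements fail the cross-group test even when a bare majority backs them, which is why this design rewards bridge-building rather than mobilizing the largest faction.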
Taiwan's mask map during COVID-19 exemplified "civic tech + open data + light-touch state" collaboration. Releasing real-time pharmacy stock via API allowed civic hackers to build dozens of apps that routed citizens to available masks, strengthening social trust through transparency.
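The pattern is simple enough to sketch: the state publishes a live stock feed, and anyone can build a router on top of it. The feed URL and JSON fields below are illustrative stand-ins, not the actual Taiwanese open-data endpoint.

```python
import requests

# Illustrative feed URL and schema; the real Taiwanese endpoint differed.
STOCK_FEED = "https://example.gov/open-data/pharmacy-stock.json"

def pharmacies_with_stock(min_masks: int = 50) -> list[dict]:
    """Fetch the live feed and keep only pharmacies with masks to spare."""
    feed = requests.get(STOCK_FEED, timeout=10).json()
    return sorted(
        (p for p in feed["pharmacies"] if p["adult_masks"] >= min_masks),
        key=lambda p: p["adult_masks"],
        reverse=True,
    )

for p in pharmacies_with_stock()[:5]:
    print(f"{p['name']}: {p['adult_masks']} adult masks")
```

The state's only job here was to keep the feed accurate and open; dozens of interfaces, maps, and chatbots grew around it without further central coordination.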
The governance takeaway here is that in high-velocity information environments, open infrastructures, volunteer labor, and gentle public facilitation can outrun formal processes, and earn more trust.
Cultural protocols as governance: Indigenous data sovereignty
Cultural protocols also serve as governance, particularly in Indigenous data sovereignty. The CARE Principles (Collective Benefit, Authority to Control, Responsibility, Ethics) assert that data about Indigenous peoples should be governed by those communities, complementing FAIR with a people-and-purpose orientation. CARE reframes "open data" through relational obligations and rights to self-determination.
In Aotearoa New Zealand, Te Mana Raraunga (the Māori Data Sovereignty Network) articulates that Māori data should be subject to Māori governance, with charters and briefs clarifying rights, accountability, and Treaty-based obligations.
The Indigenous Protocol and AI position paper extends this lens to AI design itself, asking what relationships and responsibilities AI should instantiate if built from Indigenous worldviews. It is a practical call to embed ethical reciprocity into the very architecture of systems, not merely the compliance checklist.
In terms of governance, cultural protocols are not "soft add-ons." In many contexts, they are the only workable governance because they bind data uses to community authority and collective benefit.
Urban data commons and cooperatives: Barcelona’s experiments
Urban data commons and cooperatives offer further insights, as seen in Barcelona's experiments. Barcelona's DECODE pilots integrated cryptographic tools and smart contracts with Decidim (the city's open-source participation platform) so residents could granularly control who sees what data and on what terms, a practical instantiation of "data as a commons."
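The underlying idea can be sketched as a per-attribute consent ledger consulted before any disclosure. The data model below is a deliberate simplification: DECODE enforced such terms with cryptographic credentials and smart contracts, not an in-memory dictionary.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    """Toy per-attribute consent record: who may see which field, for what purpose."""
    grants: dict = field(default_factory=dict)  # (attribute, recipient) -> purpose

    def grant(self, attribute: str, recipient: str, purpose: str) -> None:
        self.grants[(attribute, recipient)] = purpose

    def revoke(self, attribute: str, recipient: str) -> None:
        self.grants.pop((attribute, recipient), None)

    def disclose(self, record: dict, recipient: str, purpose: str) -> dict:
        """Release only the fields this recipient is entitled to, for this purpose."""
        return {
            attr: value
            for attr, value in record.items()
            if self.grants.get((attr, recipient)) == purpose
        }

ledger = ConsentLedger()
ledger.grant("noise_level", "city_planning", "noise-study")
resident_data = {"noise_level": 62, "location": "Gràcia", "wifi_devices": 14}
print(ledger.disclose(resident_data, "city_planning", "noise-study"))
# {'noise_level': 62} -- location and device counts stay private
```

The point of the commons framing is that the ledger belongs to residents collectively, so revocation is a right exercised through the platform rather than a request made to a vendor.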
Salus.Coop, a citizen health data cooperative, developed "data donation" mechanisms so individuals could contribute health data for research through a cooperative they control, including a COVID-era collaboration with the Vall d’Hebron Research Institute. This reframes data subjects as members with governance rights, not just consent-givers.
Cities and cooperatives can act as meso-level stewards, translating abstract rights into tools, dashboards, and contracts that ordinary people can actually use. That creates enforceable norms even before national rules catch up.
Labor at the core of governance: Kenya’s content moderators and data workers
Labor must be at the core of governance, as demonstrated by Kenya's content moderators and data workers. AI depends on human labor, such as data labeling and content moderation, often outsourced to the Global South. In Kenya, a series of lawsuits by Facebook (Meta) content moderators challenged poor working conditions and argued that Meta could be sued locally despite being a foreign company. Kenya's courts confirmed jurisdiction, a landmark step towards supply-chain accountability in digital labor.
Related proceedings have pressed Meta on broader harms (including a case linked to violence in Ethiopia), further signaling that worker and public-interest litigation can shape platform behavior even in the absence of bespoke AI statutes.
No AI governance is complete without labor governance. Organizing, litigation, and public scrutiny create external levers that drive improvements across opaque AI supply chains.
Civic-legal brakes on biometrics: Latin American cases
Civic-legal brakes on biometrics provide another pathway, as in Latin American cases. In Buenos Aires, a judge suspended the city's live facial recognition system (in operation 2019–2022), later declaring its operating conditions unconstitutional due to data protection and accountability failures. This court-led intervention halted a powerful surveillance infrastructure on due-process grounds.
In São Paulo, courts ordered the metro operator to stop using facial recognition/detection tech embedded in advertising screens, citing consent and data protection concerns, again demonstrating that strategic litigation can create immediate constraints even before sector-specific laws mature.
Meanwhile, Brazil's data protection authority ANPD opened a public consultation in June 2025 on biometric data rules, a signal of institutional learning accelerated by civic pressure.
Where risk is high and statutory clarity is thin, courts, watchdogs, and public consultations function as a modular brake: a de facto governance loop that can pause deployments, force transparency, and trigger better rule-making.
Community norms for automation: Wikipedia and OpenStreetMap
Finally, community norms for automation, as practiced on Wikipedia and OpenStreetMap. On Wikipedia, the Bot Approvals Group (BAG) oversees a public, step-by-step process for greenlighting bot tasks. Operators must submit a request, conduct a monitored trial, and iterate based on community feedback. This "lightweight yet real" gatekeeping yields a replicable pattern: graduated permissions, observable trials, and revocability.
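The BAG workflow is essentially a small state machine, which is what makes it transplantable to other automated systems. The states, trial cap, and revocation hook below are a generic reconstruction of the pattern, not Wikipedia's actual tooling.

```python
from enum import Enum, auto

class BotStatus(Enum):
    REQUESTED = auto()  # operator files a public request describing the task
    ON_TRIAL = auto()   # limited, monitored run observable by the community
    APPROVED = auto()   # full permission, still revocable
    REVOKED = auto()    # community pulls the flag if behavior degrades

class BotTask:
    """Generic sketch of graduated permissions with observable trials."""
    TRIAL_EDIT_CAP = 50  # illustrative cap; real trials are scoped case by case

    def __init__(self, description: str):
        self.description = description
        self.status = BotStatus.REQUESTED
        self.edit_log: list[str] = []  # public log reviewers can inspect

    def start_trial(self) -> None:
        self.status = BotStatus.ON_TRIAL

    def record_edit(self, summary: str) -> None:
        if self.status not in (BotStatus.ON_TRIAL, BotStatus.APPROVED):
            raise PermissionError("bot has no active permission grant")
        if self.status is BotStatus.ON_TRIAL and len(self.edit_log) >= self.TRIAL_EDIT_CAP:
            raise PermissionError("trial cap reached; await community review")
        self.edit_log.append(summary)

    def review(self, community_approves: bool) -> None:
        self.status = BotStatus.APPROVED if community_approves else BotStatus.REVOKED

task = BotTask("fix double redirects")
task.start_trial()
task.record_edit("fixed redirect on page X")
task.review(community_approves=True)  # revocable at any later point
```

Every transition is public, so accountability is social before it is technical.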
In OpenStreetMap (OSM), AI-assisted editors like Rapid (formerly RapiD) surface machine-suggested features, but the Organised Editing Guidelines and community review norms govern how bulk or corporate-backed edits proceed. The norms insist on transparency, local engagement, and responsiveness to feedback; again, social checks wrapped around automated contribution.
Open communities show how to align automation with human oversight without heavy statute. They publish norms, require trials, empower moderators, and make accountability social and visible.
What makes these "framework-external" approaches effective?
Across these cases, five design principles recur. First, subsidiarity: governance stays near the problem. Language communities, cities, co-ops, and worker groups keep decisions close to where data is produced and harms and benefits are felt. Second, operationalized trust: open APIs, transparent trials, versioned datasets, and public logs make trust auditable, not just aspirational (Cofacts' open database and API, BAG's public trials, OSM's change-by-change review). Third, cultural legitimacy: CARE and Māori data charters translate rights into locally meaningful authority, which increases adoption and compliance. Fourth, labor as governance: worker organizing and litigation alter incentives for platforms and vendors, injecting social costs into unsafe practices. Fifth, experimentalism: pilots like DECODE and data donation schemes like Salus.Coop demonstrate "minimum viable governance", testable modules that can scale once proven.
These principles don't bypass the state; they equip institutions with grounded partners, tested practices, and credible signals about what works.
A pragmatic playbook for policymakers and practitioners
Here the message for policymakers and practitioners is clear. First, fund local language capacity: allocate public money to community-led language datasets and research hubs, and require public-benefit licensing for outputs to maximize reuse and legitimacy. Second, plug in community fact-checking: treat volunteer fact-checking networks as infrastructure, offer data access and light-touch grants, and ensure integration points with public information channels.
Third, institutionalize rough-consensus tooling: use Pol.is-style methods for contentious policy areas and connect outcomes to formal drafting timelines so participation matters. Fourth, adopt community-led data protocols: where Indigenous or local communities are data subjects, require CARE-aligned agreements and demonstrated "Authority to Control" in procurement and grants. Fifth, back data commons and co-ops: budget for municipal data wallets, consent ledgers, and citizen stewardship pilots; evaluate impacts ex ante and scale what works.
Sixth, set labor baselines in AI supply chains: build mental-health support, pay transparency, and rights to organize into contracts for moderation and labeling vendors, and heed litigation risk as a governance signal. Seventh, create fast brakes for biometrics: establish internal pathways for moratoria when courts or DPAs flag risk, and publish technical notes and assessment criteria before restart. Eighth, translate open-community norms: borrow BAG/OSM patterns (trial periods, community review, revocable permissions) for high-impact algorithmic deployments in the public sector.
Ninth, make auditability the default: require open APIs or third-party access to logs for public-interest oversight, and treat visibility as a design feature, not an afterthought; a minimal sketch follows below. Tenth, network the wins: package successful models (for instance, Cofacts forks and data-co-op blueprints) and support regional replication through peer-to-peer learning.
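On the ninth point, one lightweight way to treat visibility as a design feature is an append-only, hash-chained decision log that outside auditors can verify without trusting the operator. The schema below is an illustrative sketch, not an established standard.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained log: altering any entry breaks the chain."""

    def __init__(self):
        self.entries: list[dict] = []

    def append(self, event: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"ts": time.time(), "event": event, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        """Anyone holding a copy of the log can recompute the chain end to end."""
        prev = "genesis"
        for entry in self.entries:
            body = {k: entry[k] for k in ("ts", "event", "prev")}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append({"system": "benefits-triage", "decision": "flagged", "case": "A-102"})
log.append({"system": "benefits-triage", "decision": "cleared", "case": "A-103"})
print(log.verify())  # True; silently editing any past entry would return False
```

Published periodically, such a log gives watchdogs something concrete to check, turning "trust us" into "verify us".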
Conclusion: frameworks are necessary, but not sufficient
Formal regulation, from procurement standards to comprehensive AI laws, is vital for setting baselines and redress mechanisms. But lived governance often emerges outside formal frameworks, where communities maintain datasets, volunteers fact-check at scale, cities pilot data commons, workers push for accountability, and courts impose immediate constraints on risky deployments. Masakhane and Common Voice show how local knowledge production changes who AI serves. Cofacts and vTaiwan demonstrate how trust can be engineered through openness and consensus discovery. CARE and Māori charters root data rights in culture and authority. Barcelona's DECODE and Salus.Coop translate rights into everyday tools. Kenyan litigation reminds us that labor is part of safety. Wikipedia and OSM reveal how communities can domesticate automation with social norms and trials.
The path forward is not "frameworks or alternatives" but frameworks and alternatives, stitched together. If institutions invite, fund, and learn from these framework-external practices, we can build AI governance that is faster, fairer, and more culturally legible. That is the only way to keep pace with the systems we're trying to govern, by governing where people already live, speak, work, and trust.