The past few years have seen a surge of interest in Artificial Intelligence (AI). As a result, AI is becoming ubiquitous globally, from self-driving vehicles to robot-assisted surgery. By contrast, its usage in Uganda remains nascent, though its powerful capabilities are likely to drive widespread adoption before long. Despite its benefits, a force as powerful as AI ought to be strictly regulated by law because of its potential for misuse. The remainder of this article explains the rationale for regulation.
Ensuring transparent and accountable decision-making.
AI systems are often criticised for the opacity of their decision-making, meaning that we cannot fully understand the reasoning behind their decisions. This is colloquially referred to as the “black box” problem. Gradually, AI is assuming greater decision-making roles. For example, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is a computer-based algorithm currently used in criminal cases in New York, Wisconsin and California to predict a defendant’s likelihood of recidivism. Examples such as COMPAS highlight the need for regulation that requires developers to design AI technologies whose reasoning can be adequately understood and explained. This is crucial because AI will gradually replace human decision-makers in various sectors that have a profound bearing on people’s lives. Explainable decisions will build trust and ease the process of apportioning blame if an AI system malfunctions or causes harm, paving the way for corrective mechanisms.
Ethical concerns.
AI’s enormous potential should be channelled towards benefiting rather than harming society. Globally, AI is being deployed in early disease detection and diagnosis, precision farming, disaster response, assistive technologies for people with disabilities and more. At the same time, AI is being misused to fabricate news stories, impersonate people, generate deepfake pornography, and undermine democratic processes. Regulation should ensure that AI is deployed solely to enhance human life. A robust legal framework would ensure that such ethical considerations are embedded at every stage of development.
Ensuring safety.
Increasingly, AI is becoming a key part of various technologies. For example, in autonomous vehicles, AI performs several functions such as perception, computer vision, behavioural control, and more. In aviation, AI monitors weather, routes and fuel use, and checks aircraft for wear and tear. In the medical field, AI is deployed in robotic surgery across various specialities. Eventually, the use of AI in various technologies in Uganda will become commonplace. Legal regulation must establish safeguards that significantly reduce the possibility of failure and malfunction in AI technologies. These include mandatory pre-deployment safety testing, human-in-the-loop oversight, fail-safe mechanisms, regular post-deployment monitoring and audits, liability and accountability protocols, and public reporting and incident disclosure. These measures would ensure human safety when using AI-assisted technologies.
Ensuring fairness.
AI relies on the data that it is trained on. From this data, it learns patterns which form the basis of new output. Accordingly, the training data must be comprehensive. To illustrate this, imagine an AI tool trained to scout young football talent: it is fed thousands of player profiles, videos, statistics such as goals and assists, and scouting reports. From these, the AI learns to discern the traits of promising players and recommends whom to sign. If the training data includes only players from Africa, the AI system will learn a narrow view of talent, potentially ignoring and excluding skilled players from other continents. Regulation must require that AI be trained on diverse and representative data to reduce the risk of discrimination or exclusion.
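The scouting example can be sketched in a few lines of Python. The player data, feature names, and scoring rule below are all hypothetical and deliberately simplistic; the sketch only shows how a model “trained” on a narrow dataset ends up undervaluing candidates who differ from that dataset, which is the bias the text describes:

```python
from statistics import mean

# Hypothetical training set: every profile comes from a single region,
# so the learned notion of a "promising player" reflects only that region.
training_profiles = [
    {"region": "Africa", "goals": 18, "assists": 7},
    {"region": "Africa", "goals": 15, "assists": 9},
    {"region": "Africa", "goals": 20, "assists": 5},
]

# "Training": learn the average traits of the players the model has seen.
avg_goals = mean(p["goals"] for p in training_profiles)
avg_assists = mean(p["assists"] for p in training_profiles)

def score(profile):
    """Score a candidate by similarity to the learned average profile.
    Candidates unlike the training data score poorly, however skilled."""
    return -(abs(profile["goals"] - avg_goals)
             + abs(profile["assists"] - avg_assists))

# A playmaker from another region: few goals, many assists.
# The model penalises the unfamiliar style, not a lack of talent.
candidate = {"region": "Europe", "goals": 6, "assists": 22}
typical = {"region": "Africa", "goals": 17, "assists": 7}
print(score(candidate), score(typical))
```

Running the sketch shows the unfamiliar playmaker scoring far below a player who resembles the training set, even though nothing in the data says he is less skilled. A representative training set, as the text argues, is what prevents this narrowing.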
Promoting fair competition.
Companies or individuals with access to vast resources such as massive amounts of training data, advanced computing power, and specialised talent could monopolise the AI field, thereby stifling competition and funnelling wealth to a few. Regulation could establish parity through: (a) robust competition laws that prevent predatory pricing and unfair acquisitions; (b) open standards and interoperability ensuring compatibility across platforms and preventing large companies from locking users into proprietary ecosystems; (c) provision of subsidies and grants; (d) giving smaller companies access to shared computing infrastructure; and (e) offering tax breaks to smaller firms. These measures could facilitate fair competition and equitable resource access.
Protecting job markets.
AI has supplanted numerous human roles worldwide as employers continue to lay off workers and replace them with AI, which performs multiple tasks quickly and cost-effectively. For instance, in the US in 2025, thousands of jobs were reportedly cut and replaced by AI, while listings for entry-level corporate roles declined by over 15%. Some speculate that as many as 14 million jobs worldwide will be lost to the rise of AI. Uganda’s AI usage does not yet pose an immediate threat to the labour force, but this could change as AI-assisted technologies are adopted more widely. Regulation should therefore be human-centric, focusing on training and certifying Ugandans for new AI-based roles and ensuring that AI complements rather than replaces human labour where feasible. Without regulation, an uptake in AI usage in the labour market could worsen Uganda’s unemployment problem.
Fuelling innovation.
It is often argued that regulation stifles innovation, perhaps because “regulate” connotes restriction and the raising of barriers that inhibit new ideas. However, good legislation establishes guardrails that protect people while encouraging the flourishing of ideas beneficial to society. AI regulation must balance the need to safeguard Ugandans with the need to stimulate innovation through: (a) establishing regulatory sandboxes that allow firms to test AI innovations in a real-world setting, akin to Singapore’s FinTech Regulatory Sandbox; and (b) promoting Public-Private Partnerships which combine government oversight and resources with the private sector’s expertise and skills.
In conclusion, AI is rapidly changing the world around us. Its use in Uganda is still embryonic, though this is likely to change in the years to come. As more Ugandans discover its potential and the myriad uses to which it can be put, its uptake will surge. When that time comes, Uganda must have a legal framework in place to channel its growth and usage.
Nyombi Solomon Ignatius,
Nyombi & Company Advocates