Artificial Intelligence (AI) has been one of the most pervasive topics in global headlines over the past year, with experts both extolling the much-anticipated benefits of the technology and warning of its dangers, the Country Manager of Microsoft Nigeria, Ola Williams, has said.
She said, “The result is that in much the same way as people once feared the unpredictable and far-reaching impact of electricity, many in society are now wary of the unknown risks they associate with AI. It’s a conundrum that has left many business leaders in Africa asking themselves how they can take advantage of the opportunity presented by AI while avoiding any potential pitfalls.
“There can be little doubt that recent advances in AI have forever changed the way we work, innovate and create. So much so that a future in which every person has an AI Copilot for everything they do is no longer out of reach. These virtual assistants will augment the work that people do by freeing up time for more creativity, imagination and human ingenuity.
“The very name ‘Copilot’ speaks to the role AI will ultimately play in society: not acting as the pilot or on autopilot, but as an assistant to people, elevating their job functions rather than replacing them.
“Microsoft’s research among professionals using Copilot paints a picture of how AI will come alongside users rather than taking over their jobs, with 70 percent of those surveyed saying they are more productive and 77 percent saying they wouldn’t want to give their AI assistant up.
“Already, AI is enabling faster and more profound progress in nearly every field of human endeavour and helping to address some of society’s most daunting challenges, like providing those living in rural areas with access to healthcare and helping farmers increase their productivity to provide for our growing population.
“In fact, it’s estimated that AI could expand the continent’s economy by as much as 50 percent of its current GDP by 2030 if Africa captured just 10 percent of the global AI market.
“But as with other great technological innovations in the past, the use of AI will have significant implications for society.
“This is a big part of the reason why Microsoft expanded its AI for Good Lab into Africa: not only to drive investment in local AI skills and capacity, but also to work towards greater access to and inclusivity of AI through the Africa AI Innovation Council. The work done through Microsoft’s AI Lab is informed by the Council, whose members come from leading African organisations and deeply understand the issues facing the continent.”
She said, “As they look to develop and implement AI, organisations across Africa should also be asking themselves how they can ensure they are creating AI systems that generate a positive impact for everyone.
“To develop a strong AI governance system, they can begin by establishing guiding principles. Through our own learning, Microsoft has formulated six principles we believe should guide AI development.”
Fairness
Ensuring that AI systems treat everyone fairly and without bias begins with people understanding the limitations of AI predictions and recommendations. Though AI can provide helpful suggestions, final decisions must ultimately be made by an accountable person.
In the same way, the developers designing and building these AI systems need to understand how bias might affect the final solution. They can then mitigate that bias by using diverse datasets to train AI models so they can learn and evolve without developing prejudices.
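As a purely illustrative sketch of what that can look like in practice (not Microsoft’s method; the column names, the toy dataset and the weighting scheme are assumptions), a team might start by measuring how well each demographic group is represented in the training data and reweighting rows so that an under-represented group still influences the model:

import pandas as pd

def balance_report(df: pd.DataFrame, group_col: str = "group") -> pd.Series:
    # Share of training rows contributed by each group.
    return df[group_col].value_counts(normalize=True)

def inverse_frequency_weights(df: pd.DataFrame, group_col: str = "group") -> pd.Series:
    # Per-row weights that give every group the same total influence in training.
    counts = df[group_col].value_counts()
    return df[group_col].map(lambda g: len(df) / (len(counts) * counts[g]))

if __name__ == "__main__":
    # Hypothetical training set in which group "B" is heavily under-represented.
    data = pd.DataFrame({
        "group": ["A"] * 90 + ["B"] * 10,
        "label": [1, 0] * 50,
    })
    print(balance_report(data))            # A: 0.9, B: 0.1
    weights = inverse_frequency_weights(data)
    # `weights` could then be passed as sample_weight to most models' .fit() methods.

Reweighting is only one of several possible mitigations, but it illustrates the article’s point: diverse, representative data has to be checked for rather than assumed.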
Reliability and safety
It’s essential that AI systems operate reliably, safely and consistently, not just under normal circumstances but during unexpected situations too. How AI ultimately behaves is generally determined by the range of circumstances developers anticipate during design and testing. It’s therefore critical for developers to ensure AI can handle even unanticipated situations by employing rigorous testing.
Privacy and security
As AI becomes more pervasive, protecting privacy and securing important personal and business information is becoming increasingly important and more complex. It’s critical to ensure that AI systems are compliant with privacy laws that have specific requirements around the way in which data is collected, used and stored.
Inclusiveness
For everyone to benefit from AI, it must incorporate and address a broad range of human needs and experiences. AI has huge potential to improve access to a wide range of essential services such as education and healthcare. But in order to realise this potential, developers need to adopt inclusive design practices, whereby they address aspects of the product environment that could unintentionally exclude people.
Transparency
When AI systems are used to help inform decisions that have tremendous impacts on people’s lives, it is critical that people understand how those decisions were made. For example, a bank might use an AI system to decide whether a person is creditworthy. A crucial part of transparency is what we refer to as intelligibility: the useful explanation of the behaviour of AI systems. Improved intelligibility means that stakeholders can understand how and why AI systems function and identify potential concerns, such as bias or privacy issues.
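To make the intelligibility point concrete, here is a minimal, hypothetical sketch (not a real credit model and not Microsoft’s approach; the feature names, weights and threshold are invented for illustration): a transparent scoring function in which each feature’s contribution to the final decision can be shown to the person affected.

# Hypothetical weights for a toy, transparent credit score; not a real model.
WEIGHTS = {"income": 0.4, "years_employed": 0.3, "existing_debt": -0.5}
THRESHOLD = 1.0

def explain_decision(applicant: dict) -> None:
    # Compute each feature's contribution so the decision can be explained.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    verdict = "approve" if score >= THRESHOLD else "decline"
    print(f"Decision: {verdict} (score = {score:.2f})")
    for feature, contribution in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {feature}: {contribution:+.2f}")

explain_decision({"income": 3.2, "years_employed": 4, "existing_debt": 1.5})

Because every contribution is visible, a declined applicant or a reviewer can see which factors drove the outcome, which is the kind of explanation the transparency principle calls for.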
Accountability
To ensure that AI systems are not the final authority on decisions that impact people’s lives, organisations should develop accountability norms based on industry standards. They should also consider establishing a dedicated internal review body. This body can provide oversight and guidance to the highest levels of the company on which practices should be adopted to help address potential concerns around AI.
It’s thrilling to think of the potential benefits that will come with a future powered by AI. But it would be misguided of us to focus only on the benefits without being clear about the challenges. While we may not be able to fully predict the future, it’s our responsibility to make a concerted effort, through careful planning and oversight, to anticipate and mitigate the unintended consequences of any AI solutions we release into the world.