Building AI you can trust – IT World Canada

It’s been almost five years since tabloid headlines began grabbing our attention, warning of the impending overhaul of modern society at the hands of artificial intelligence (AI). However, while self-driving cars have yet to transport us, the reality is that automated decision-making systems are already widely used in a wide variety of areas, from loan processing and hiring to customer service and product quality control.
While global businesses are keenly aware of the importance of trustworthy AI, the consensus is that AI maturation has been slower than expected due to challenges in commercialization, including a lack of skilled talent, the acquisition and cleaning of data, privacy protection practices and, perhaps most importantly, the need to foster greater societal confidence in the technology. Reliable and explainable AI is essential for businesses: the Global AI Adoption Index 2021 reports that 91 per cent of companies using AI say their ability to explain how it arrived at a decision is essential.
The Canadian AI ecosystem in particular has long argued that trust in AI is essential. Canada boasts one of the most respected research communities in the world, credited with numerous scientific discoveries and innovations that have led to revolutionary advances in AI technologies. However, Canadian companies have been particularly slow to implement and commercialize AI solutions, owing to an abundance of “classic Canadian caution” that is underpinned by a lack of confidence in AI.
Trust is the foundation of AI adoption: it is essential that people can trust both the process and the results of these systems. A 2019 roundtable of AI experts, for example, highlighted explainability, bias and diversity as areas of focus for financial institutions as they adopt and develop best practices for the responsible use of AI. Likewise, a recent survey by the IBM Institute for Business Value cites AI explainability as increasingly important among business leaders and policy makers.
With the federal government’s tabling of Bill C-11, the Digital Charter Implementation Act, in the previous parliamentary session, it also recognized the essential role that policy and regulation must play. While the bulk of the bill consisted of a legislative scheme governing the collection, use and disclosure of personal information for commercial purposes in Canada, for the first time it also included provisions for the regulation of AI, or automated decision-making systems.
As stated in the Digital Charter Implementation Act, 2020: “Companies should be transparent about how they use these systems to make predictions, recommendations or important decisions about individuals. Individuals would also have the right to ask companies to explain how a prediction, recommendation or decision was made by an automated decision-making system and to explain how the information was obtained.”
Explainability, fairness, transparency, robustness and privacy are the pillars of a foundation on which reliable and responsible AI can be built.
As Canadian organizations strive to bring AI systems to market in the hope of reaping the promised benefits, it will be important to think about how to develop such capabilities. In pursuing these goals, organizations should consider the following.
One explanation does not suit everyone
Beyond the technical challenges of explaining black-box AI systems, explanations will differ depending on the persona in question. For example, a data scientist striving to improve a model’s accuracy requires different explainability metrics than a loan officer explaining to a client why a loan application was refused, or a regulator who must prove that a system does not discriminate.
There are different approaches
There are a variety of techniques that can be used to explain machine learning models. One approach considers directly interpretable models, such as decision trees, Boolean rule sets and generalized additive models, that are inherently easy to understand, while post hoc techniques first train a black-box model and then build a separate explanatory model on top of it. Another distinction is between global and local explanations: describing the behaviour of the entire model versus explaining individual predictions.
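To make the distinction concrete, here is a minimal, illustrative sketch, not drawn from the article, that assumes scikit-learn and a stock dataset purely for brevity: a shallow decision tree serves as a directly interpretable model, while a surrogate tree fitted to a random forest’s predictions acts as a post hoc, global explanation of a black-box model.

```python
# Illustrative sketch only: the article names no specific libraries or datasets.
# scikit-learn and the breast-cancer dataset are assumptions used for brevity.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
feature_names = list(X.columns)

# Directly interpretable approach: a shallow tree whose decision rules can be read off.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(interpretable, feature_names=feature_names))

# Post hoc approach: first train a black-box model...
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# ...then fit a simple surrogate model to the black box's predictions.
# Fitted over all samples, this gives a *global* view of overall behaviour;
# a *local* technique would instead explain one prediction at a time.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=feature_names))
```

The surrogate’s printed rules approximate what the black box does overall, which is the kind of explanation a data scientist or regulator might want, whereas a loan officer would more likely need a local explanation of a single decision.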
The role of transparency
Users want to understand how a service works, along with its features, strengths and limitations. Transparent AI systems earn confidence when they disclose what data was collected for training, how it will be used and stored, and who has access to it, and when they clearly explain their purpose to users.
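One hypothetical way to operationalize this, not prescribed by the article, is to publish a simple machine-readable “fact sheet” alongside a model; the field names below are illustrative assumptions rather than any standard schema.

```python
import json

# Hypothetical transparency "fact sheet" published alongside a model.
# All field names and values are illustrative, not a standard schema.
fact_sheet = {
    "purpose": "Score small-business loan applications for manual review",
    "training_data": {
        "sources": ["Internal loan outcomes, 2015-2020 (anonymized)"],
        "collection_consent": "Obtained at application time",
        "storage": "Encrypted at rest in a Canadian data centre",
        "access": ["Model development team", "Internal audit"],
    },
    "known_limitations": [
        "Not validated for applicants with fewer than 12 months of history",
    ],
}

# Publishing this with the model lets users see what data was collected,
# how it is used and stored, and who has access to it.
print(json.dumps(fact_sheet, indent=2))
```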
It’s no longer enough to simply strive for accuracy when building AI systems. The fundamental pillars of trustworthy AI must be built into the design of a system from the start, not bolted on after the fact. It is essential that organizations carefully consider the platforms on which they build and deploy algorithms and automated decision-making systems, to ensure that all aspects of the AI pipeline are supported. Only then will we see truly life-changing AI systems built and deployed with confidence, systems in which we can place our trust.