DigITal Magazine Snapshot – Interview with Ashley Casovan, Executive Director at AI Global

Ashley is an engaged and innovative leader with a deep interest in advancing the public good. She recently left her long-standing career in the public service, where she was most recently Director of Data and Digital for the Government of Canada, to take on the role of Executive Director of AI Global, a non-profit dedicated to creating practical tools to ensure the responsible use of AI. Throughout her career she has worked at the intersection of innovative technology and data, and their impact on providing better information and services.


Q: AI Global launched an open-source Responsible AI Design Assistant to help organizations globally who are designing and developing AI. What did it take to develop a unified assessment to assure responsible design, development, and deployment of AI?

A: It was a lot of work! Many people told me throughout this process that it was impossible. However, the work I led within the Government of Canada to build the Directive on Automated Decision-Making gave me a good understanding of the guardrails practitioners need in order to know what it means to be ethical or responsible in the context of AI systems and solutions.

We started by reviewing over 100 responsible/ethical AI frameworks, principles, policies, and related documents. We analyzed them for commonalities and for best practices we knew from experience were important to highlight. This unified framework resulted in five high-level categories: Accountability, Data Quality, Bias and Fairness, Explainability and Interpretability, and Robustness. Next, we reviewed the documents in detail, extrapolated their recommendations and principles, and converted them into measurable evaluation criteria.

Once our team did the initial draft of the Design Assistant, we worked with our partners, experts from industry, academia, civil society, and government to test and validate the tool. We continue to work with subject matter experts from diverse backgrounds, organizations, and regions of the world to refine and mature the Design Assistant. It was important for us to release this as an open source tool as we want it to be as easy as possible to design AI systems in the most ethical and responsible way possible, and help those building these systems think through some of these questions from the start of their project.

Q: The Design Assistant can be used by organizations implementing the National Standard of Canada, CAN/CIOSC 101:2019 (Ethical Use and Design of Automated Decision Systems), to form part of their framework in managing ethical risk in the design and use of automated decision systems. What advice could you share with CIOs planning to deploy AI solutions to enhance workplace safety as public and private sector organizations re-open with loosened COVID-19 restrictions?

A: Section 4.1.10 of CAN/CIOSC 101:2019 notes that “[a]n ethical impact assessment should form part of the framework to manage ethical risk in the design and use of automated decision systems.” The Design Assistant is a best-of-breed ethical risk assessment, and it can be used as is or adapted for an organization’s specific needs.

Presently, the questions in the Design Assistant are applicable to all types of AI systems in all scenarios. As we mature this tool, we look to integrate industry and region-specific questions. We would be happy to work with Council members to adapt it for their purposes.

Q: In recent years, industry leaders and researchers have cited key challenges and risk mitigation strategies for the adoption of AI systems. How does the Design Assistant consider these challenges and risks in its criteria?

A: To develop the Design Assistant, we looked at leading strategies and reports for the responsible use of AI systems and did substantive analysis not only of the common recommendations, but also of emerging best practices and standards for mitigating risks to people, planet, and business. That research helped us develop a tool that makes it easier to navigate the complex and vast landscape of responsible AI and, hopefully, to know how to design AI systems ethically from the start.

One important lesson learned from my work with the Government of Canada was that many of these best practices are not necessarily new, or specific to AI systems, so we also looked at best practices for risk mitigation in tech and other industries and incorporated them into our evaluation. For example, AI requires high-quality data to make accurate determinations. Data quality is not a new concept; accordingly, we have drawn on long-standing standards such as ISO 8000 to evaluate data quality.

Q: AI Global has been working with partners such as the CIO Strategy Council in developing this open-source version of the Trust Index to help the community while also continuously informing your work. What do you believe is the role of partners in helping to advance the adoption of responsible AI solutions?

A: Yes, the CIO Strategy Council has been a great ally in these efforts, as we share a vision of supporting the development and adoption of technology that is designed in a responsible way. Given that the Council sits at the intersection of industry and government, it is imperative that we leverage its members’ knowledge and expertise to develop tools that are needed and that work. From our perspective, there is no longer a need to develop more ethical AI principles and frameworks; what we need now is to operationalize those principles. Members of the Council can give us useful insight into how these types of evaluations work in practice.

While we have released the Design Assistant as an open source tool, we are also working on the development of a certification program that will be informed by feedback we receive through the tool. In my experience, it’s best to build policy with those who are using it. With that in mind, AI Global will be inviting members of the CIO Strategy Council to a workshop to learn more about the Design Assistant and to provide an opportunity to give feedback.

DigITal Magazine, Issue #3

This article was initially published in the CIO Strategy Council’s member-only magazine. To access the magazine and other member-only materials and information, please contact us to become a member.
