Canadians are divided on AI. PwC’s 2024 Hopes and Fears Survey found that 52% of Canadian employees believe generative AI will increase bias in their organization in ways that affect them.
Whatever the general public may think of AI, businesses are on board, seeing its great potential for optimizing existing systems and creating new ones. For most industries, AI is no longer viewed as simply an option but as an inevitability.
As businesses with the agility and budgets to lead the charge press ahead on developing and implementing AI, and others begin to follow, it can be easy to move at the speed of innovation without viewing AI through a human lens. With that in mind, panelists convened at the 2024 Catalyst Honours in Toronto on 7 October 2024 to discuss “Shaping an Inclusive Future Through Generative AI.”
Kathleen Taylor, Chair, Element Fleet Management, Altas Partners, and The Hospital for Sick Children, spoke about the portfolio of organizations she works with, saying they are generally optimistic about an AI-enhanced future of work. “There’s such enormous opportunity associated with all of this,” she said.
This sentiment was echoed by discussion moderator David Morgenstern, President, Accenture Canada, who said, “We published a paper at Microsoft this spring that said even traditional Canadian adoption [of AI] would add the equivalent of an insurance or retail sector to Canada.” That translates to an annual economic windfall of $180 billion in labor productivity gains by 2030.
AI can assist in innovation for good
Panelists then shifted their focus to implementation. Jennifer Freeman, CEO of PeaceGeeks, highlighted AI’s impact on bridging the gaps between users and government. “We’re really utilizing AI in our digital tools as an equalizer,” she said.
For immigrants, there are many barriers, such as language and technology, to accessing resources. PeaceGeeks partnered with Accenture to create an AI-powered virtual career coaching platform that helps people practice interviewing, build soft skills, and find matching jobs as they work toward permanent residency.
As of 7 October 2024, the platform, which rolled out in June, had already reached more than 100,000 unique users.
AI can perpetuate current biases
Panelists agreed that a healthy dose of caution is needed when creating platforms that rely on AI. Pamela Pelletier, Country Leader & Managing Director, Canada, Dell Technologies, said, “It’s all about the data. Garbage in, garbage out. If you have data that is skewed or that is biased, then you’re going to have a problem.”
She gave the example of AI chatbots, which can “hallucinate.” “Where does ChatGPT get their information from? Twitter, whatever, all these places. So, the data itself is potentially biased.”
Aneela Zaib, Founder & CEO, emergiTEL, said, “The fact is that the LLMs [large language models] that we have on our hands currently, we don’t know how they’re trained or which data they are trained on.”
AI must be fed the right data
What can be done to combat this issue? How can we prevent the same blind spots in the future?
Zaib is already working on solutions. “One of the ways we have tried to overcome [this] is we have fine-tuned these models based on the dataset[s] that are inclusive in nature already. So, when you give a dataset to this tool which is already inclusive and you define set prompts (and there’s a lot of detail…that we can go crazy over), the bottom line is that you have to be very careful in using these systems, giving them the guardrails, and at the same time auditing these systems at the end with the results that you are getting.”
Her company’s tools deliver diverse job candidates to client companies, and AI is part of those tools. When they audit the results the AI delivers and find a job or skillset that isn’t being filled by candidates from certain communities, they analyze those results, fine-tune the model, and run it again. Ongoing stewardship of these AI tools and models is part of the process, especially when the work is so important. Zaib said, “Diversity is a practice that we have to do every day.”
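To make that loop concrete, here is a minimal sketch in Python of what such an audit-and-retune cycle could look like. It is illustrative only: the model interface (match_candidates, fine_tune), the “community” field, and the 10% representation floor are assumptions made for the example, not details of emergiTEL’s actual tools.

```python
# A minimal, hypothetical sketch of the audit-and-retune loop described
# above. The model interface (match_candidates, fine_tune), the
# "community" field, and the 10% floor are illustrative assumptions,
# not emergiTEL's actual system.

from collections import Counter

REPRESENTATION_FLOOR = 0.10  # assumed minimum share of matches per community

def underrepresented_communities(matches, communities):
    """Return the communities whose share of AI-matched candidates
    falls below the representation floor."""
    counts = Counter(candidate["community"] for candidate in matches)
    total = max(len(matches), 1)  # guard against an empty match list
    return [c for c in communities if counts[c] / total < REPRESENTATION_FLOOR]

def audit_and_retune(model, role, communities, inclusive_dataset):
    """Run the model for a role, audit who it surfaced, and if any
    community is underrepresented, fine-tune on inclusive data and rerun."""
    matches = model.match_candidates(role)
    gaps = underrepresented_communities(matches, communities)
    if gaps:
        # Ongoing stewardship: retune on the inclusive dataset and
        # re-run rather than shipping skewed results.
        model.fine_tune(inclusive_dataset)
        matches = model.match_candidates(role)
    return matches
```

The point of the sketch is the shape of the process rather than any particular threshold: audit every run, treat a skewed result as a signal to retune, and then audit again.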
DEI and AI can work together
Pelletier echoed this sentiment, saying, “We have the opportunity with the tech that exists now to actually bring people in who have been historically forgotten or left behind.”
Organizations that use AI must do their part. Pelletier said, “As an organization, we have the responsibility when we’re training models to have that data reflect the values we have as an organization. So, it is really important that we take that data and we curate the data to reflect those values… and then we’ll have a very positive outcome.”
She added that DEI and AI can and should work together to further humanity’s best interests. “We need to have our DE&I representation, those folks, at the table at the beginning as we implement the AI projects. And they need to have the ability or authority to hit that pause button or that stop button. If one were analyzing the data, if something is inappropriate, they can pause that and go and correct it,” she said.
Be intentional with AI
As organizations rush to add AI to various aspects of their businesses, it’s important to take the time to do it correctly. If a company doesn’t have high customer service traffic, it probably doesn’t need an AI chatbot. If a company has a large marketing department, it probably doesn’t need to train AI to write articles. And if a company cares deeply about its values, it shouldn’t license an LLM trained on biased datasets.
Taylor summed it up perfectly: “We can make this work, as long as we’re building capably, testing well, utilizing but then coming back around and making sure that what comes out the other end is exactly the outcome we would have hoped for, whether that’s a new recruit, a system that’s delivering a new customer offering, whatever it may be.”