Human-Centered AI: ‘The future of work lies in the collaboration between humans and AI’

By Amber Dyer, Coordinator, Communications & Marketing

Average reading time: 2 minutes

On Thursday, Aug. 21, at the DRC’s Alliance for Opportunity and Impact event hosted by Improving, a panel of AI experts examined how organizations can leverage AI to drive responsible, impactful action while exploring whether these technologies can help mitigate inherent biases.

The DRC’s Alliance for Opportunity and Impact event featured a panel of experts.

The future of AI must be human-centered.

“The future of work lies in the collaboration between humans and AI,” said Latosha Herron-Bruff, DRC Senior Vice President of Opportunity and Impact. “Technology enhances our natural abilities, allowing us to think more strategically and creatively.”

Companies looking to integrate AI into their practices must strike a balance between fostering human-AI partnerships and discouraging over-reliance on automated systems for decision-making.

“[Employees shouldn’t] just rely on algorithms to give them the answer,” said Shuchi Agarwal, Technical Director at SMBC Group. “They are enhancers to help [employees] do the work faster and more efficiently.”

Because of this, organizations should offer comprehensive training when employees express interest in using AI tools, as a lack of human oversight can allow algorithmic biases to go unchecked.

“Even in traditional machine learning systems, AI is all about historical data,” said Michael Slater, Technical Director at Improving. “If you have inherent biases or direct biases in that historical data, it does show up in AI, depending on how you use it.”

This reality underscores the need for companies to establish ethical AI frameworks.

AI frameworks can help mitigate inherent biases.

“Building ethical practices all comes down to training and understanding the pros and cons [of AI] and having appropriate governance,” said Agarwal. “So even before implementing any kind of technology, any kind of AI inside your workforce, have governance, have a set of principles about dos and don’ts.”

Effective governance involves establishing safeguards to ensure employees understand the implications of AI in their work.

Michael Slater, Shuchi Agarwal, and moderator Maiya Winston discussed the future of AI.

“So, if people are trying to abuse it, misuse it, you should know about that in your organization,” said Slater. “Reporting, auditing usage, automated testing of AI systems, all of that goes into good governance.”

Cross-functional teams are essential for comprehensive AI governance.

In addition, companies can form AI governance councils made up of individuals from various departments to identify and lessen potential biases in the prompt creation process.

“So, even if you think, ‘this is the best algorithm that can solve all our problems,’ send it for review to different kinds of people,” said Agarwal. “As many perspectives as we can add. The richer we can make our data, the better the output.”

Successful implementation depends not only on the technology itself, but on prioritizing a human-centered approach that places employees’ needs first.

“[When used responsibly,] AI can make everyone better,” said Slater. “Some people say that it lowers the barrier to entry, and I think it’s really brilliant to think about it that way.”