By Amber Dyer, Coordinator, Communications & Marketing
Average reading time: 3 minutes
At the Dallas Regional Chamber’s 2025 Convergence AI Dallas conference, presented by Accenture and Google, Amy Blankson from the Digital Wellness Institute stressed that responsible AI implementation demands corporate action that extends well beyond executive leadership discussions.
Blankson, who co-founded the Digital Wellness Institute and serves as its Chief Evangelist, noted a critical gap between intention and execution: “I think the problem is that we’re so busy talking about the need for responsible AI that we’re not actually doing it.”
To bridge this gap, Blankson shared five insights that organizations can use to bring responsible AI into everyday practice.

1. Implement a human-centric design focused on user needs, not just features that sell.
“Designing with humans in mind sounds so simple, sounds so obvious, but so often we are trying to make our benchmarks, to hit certain sales outcomes by creating certain features that sell, and it doesn’t necessarily mean that we’re designing for the humans,” said Blankson.
Human-centric design means prioritizing user needs over market trends. Companies that design this way foster trust and loyalty among their users.
2. Develop ethics by design using cross-functional teams and ethical review processes.
“[This involves] developing cross-functional teams to review [and] to ask the question: ‘Are we doing what we said we were going to do, or what we set out to do?’” said Blankson. In doing so, companies can identify and address potential biases and ethical concerns during AI development.
3. Create transparent and explainable AI that users can understand.
“What we have right now is pages and pages and pages, explaining how we write algorithms, where we’re coming up with the ideas, what the decision trees and the decision nodes are,” said Blankson. “But for the average person, what they want to hear is, how is this functioning? Where might there be bias? What am I not seeing? How do I make sure that I am being as ethical as possible?”
This disconnect is why developers need to translate complex algorithms into plain language, so that users come away with a clear understanding of AI outputs and the processes behind them.
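As a hypothetical illustration of what that translation might look like in practice (Blankson did not prescribe any particular tooling), the sketch below pairs a scikit-learn decision tree with a small helper that returns a one-sentence, user-facing summary instead of the raw tree. The dataset, model, and the `explain_prediction` helper are all assumptions made for the example.

```python
# A minimal sketch of "explainable AI" in practice, assuming a scikit-learn
# model: instead of handing users the full decision tree, surface a short,
# plain-language summary of which inputs drive the model's decisions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

def explain_prediction(model, sample, feature_names, class_names):
    """Return a user-facing explanation, not the raw decision tree."""
    prediction = class_names[model.predict([sample])[0]]
    # Rank features by the model's learned importance (a rough global proxy).
    ranked = sorted(zip(feature_names, model.feature_importances_),
                    key=lambda pair: pair[1], reverse=True)
    top = [name for name, weight in ranked[:2] if weight > 0]
    return (f"Predicted '{prediction}'. The factors that most influence this "
            f"model's decisions are: {', '.join(top)}.")

print(explain_prediction(model, data.data[0], data.feature_names, data.target_names))
```

The design choice here mirrors Blankson's point: the pages of decision trees and decision nodes still exist for auditors, but the average user sees one sentence about what mattered.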
4. Be proactive, not reactive, about potential problems.
Encourage early identification of potential risks through regular assessments and audits of AI systems. “Don’t wait for the crash to happen. Make sure that you’re beginning the process of thinking about coaching all the users, creating, coaching your coders what to do early on to avoid future problems,” said Blankson.
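As a minimal sketch of what a regular, automated audit could look like (an illustrative assumption, not anything Blankson specified), the check below flags a model for human review when approval rates diverge across groups beyond a set tolerance. The `audit_approval_rates` helper and the 10% threshold are hypothetical.

```python
# A hypothetical pre-deployment audit, assuming predictions are logged with a
# group attribute: flag the model for review if approval rates diverge across
# groups beyond a tolerance, before users ever experience the problem.
from collections import defaultdict

def audit_approval_rates(records, tolerance=0.10):
    """records: iterable of (group, approved) pairs. Returns the groups whose
    approval rate deviates from the overall rate by more than `tolerance`."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    overall = sum(approvals.values()) / sum(totals.values())
    return {g: approvals[g] / totals[g] for g in totals
            if abs(approvals[g] / totals[g] - overall) > tolerance}

# Example: run the audit on every batch of predictions, not after a complaint.
sample = [("A", True)] * 80 + [("A", False)] * 20 + [("B", True)] * 50 + [("B", False)] * 50
flagged = audit_approval_rates(sample)
if flagged:
    print(f"Review needed; approval rates out of tolerance: {flagged}")
```

Running a check like this on a schedule is one concrete way to avoid "waiting for the crash": the disparity surfaces in a report to the team rather than in a headline.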
5. Practice empathy and digital citizenship.
“Building good AI starts with empathizing with the people who are using it, not just making it,” said Blankson. This approach requires cultivating awareness of the ethical considerations surrounding AI use while advocating for transparency and accountability in technology development and deployment.
Applying these insights fosters accountability among creators, users, and regulators, which is essential to ensure that AI systems are designed and implemented responsibly.
“We’re all in it together. So, what we can do to begin to see the deeper narrative, to shape the narrative that we are emerging with in our responsible AI, is so crucial,” said Blankson.
We’ll be back together for Convergence AI 2026 on March 30–31 at the Irving Convention Center. Learn more and register at www.convergencedallas.ai.