
Are Ethics at Risk in the Age of Business AI?

By Graeme

Estimated reading time: 4 minutes

I have been delivering training on the use of AI for several years now. Since the hype cycle began with ChatGPT's arrival in the business world, businesses have been clamouring to use it in their day-to-day operations.

Possibly the greatest challenge is how to do this safely. Businesses need to protect their data and reputation while continuing to innovate, so they don't feel they are being left behind. There is also the concern that some language models can, and often do, hallucinate, producing inaccurate output.

By their very nature, large language models (LLMs) are trained on vast data sets scraped from legitimate sources as well as from questionable repositories of knowledge. They are trained to reduce bias and have ethical guardrails built in, but some tools can still lean one way or the other.

When I deliver training to businesses on AI, I always go to great lengths to explain how AI consumes vast amounts of resources. Data centres and supercomputers are hungry for three things: energy, water, and land. I explain that using generative AI to make memes for memes' sake is not really in the spirit of companies lowering their carbon footprint. I'm not trying to guilt-trip anyone; instead, I'm encouraging people to think first and generate after.

There is a growing movement called Responsible AI (RAI), which is designed to help businesses use AI without eroding trust, purpose, or ethical values. RAI is an ongoing commitment for businesses to keep creating value and building trust while protecting against the inherent risks of AI usage. If leaders can govern the use of AI by mapping, measuring, and managing its use in the day-to-day, they stand a much better chance of remaining compliant and current in their thinking.

Many organisations are seeking appropriate frameworks. Their intentions are well placed: these frameworks ground leaders in cautious optimism as we all move into a new era of business administration.

In the US, the National Institute of Standards and Technology produced the Artificial Intelligence Risk Management Framework (AI RMF 1.0). It suggests that risk management in AI should ensure the systems being used are valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with their harmful biases managed.
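To make those seven characteristics concrete, here is a minimal sketch of how an internal review might check an AI tool against them. The function name, tool name, and pass/fail assessments are all hypothetical illustrations, not part of the NIST framework itself.

```python
# Hypothetical sketch: checking an AI tool against the seven NIST AI RMF 1.0
# trustworthiness characteristics. The review logic and names are illustrative.

CHARACTERISTICS = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair, with harmful bias managed",
]

def review_tool(name, assessments):
    """Return the characteristics still unaddressed for a given tool.

    `assessments` maps each characteristic to True once the business is
    satisfied it has been addressed; anything missing or False is a gap.
    """
    gaps = [c for c in CHARACTERISTICS if not assessments.get(c, False)]
    status = "ready for approval" if not gaps else "needs review"
    return {"tool": name, "status": status, "gaps": gaps}

# Example: a chat assistant that has not yet documented accountability.
result = review_tool("chat-assistant", {
    "valid and reliable": True,
    "safe": True,
    "secure and resilient": True,
    "accountable and transparent": False,
    "explainable and interpretable": True,
    "privacy-enhanced": True,
    "fair, with harmful bias managed": True,
})
print(result["status"])  # needs review
print(result["gaps"])    # ['accountable and transparent']
```

Even a lightweight register like this forces the conversation each characteristic is meant to prompt before a tool goes into day-to-day use.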

The current US administration recently released their America’s AI Action Plan, as a follow-up to President Trump’s January 2025 executive order that aimed “to sustain and enhance America’s global AI dominance.” It consists of three pillars: accelerate AI innovation, build American AI infrastructure, and lead in international AI diplomacy and security.

The European Commission has established ethical guidelines for trustworthy AI. It has also developed several initiatives, including the AI Continent Action Plan and the AI Office.

The United Nations is also actively engaged in AI Governance. In September 2024, world leaders adopted a Pact for the Future, which included a section on the Global Digital Compact and a Declaration on Future Generations. In January 2025 they created a new UN Office for Digital and Emerging Technologies (ODET) which underscores the need for stronger cooperation in governing digital technologies.

So where should your business start? Here are four practical steps:

  1. Establish a foundational understanding of what RAI is and why it matters. I have created a useful NotebookLM knowledge base for you to use in your research.
  2. Implement a robust internal AI governance framework to outline and standardise the use of AI tools throughout your business.
  3. Continue reading up on the topic and learn as much as you can. There are plenty of courses available, both free and paid, on platforms like Coursera.
  4. Contact Beyond Touch and let us help you work through the safe and ethical use of AI in your organisation.

Whether organisations in your area are seeking business guidance or looking to explore the art of the possible, the team at Beyond is right here to assist you.

Read more articles on Artificial Intelligence for Business here
