Do ethics fly out the window when automation comes through the door?
I have been delivering training on the use of AI for several years now. Since the hype cycle began with ChatGPT entering the business world, companies have been clamouring to use it in their day-to-day operations.
Possibly the greatest challenge is how to do this safely. Businesses need to protect their data and reputation while still innovating, so they do not feel left behind. There is also the concern that some language models can, and often will, hallucinate, producing inaccurate output.
Bias at source
By their very nature, large language models (LLMs) are trained on vast data sets scraped from legitimate sources as well as from questionable repositories of knowledge. They are trained to remove bias and have ethical guardrails built in, but some tools can still lean one way or the other.
Environmental impact
When I deliver training to businesses on AI, I always go to great lengths to explain how AI consumes vast amounts of resources. Data centres and supercomputers are hungry for three things: energy, water, and land. I explain that using generative AI to make memes, for memes' sake, is not really in the spirit of companies lowering their carbon footprint. I'm not trying to guilt-trip anyone; instead, I'm encouraging people to think first and generate after.
Responsible Artificial Intelligence
Moving through Policy to Best Practice
There is a growing movement called Responsible AI (RAI), which is designed to help businesses use AI without eroding trust, purpose, or ethical values. RAI is an ongoing commitment: businesses continue to create value and build trust by protecting against the inherent risks of AI usage. If leaders can govern the use of AI by mapping, measuring, and managing its use in the day-to-day, they stand a much better chance of remaining compliant and current in their thinking.
Key Characteristics of Trustworthy AI
Many organisations are seeking appropriate frameworks. Their intentions are well placed, grounding leaders in demonstrating cautious optimism as we all move into a new era in business administration.
In the US, the National Institute of Standards and Technology produced the Artificial Intelligence Risk Management Framework (AI RMF 1.0). It suggests that risk management in AI should ensure the systems being used are valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with their harmful biases managed.
The current US administration recently released their America’s AI Action Plan, as a follow-up to President Trump’s January 2025 executive order that aimed “to sustain and enhance America’s global AI dominance.” It consists of three pillars: accelerate AI innovation, build American AI infrastructure, and lead in international AI diplomacy and security.
The European Commission has established ethical guidelines for trustworthy AI. It has also developed several initiatives, including the AI Continent Action Plan and the AI Office.
The United Nations is also actively engaged in AI governance. In September 2024, world leaders adopted the Pact for the Future, which included a section on the Global Digital Compact and a Declaration on Future Generations. In January 2025, the UN created a new Office for Digital and Emerging Technologies (ODET), underscoring the need for stronger cooperation in governing digital technologies.
How to implement responsible AI
- Establish a foundational understanding of the principles of what RAI is and why it matters. I have created a useful NotebookLM knowledge base for you to use in your research.
- Implement a robust internal AI governance framework to outline and standardise the use of AI tools throughout your business.
- Continue reading up on the topic and learning as much as you can. There are plenty of courses available, both free and paid, on platforms like Coursera.
- Contact Beyond Touch and let us help you work through the safe and ethical use of AI in your organisation.
Whether organisations in your area are seeking business guidance or looking to explore the art of the possible, the team at Beyond is right here to assist you.
Read more articles on Artificial Intelligence for Business here
Footnote: Keeping Up with AI Ethics in 2025
For the latest developments in business ethics and artificial intelligence, see:
- Institute of Business Ethics, “An Ethical Approach to Artificial Intelligence,” March 13, 2025, which outlines current best practices for transparency, accountability, and responsible adoption of AI within organisations.
- Lumenalta, “Responsible AI checklist (updated 2025),” March 6, 2025, which provides actionable steps for bias assessment, data safeguards, and ongoing AI governance.
- Orienteed, “Ethical AI in 2025: Key to Retail, Bias, and Compliance,” October 16, 2025, presenting new compliance pressures and ethical priorities driven by emerging regulation and public trust expectations.
- Forbes, “AI Governance: The CEO’s Ethical Imperative In 2025,” February 3, 2025, highlighting the importance of executive oversight and legal frameworks for responsible AI in business decision-making.