Creating sustainable, trustworthy AI

We heard from Anna Felländer, Founder, AI Sustainability Center, who advised on how organizations can achieve sustainable and trustworthy AI.

The acceleration of AI in the last nineteen months has raised further ethical and societal risks. Anna Felländer, founder of the AI Sustainability Center, explained at the Data Innovation Summit 2021 how organizations can use sustainable AI to address these issues and avoid costly legal, financial, and reputational risks.

Felländer founded the AI Sustainability Center in 2018. She believes that AI will become as commonplace as electricity and will continue to revolutionize all aspects of our lives. However, she notes that AI is not without its risks: privacy invasion, discrimination, loss of autonomy, and disinformation are among the challenges companies face. AI can also harm human safety, amounting to a violation of human rights. Unfortunately, much of this escapes human governance, both because AI scales quickly and because attention has mainly been focused on the engineering side.

Why is it so hard to mitigate societal risks in AI?

To begin with, Felländer explains that algorithms are optimized for targets, sales, and profits. Along the way, the AI makes ethically loaded decisions in pursuit of those targets, and these decisions remain hidden from the coders. There are no go-to-market solutions for these issues, which is why it is so costly, financially and reputationally, when AI presents ethical problems. Felländer references Microsoft, which came under fire for facial recognition software that failed to recognize Black people. The same bias appears in credit lending, where AI-based automated decisions will authorize credit for a man but not for a woman even when both have the same credit score. Sometimes these risks arise because the coder is unable to translate the company's values into the code. Coders are not trained in the multidisciplinary skillset this kind of coding requires, and they may be unaware of the context in which the AI will be used, so it is unsurprising that bias like this takes place. Asymmetric information makes it difficult to know how to mitigate it. Still, it is something all organizations need to consider when implementing their AI strategy, as the ethical and financial risks are far higher than an organization may initially realize.
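To make the credit example concrete, the minimal sketch below shows the kind of bias check it implies: comparing automated approval rates for men and women who hold the same credit score. It is purely illustrative; the data, column names, and decisions are hypothetical and nothing here comes from Felländer's talk or the AI Sustainability Center's tooling.

```python
# Illustrative bias check on a hypothetical decision log from an automated
# credit system. All values and column names are invented.
import pandas as pd

decisions = pd.DataFrame({
    "credit_score": [720, 720, 680, 680, 650, 650, 720, 680],
    "gender":       ["M", "F", "M", "F", "M", "F", "F", "M"],
    "approved":     [1,   0,   1,   0,   0,   0,   1,   1],
})

# Approval rate per gender within each credit-score band: if the rates diverge
# for identical scores, equally creditworthy applicants are being treated
# differently, which is the bias described above.
rates = (
    decisions
    .groupby(["credit_score", "gender"])["approved"]
    .mean()
    .unstack("gender")
)
rates["gap"] = (rates["M"] - rates["F"]).abs()
print(rates)
```

A real audit would of course control for every legitimate input, not just the credit score, but the principle is the same: look for outcome gaps between groups that the model's stated inputs cannot explain.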

Developing a methodology for ethical risks in AI

Felländer has spent years developing a methodology to detect, assess, and govern ethical risks in AI applications. The AI Sustainability Center approaches this from a multidisciplinary perspective, through a legal, technical, and societal lens. Just as importantly, to activate the whole organization, its framework for detecting, assessing, and mitigating these risks visualizes ethical considerations in AI applications that would otherwise have gone unexamined. These considerations do not belong in a silo, or only with the legal or tech team; they are business-critical. Felländer gives examples of the ethical trade-offs in AI solutions:

  • Explainability vs accuracy
  • Fairness vs precision
  • Profits vs values

For example, if an individual had a machine attached to them to measure heart rate, precision would need to be favoured over fairness. When translating this to AI ethics, organizations need support in navigating these trade-offs. Felländer notes that this is becoming increasingly essential for high-risk organizations, which will need to explain their AI procedures when the EU Artificial Intelligence Regulation comes into effect in 2024.
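As a purely illustrative complement to the fairness-versus-precision trade-off above, the toy sketch below shows how narrowing the approval-rate gap between two groups can cost a model some precision. Every number, group label, and threshold is invented for illustration and is not drawn from the talk.

```python
# Toy simulation of the "fairness vs precision" trade-off. All numbers are
# invented; the two groups and thresholds are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
group = rng.choice(["A", "B"], size=n)

# Hypothetical ground truth: the historical data contains a lower share of
# "good" applicants in group B.
p_good = np.where(group == "A", 0.7, 0.5)
good = rng.random(n) < p_good

# A reasonably informative score: good applicants tend to score higher.
score = rng.normal(loc=np.where(good, 0.65, 0.35), scale=0.15)

def report(threshold_a, threshold_b):
    """Approve applicants above a per-group threshold; report overall
    precision and the approval rate of each group."""
    thr = np.where(group == "A", threshold_a, threshold_b)
    approved = score >= thr
    precision = good[approved].mean()          # share of approvals that are "good"
    rate_a = approved[group == "A"].mean()     # approval rate, group A
    rate_b = approved[group == "B"].mean()     # approval rate, group B
    return round(precision, 3), round(rate_a, 3), round(rate_b, 3)

# Same threshold for both groups: best precision, but unequal approval rates.
print("shared threshold:  ", report(0.50, 0.50))
# Lowering group B's threshold roughly equalises approval rates, but overall
# precision drops -- one version of the trade-off described above.
print("adjusted threshold:", report(0.50, 0.42))
```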

Organizations are gradually rising to meet this responsibility head-on, with Salesforce among the first US organizations to hire an ethical AI officer. Organizations just moving to explainable AI need to ask: What needs to be explained? What can be explained? To whom do we need to explain it? Companies will now need to scan their vast portfolios of AI and ML systems to determine what must be explained to EU regulators.
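One modest, model-agnostic answer to "what can be explained" is a feature-importance report. The sketch below uses permutation importance on a stand-in model; the model, feature names, and data are hypothetical and are not taken from the article or any specific vendor's tooling.

```python
# Permutation importance: how much accuracy a model loses when one input is
# shuffled. A large drop means that input drives the model's decisions.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical applicant data: three features, the last one irrelevant by construction.
feature_names = ["income", "credit_history", "postcode_noise"]
X = rng.normal(size=(1_000, 3))
weights = np.array([1.5, 1.0, 0.0])
y = (X @ weights + rng.normal(scale=0.5, size=1_000)) > 0

def model_predict(X):
    """Stand-in for any trained decision model that must be explained."""
    return (X @ weights) > 0

def permutation_importance(X, y, predict, feature):
    """Accuracy drop when one feature's values are shuffled."""
    baseline = (predict(X) == y).mean()
    X_shuffled = X.copy()
    X_shuffled[:, feature] = rng.permutation(X_shuffled[:, feature])
    return baseline - (predict(X_shuffled) == y).mean()

for i, name in enumerate(feature_names):
    print(f"{name}: accuracy drop {permutation_importance(X, y, model_predict, i):+.3f}")
```

Reports like this answer "what can be explained" at a technical level; deciding what needs to be explained, and to whom, remains the organizational question Felländer raises.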

Ethical AI screening

Felländer says that the AI Sustainability Center has created ethical AI filters. She describes the risks ignored until now as a “dark cloud of pollution” that companies such as the AI Sustainability Center seek to mitigate. She believes the EU’s regulation will position the region as a driver of transparency and give compliant organizations a competitive edge. The AI Sustainability Center has built an insight engine for ethical AI governance, with an ethical AI profiler that screens AI applications and businesses against ethical and societal risks. It has also developed solutions that can predict risks and recommend mitigation tools. Felländer says the AI Sustainability Center supports scaleups, major corporations, and recruitment companies. She concludes that all organizations should embrace AI, and ethical AI, and encourages them to educate themselves on the ethics of AI.


Amber Donovan-Stevens

Amber is a Content Editor at Top Business Tech
