September 19, 2018 (updated 20 Sep 2018 10:53am)

IBM Releases “Black Box” Breaker on IBM Cloud

Company has also open-sourced an AI bias detection toolkit on Github

By CBR Staff Writer

IBM has released and open-sourced an AI bias detection and mitigation toolkit on code repository GitHub. The move comes as it also released a new IBM Cloud-based software service, which aims to make the “black box” of third-party algorithms transparent so that organisations can manage AI systems from a wide variety of industry players.

The AI Fairness 360 toolkit is an open-source library that provides developers with the means to test AI models for biases, while also providing algorithms to mitigate any issues discovered inside models and datasets.
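To make the idea of a fairness metric concrete, here is a minimal plain-Python sketch of the kind of group-level measures such a toolkit reports. The data is hypothetical, and this does not use AI Fairness 360’s own API; it simply illustrates two widely used metrics, statistical parity difference and the disparate impact ratio.

```python
# Hypothetical toy outcomes: 1 = favorable decision (e.g. loan approved),
# recorded separately for a privileged and an unprivileged group.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # privileged group
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # unprivileged group
}

def favorable_rate(labels):
    """Share of favorable (1) outcomes within one group."""
    return sum(labels) / len(labels)

rate_a = favorable_rate(outcomes["group_a"])  # 6/8 = 0.75
rate_b = favorable_rate(outcomes["group_b"])  # 3/8 = 0.375

# Statistical parity difference: 0 means parity; negative values mean
# the unprivileged group receives the favorable outcome less often.
spd = rate_b - rate_a  # -0.375

# Disparate impact ratio: values below roughly 0.8 are a common red flag.
di = rate_b / rate_a  # 0.5

print(spd, di)
```

A toolkit built on metrics like these can flag a dataset or model output as biased whenever the ratio falls outside a chosen threshold.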

Alongside the open-sourcing of that toolkit, IBM said it has introduced “new trust and transparency capabilities” on the IBM Cloud that work with models built in a wide variety of machine learning frameworks and AI build environments, such as Watson, TensorFlow, SparkML, AWS SageMaker and AzureML.

This means organizations can apply these new controls to most of the popular AI frameworks used by enterprises, helping them identify biases that could skew an algorithm’s results.

Release Comes Amid Concerns about Data Bias in “Black Box” Models 

Writing in the California Law Review, Solon Barocas and Andrew D. Selbst noted of the datasets used to train AI models: “Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply reflect the widespread biases that persist in society at large.”

The AI Fairness 360 toolkit tries to address these issues by giving developers a comprehensive set of metrics for models and datasets, allowing them to test their own AI research for inherent biases. The toolkit can also inform the user what is biased and why it has been highlighted as an issue.
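One family of mitigation algorithms works by reweighting training samples so that group membership and outcome become statistically independent. The sketch below, on hypothetical data, shows the idea: each (group, label) cell is weighted by its expected probability over its observed probability, after which the weighted favorable rates of the two groups match. This is a simplified illustration of the reweighing approach, not the toolkit’s own implementation.

```python
from collections import Counter

# Hypothetical (group, label) training pairs: label 1 = favorable outcome.
samples = [
    ("a", 1), ("a", 1), ("a", 1), ("a", 0),
    ("b", 1), ("b", 0), ("b", 0), ("b", 0),
]
n = len(samples)

group_counts = Counter(g for g, _ in samples)
label_counts = Counter(y for _, y in samples)
joint_counts = Counter(samples)

# Reweighing: weight each (group, label) cell by the ratio of the
# expected joint probability (assuming independence) to the observed one.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
    for (g, y) in joint_counts
}

def weighted_rate(group):
    """Weighted favorable-outcome rate for one group."""
    num = sum(weights[(g, y)] for g, y in samples if g == group and y == 1)
    den = sum(weights[(g, y)] for g, y in samples if g == group)
    return num / den

print(weighted_rate("a"), weighted_rate("b"))  # both 0.5 after reweighing
```

Training a model on the reweighted samples then discourages it from learning the group membership itself as a predictor.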


David Kenny, SVP of Cognitive Solutions at IBM, said in a release shared Wednesday: “IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice. We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”


As an illustration of the issues that can result from algorithmic bias, open-source development platform Project Jupyter outlines how a machine learning model can be biased when trying to predict loan repayment outcomes: “Loan repay model may determine that age plays a significant role in the prediction of repayment because the training dataset happened to have better repayment for one age group than for another.”

“This raises two problems: 1) the training dataset may not be representative of the true population of people of all age groups, and 2) even if it is representative, it is illegal to base any decision on an applicant’s age, regardless of whether this is a good prediction based on historical data.”
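The loan example above can be sketched numerically. In this hypothetical snippet, a deliberately naive “model” simply learns the majority repayment outcome per age group from a skewed training sample, and so ends up predicting default for every younger applicant regardless of individual merit; the group names and data are invented for illustration.

```python
# Hypothetical training data echoing the Jupyter example: repayment
# outcomes (1 = repaid) that happen to differ by age group in the sample.
train = [
    ("under_30", 0), ("under_30", 0), ("under_30", 1),
    ("over_30", 1), ("over_30", 1), ("over_30", 0),
]

def majority_label(group):
    """Majority repayment outcome observed for one age group."""
    labels = [y for g, y in train if g == group]
    return int(sum(labels) * 2 >= len(labels))

# A naive "model" that predicts the group's majority outcome: it learns
# age as the decisive feature purely because the sample is skewed.
predict = {g: majority_label(g) for g in ("under_30", "over_30")}
print(predict)  # {'under_30': 0, 'over_30': 1}
```

Even though some under-30 applicants in the sample did repay, the learned rule denies them all, which is exactly the kind of pattern a bias-detection metric is meant to surface.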

The IBM Cloud service, meanwhile, which is fully automated, will give enterprises explanations in digestible terms that show what factors were considered in the decision to highlight biases, while also showing what level of confidence it has in the judgement.

All this is displayed in a visualised dashboard on the IBM Cloud.

IBM states in the announcement: “The records of the model’s accuracy, performance and fairness, and the lineage of the AI systems, are easily traced and recalled for customer service, regulatory or compliance reasons – such as GDPR compliance.”
