February 6, 2024

The importance of being earnest about AI regulations

Successful corporate deployments of AI require enterprises to pay close attention to the shifting sands of AI regulations, not only in their domestic jurisdiction but around the world.

By Frank Baalbergen

With the rapid deployment of artificial intelligence (AI) in business environments, many leaders find themselves at a crossroads between innovation and regulation. From operational efficiency to decision-making, adopting AI offers organisations many benefits from an optimisation perspective. Even so, adhering to the patchwork of legislation on the subject, conceived by myriad regulators and government agencies, also presents significant challenges.

As such, companies of all stripes need to pay special attention to the regulatory landscape emerging around AI. From mundane integrations to deployments that transform how the business works at a basic level, CIOs need to invest time and resources in understanding how best to comply with existing guidelines as well as anticipate what might be coming next across multiple legal jurisdictions.

A CIO of a multinational headquartered in London will have to be more than au fait with emerging regulatory frameworks in Paris, Bern and Berlin. (Photo by Shutterstock)

Understanding global AI regulations and decision-making

Governments around the world have proposed various forms of AI regulation. The EU AI Act, for example, which is in the final stages of negotiation, takes a more risk-based approach, imposing stricter requirements on high-risk AI applications and emphasising transparency and accountability.

The UK government, on the other hand, has outwardly adopted a more liberal, ‘hands-off’ approach, leaving organisations more room to interpret and implement the UK’s core AI principles. However, companies must comply with the legislation of every jurisdiction in which they operate: a business based in the UK must still align with EU rules for its activities within the EU. This fragmentation of AI regulations presents unique challenges for organisations operating globally, requiring a versatile approach to compliance.

As governmental bodies across the world hammer out the exact language of their AI laws, the need for transparent and accountable AI-related decision-making becomes even more pronounced. As AI is increasingly employed in critical areas like HR and finance, understanding the implications of its adoption becomes essential – and requires dedicated, intentional training.

Tech and HR leaders must partner to ensure learning and development programmes include tools that upskill employees in using AI to optimise decision-making, and that enable them to trace and, where necessary, reverse AI-informed actions and decisions. This is essential not only for compliance with upcoming legislation but also for internal due diligence and transparency.

Building ethical AI practices 

Implementing ethical AI practices that promote accountability and transparency can be challenging – not only with shifting regulations but also due to inherently biased data. Therefore, organisations need to establish comprehensive guidelines for transparent and observable AI development and scalable data monitoring systems to adopt AI in an ethical and compliant manner.


These guidelines must encompass more than just legal compliance: they need to integrate robust data governance protocols. This involves clearly defining teams’ responsibilities in data collection, use and management, with an eye towards how regulations like the GDPR may influence or define those protocols. Establishing these guidelines is crucial for ensuring that personal and customer data is handled responsibly, safeguarding user privacy and maintaining the integrity of the data used in AI systems.

Clear principles and data usage training must underpin an organisation’s AI development governance. This training should focus on safeguarding data quality and ensuring AI systems are maintained and operated ethically. By implementing these practices, companies can adopt AI practices that are technically proficient and ethically sound, aligning with regulatory standards and societal expectations.

There’s a wide-open frontier out there for commercial artificial intelligence deployments – so long as they remain timely, ethical and efficient. (Photo by Shutterstock)

Embracing ethical AI regulations

In the age of AI, low-code platforms are emerging as a pivotal tool in helping organisations stay agile, particularly in data security, data quality and operational cost management. Because low-code platforms build governance into the development environment itself, data is already highly governed, easing the burden on organisations building and implementing AI. This pre-governed environment ensures high-quality, secure and efficiently managed data, in keeping with regulatory requirements.

The democratisation of AI development opens up opportunities for line-of-business teams to contribute to and benefit from the practical use of AI. Organisations can develop and deploy solutions flexibly, at a pace that matches shifting needs. In a low-code platform, software can be adapted through quick, iterative updates – helping organisations comply with changing regulations and leverage evolving technologies to best effect.

As AI regulations continue to roll out, organisations face the dual challenge of navigating complex global rules and ensuring ethical AI practices. It’s clear that companies must develop a deep understanding of the regulatory environments in which they operate. Leaders need to commit to implementing real change through the development of AI safeguarding programmes and ethical practices. This can begin by examining existing AI use cases, where tools like low-code emerge as a way to support AI adoption in a secure and reliable manner.

Successful adoption of AI must rest on the bedrock of leaders who approach compliance proactively and pioneer a future in which AI and data governance are implemented responsibly, ethically and transparently.

Read more: Businesses are aware of their cybersecurity weaknesses. Will 2024 be the year they do something about them?
