Most data used to train machine learning models will be synthetic and automatically generated, a new report from Gartner predicts. Only 1% of all AI training data was synthetic in 2021, but analysts suggest the figure could hit 60% by the end of 2024. Governance and vigilance about biases are essential to prevent this data from suffering the same challenges as organic data, one expert told Tech Monitor.
Synthetic data is generated by AI to fill gaps in real-world information, such as medical imaging data or records of specific disease patterns. In new research on trends in data science, published this week, Gartner predicts that by 2024 more than 60% of all AI model training data will be synthetic, something it says will lead to better AI systems.
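In its simplest form, the idea Gartner describes can be sketched as fitting a generative model to a small real dataset and sampling new records from it. The sketch below is an illustrative assumption, not any vendor's method: it fits a multivariate Gaussian to 50 invented "patient" records and draws 500 synthetic ones. Production systems use far richer generators (GANs, diffusion models, language models), but the principle is the same.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend these are 50 real patient records with three numeric
# features (e.g. systolic BP, diastolic BP, heart rate) -- all
# invented for illustration.
real = rng.normal(loc=[120.0, 80.0, 70.0],
                  scale=[15.0, 10.0, 12.0],
                  size=(50, 3))

# "Fit" the generative model: estimate the mean and covariance
# of the real data.
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample 500 synthetic records from the fitted distribution.
# They share the real data's statistics without copying any
# individual record.
synthetic = rng.multivariate_normal(mu, cov, size=500)

print(synthetic.shape)  # (500, 3)
```

The appeal for privacy is visible even in this toy: the synthetic rows preserve aggregate structure for model training while no row corresponds to a real individual.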
This move from organic to synthetic training data is part of a wider shift towards data-centric AI techniques, such as those used to produce large language and foundation models. “Solutions such as AI-specific data management, synthetic data and data labelling technologies, aim to solve many data challenges, including accessibility, volume, privacy, security, complexity and scope,” Gartner’s report says.
A recent report by GlobalData found that synthetic data start-ups were “redefining the landscape of data generation”. Describing it as the “master key to AI’s future”, Kiran Raj, practice head of disruptive tech at GlobalData, said the start-ups were breaking through the shackles of data quality and regulation. “As the demand for reliable, cost-effective, time-efficient, and privacy-preserving data continues to accelerate, start-ups envision a future powered by synthetic data, ushering a new era of machine learning progress,” Raj said.
Synthetic data has the potential to deliver benefits across a range of sectors. In healthcare, it is already being used to augment real patient data for training doctors, improving drug discovery and optimising systems. In the financial services sector, it is helping to mitigate risk and detect fraud. And in retail, it is improving demand forecasting, personalised marketing and fraud detection.
AI moving to the edge
The other key trends noted by Gartner include a shift towards edge processing for AI. Processing data at the point of creation will help organisations gain real-time insights and detect new patterns, according to the report. It will also make it easier to meet ever more stringent data privacy requirements. The organisation predicts more than 55% of data analysis by neural networks will occur in an edge system by 2025.
Gartner analysts predict there will be a greater emphasis on responsible AI. This means ensuring that the technology is used as a positive force rather than a threat to society, and that businesses make ethical choices when adopting AI that address societal value, risk, trust, accountability and transparency. These are the core requirements underpinning many of the AI regulations being developed around the world, including in the UK.
Organisations should adopt a “risk-proportional approach” to AI investment and deployment, the analysts warned. This includes exercising caution when applying solutions and models, and seeking assurances from vendors that they are managing their own risk and compliance obligations. Doing so will help protect organisations from financial loss and legal action.
Some foundation model and generative AI organisations are offering degrees of indemnity from these risks. Adobe says it will cover costs associated with copyright claims arising from the use of its Firefly generative AI image model, because the company is confident the model is trained solely on licensed and authorised data that won’t produce copyright-infringing output.
Healthcare and disease detection
Peter Krensky, director analyst at Gartner, said: “As machine learning adoption continues to grow rapidly across industries, data is evolving from just focusing on predictive models, towards a more democratised, dynamic, and data-centric discipline. This is now also fuelled by the fervour around generative AI. While potential risks are emerging, so too are the many new capabilities and use cases for data scientists and their organisations.”
Caroline Carruthers, data expert and co-founder of global data consultancy Carruthers and Jackson, told Tech Monitor that synthetic data was an invaluable tool for training AI models, particularly where large datasets weren’t available. “It’s been used most effectively in the healthcare sector, where data on rare diseases has been supplemented by synthetic data to improve modelling of treatment options,” she said.
Carruthers said that while there is “clear value to expanding limited datasets with synthetic data, there are a number of risks”, including the possibility that biases prominent in a small dataset might be amplified by synthetic data generated from it. She added: “The bottom line is that synthetic data faces the same challenges as organic data when it comes to the need for governance and being vigilant about potential biases.”