August 2, 2017

Five questions every business must ask itself before starting a machine learning project

Asking a few simple but important questions before beginning each machine learning project can help businesses avoid costly mistakes.

By James Nunns

Enterprises across all industries are now eager to experiment with machine learning and artificial intelligence.

Emeli Dral, Chief Data Scientist, Yandex Data Factory.

They build in-house data science departments, or invite external teams to help them solve real business issues through the smart use of data. Yet, while the projected commercial impact of these efforts is widely discussed, the practical challenges of applying these technologies to real businesses can often be overlooked.

Rather than reflecting machine learning’s unsuitability, this instead boils down to naivety. Too many businesses fail to learn from the mistakes of others – both beginners and experts – before implementing their machine learning projects.

Based on experience, here are the top five questions every business must ask itself when embarking on a machine learning project.


Does your problem statement meet the initial business goal?

At the very start of a machine learning project, businesses often have a clear idea about what they want to achieve, or at least think they do. Some businesses even know which machine learning techniques they want to use to meet this goal, and quickly propose the possible tasks. In some instances, such a strong vision can help to accelerate the project outcome. But in others, it can lead to the distortion of the initial business goal, setting the machine learning model to achieve results that don’t satisfy the original expectations.

Let’s say a retailer knows that it wants to increase revenue by developing a product recommender system. The logical assumption is that by crunching past data about shopper behaviour it can make personalised product recommendations, thus increasing the total number of purchases. The issue with this is that optimising the number of purchases doesn’t automatically optimise sales volume. And if you task the algorithm with exactly that, it may start, for example, frequently suggesting low-value products to customers.


Such recommendations are likely to be accepted, and the system will fulfil its purpose of increasing the average number of items in the cart, but ultimately lead to a drop in revenue. The trick lies in the correct choice of initial problem statement. Optimising sales volume would in fact require a different recommender system, with this metric incorporated as the target goal. It might end up, for example, encouraging customers to place occasional high-value orders rather than many lower-value ones, boosting revenue.
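The difference between the two problem statements can be sketched in a few lines. This is a minimal, hypothetical example – the products, probabilities and prices are invented for illustration – showing how the same candidate items rank differently when the objective is number of purchases versus expected revenue.

```python
# Hypothetical candidates: (product, estimated purchase probability, price in £).
# All figures are illustrative, not from the article.
candidates = [
    ("phone case", 0.30, 15.0),
    ("headphones", 0.10, 120.0),
    ("usb cable",  0.40, 8.0),
]

# Objective A: maximise the number of purchases -> rank by purchase probability.
by_purchases = sorted(candidates, key=lambda c: c[1], reverse=True)

# Objective B: maximise sales volume -> rank by expected revenue
# (probability x price), so higher-value items are favoured.
by_revenue = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)

print([c[0] for c in by_purchases])  # cheap, easy-to-sell items float to the top
print([c[0] for c in by_revenue])    # higher-value items rank first
```

The model machinery can be identical in both cases; only the target metric changes, which is exactly why the problem statement has to be pinned to the business goal before training starts.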

Businesses must remain focused on the initial business goal and allow this to guide both the problem statement and the model. Only then will a machine learning project achieve the intended results.


Does the model building process fully account for the way it will be used?

The way the machine learning model is trained and tested should be defined very carefully to ensure that it will work reliably in production. Unfortunately, some businesses fail to notice that their testing process doesn’t account for the real-life anomalies the model will encounter in production.

If a bank wants to optimise its ATM cash replenishment, it must know the precise cash demand. To do this, the bank may build a forecasting model that is first tested on historical data before being put into production. However, if the test data isn’t chosen carefully, a model whose quality seemed high enough during testing may decline dramatically in production. This is because testing a model on historical data from, say, August won’t take into account the dramatic changes in behaviour during the Christmas holidays. The takeaway: if you plan to use the model during “non-stationary” days, test it on a period that includes such real-life abnormalities as the December holidays.
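Choosing the evaluation window is a one-line decision that is easy to get wrong. The sketch below uses an invented demand history (dates and values are purely illustrative) to show the difference between backtesting on a quiet month and backtesting on a window that includes the December holidays.

```python
from datetime import date

# Hypothetical daily cash-demand history keyed by date; December demand is
# deliberately higher to mimic a holiday spike. Values are illustrative.
history = {date(2016, m, d): 100 + 40 * (m == 12)
           for m in range(1, 13) for d in (1, 15)}

def backtest_window(start, end):
    """Slice the history to the period used for evaluating the forecast."""
    return {d: v for d, v in history.items() if start <= d <= end}

# Evaluating only on a quiet month hides the seasonal spike...
august_only = backtest_window(date(2016, 8, 1), date(2016, 8, 31))

# ...so the window should also cover "non-stationary" days such as the
# December holidays.
with_holidays = backtest_window(date(2016, 11, 1), date(2016, 12, 31))
```

A model that looks accurate on `august_only` has simply never been asked the hard question; `with_holidays` forces it to prove itself on the abnormal days it will actually face.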

Incorporating the calendar holidays in your model looks easy enough, but what if your project is based on reacting to less predictable and more variable future events, such as weather?

Many businesses consider the impact of weather, among other data sources, as it can affect customer decisions – such as whether to buy ice cream or stay at home playing video games. During the development of a machine learning model, historical weather data can be used to teach the algorithm to respond to weather changes in the appropriate way.

However, when the model is in production, this becomes an issue, as reliable weather data is not available ahead of time. While weather forecasts are the next best thing, they are inappropriate for models trained on actual historical data – the latter is definite, while the former brings with it a degree of risk or probability, which will not have been factored in. The business must therefore incorporate uncertainty into the machine learning algorithm even at the training stage, so that the model treats both kinds of weather data consistently.


Are you accurately designing the experiments?

A/B testing is one of the most reliable ways to validate the effectiveness of a machine learning project and prove its value to the business. Sadly, too many businesses make the mistake of carrying out tests with unequal data splits, which obscures the model’s impact or faults. In A/B testing, the data should be split equally between a test group, which trials the new model, and a control group, which runs the old one for comparison. This is the only way to be sure that any impacts and effects are the result of the new model, not variations in the data.

For example, a manufacturer may want to test a new model that recommends optimal process parameters for steel production, such as temperature or the quantity of chemical ingredients. However, the manufacturer may be inclined to use expensive types of steel for the control split and a cheaper type for the test split, to reduce the potential economic loss if the experiment goes wrong. Unfortunately, this makes it impossible to measure the quality of the recommender system accurately. It is a basic requirement of experimentation that, to make any fair estimate of the effect of a change, all the remaining test conditions must stay the same.
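The fix is to let randomisation, not convenience, decide which units go to which group. This is a minimal sketch – the batch names are hypothetical – of an equal random split in which both groups end up with the same mix of conditions, such as steel grades.

```python
import random

def ab_split(units, seed=42):
    """Randomly assign units to equal-sized control and test groups.

    Shuffling before splitting means neither group is systematically
    cheaper, newer, or otherwise different from the other.
    """
    rng = random.Random(seed)      # seeded so the split is reproducible
    shuffled = units[:]            # copy: don't mutate the caller's list
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical production batches to be allocated between the two groups.
batches = [f"batch-{i}" for i in range(10)]
control_group, test_group = ab_split(batches)
```

Because every batch had the same chance of landing in either group, any measured difference between the groups can be attributed to the new model rather than to how the groups were chosen.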



Does your model fit the existing business process?

In addition to correctly defining the problem statement, the approach to model development and the experiment design, it is essential to consider how the model will fit within the existing business process.

For instance, a digital TV provider may want to fight customer churn by developing a churn prediction model. Assuming the business sells yearly subscriptions, it can list the customers whose subscriptions are about to expire each month. Clearly these customers are the best targets for any retention efforts – so where does the prediction model fit?

If this is a long list of customers and the customer service operation is small, the business may need a churn prediction model to detect those most at risk, so that it can prioritise outreach. However, if the company is big enough to contact each customer individually, and willing to invest the resources, a churn prediction model is unlikely to be the most effective addition to the existing process – it may not even be necessary.
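In the first case, the model’s job in the process is simply to rank the expiring customers so the team’s limited calls go to the riskiest ones. A minimal sketch, with invented customer names and churn scores standing in for a real model’s output:

```python
# Hypothetical churn scores for customers whose subscriptions expire this
# month; in practice a trained churn model would produce these.
churn_scores = {
    "alice": 0.92,
    "bob":   0.15,
    "carol": 0.67,
    "dave":  0.81,
    "erin":  0.40,
}

def prioritise(scores, capacity):
    """Return the customers the retention team should contact first,
    limited to the number of calls it can actually make."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:capacity]

# A small team can only reach 2 of the 5 expiring customers this month.
to_call = prioritise(churn_scores, capacity=2)
```

If `capacity` equals the size of the list – the second case in the text – the ranking changes nothing about who gets contacted, which is exactly why the model adds little value there.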

Before investing in a machine learning project, it is important that businesses have a clear understanding of how the model will be used. This will help them avoid investing heavily in projects that are impractical and therefore unachievable.


Are you making more money than you are spending?

An early assessment of the potential economic effect is a must for every machine learning project. To do this, the cost of model development should be carefully weighed against the expected gains – the choice of use case must be guided by its economic potential and the cost of implementation.

Knowing when to stop improving a once profitable model is also an important lesson to learn. Over time, it becomes harder to make continuous improvements and every percent of quality improvement begins to cost more than the previous one. To tackle this, businesses should regularly measure how much is gained from each improvement versus how much it is costing them. At the point at which further improvement starts to cost more than the value it brings, businesses should stop investing in this process.
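That stopping rule can be expressed directly: fund each successive round of improvement only while its marginal gain still exceeds its marginal cost. The figures below are invented to illustrate diminishing returns, not drawn from the article.

```python
def keep_improving(marginal_gains, marginal_costs):
    """Walk through successive improvement rounds and stop as soon as a
    round costs more than it returns. Returns the rounds worth funding."""
    funded = []
    for gain, cost in zip(marginal_gains, marginal_costs):
        if gain <= cost:
            break          # further improvement destroys value: stop here
        funded.append((gain, cost))
    return funded

# Hypothetical figures in £: each round yields less while costing more.
gains = [50_000, 20_000, 8_000, 3_000]
costs = [10_000, 12_000, 15_000, 18_000]
rounds = keep_improving(gains, costs)  # only the early rounds pay off
```

In this sketch the third round would cost £15,000 to gain £8,000, so investment stops after two rounds – the point the article describes, where each percent of quality improvement begins to cost more than it brings in.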

Machine learning’s potential is already established in many industries and for many use cases. At the same time, many individual projects continue to fail within businesses. To avoid such costly mistakes, it is essential to ask these simple but important questions before beginning each machine learning project.
