
How to Get More ROI—Faster—From Machine Learning

Find out how to harness machine learning and AI to contain costs, increase revenue, and grow your organization’s customer base.

July 12, 2021
Getting More ROI from Machine Learning

In 2016, Gartner’s Hype Cycle rated machine learning and AI “the most disruptive class of technologies.” Companies were quick to incorporate the promising capabilities into their advanced analytics efforts.
 
But the technologies haven’t delivered real value for most organizations.
 
Senior executives tell McKinsey they’re “eking out small gains from a few use cases” and “failing to embed analytics into all areas of [their] organizations.” And Gartner has since estimated that over 80% of machine learning projects fail to reach production.
 
Why have machine learning and AI failed to deliver?
 
Two barriers holding back your analytics
 
The traditional approach to analytics—per-application custom data feeds, multiple data copies, and implementation cycles of nine months or more for a single model—doesn’t work for analytics at scale.
 
To succeed at scale, enterprise analytics programs need to overcome the two largest barriers to ROI: scale of analytics and scale of data.
 
Barrier #1: Scale of analytics
 
Machine learning algorithms perform best when given tasks that are discrete and specific. As the size of the problem space becomes larger and more complex, single models fail to perform. 
 
Case in point: A child’s scooter, a wheelchair, and a city bus all have wheels, but each operates very differently. A self-driving car needs to understand the differences between these “vehicles”—while also knowing how to detect and respond to red lights, stop signs, and other traffic signs. A single-model approach can’t manage that complexity at scale.
 
The solution? Break the problem space into small units and deploy machine learning at the lowest level possible. Future-ready businesses will need hundreds of thousands—even millions—of algorithms working together. In some cases that will mean hyper-segmentation: training a separate algorithm against each customer’s experience and data, rather than a single algorithm against all customers.
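To make hyper-segmentation concrete, here is a minimal sketch in Python, assuming customer history in a pandas DataFrame and scikit-learn models; the column names (customer_id, spend, tenure_days, churned) and the 50-row threshold are hypothetical. It trains one small model per customer where enough history exists and falls back to a shared global model otherwise.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    MIN_ROWS = 50  # hypothetical threshold: below this, use the global model

    def train_models(history: pd.DataFrame):
        """Fit one global model plus a small model per customer with enough data."""
        features = ["spend", "tenure_days"]
        global_model = LogisticRegression().fit(history[features], history["churned"])

        per_customer = {}
        for customer_id, rows in history.groupby("customer_id"):
            # A per-customer model needs enough rows and both outcomes present.
            if len(rows) >= MIN_ROWS and rows["churned"].nunique() > 1:
                per_customer[customer_id] = LogisticRegression().fit(
                    rows[features], rows["churned"]
                )
        return global_model, per_customer

    def churn_probability(customer_id, X: pd.DataFrame, global_model, per_customer):
        """Score new rows with the customer's own model if one exists."""
        model = per_customer.get(customer_id, global_model)
        return model.predict_proba(X)[:, 1]

The same pattern extends upward: segment by store, product line, or region when per-customer data is too sparse.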
 
Barrier #2: Scale of data 
 
Why does Google always—and eerily—know what you’re about to ask it? 
 
Data.
 
Google has amassed trillions of observations from billions of daily searches—plus millions of data points from interactions with individual users. For machine learning to perform well, enterprises must use all their data. That includes data from across the enterprise, across products, across channels, and from third-party data vendors. 
 
However, we don’t just need more data. We need more data in context—cataloguing it by party or organization, by network, by time, and with geospatial and biometric overlays. Pennies, pounds, euros, and rupees are all names for money, but we don’t want machine learning to try to understand each currency. We want it to understand credit risk, predict the probability of a large loss, or determine optimal inventory levels.
 
Data in context also needs to be relatively clean, so it can be well understood by all users—including the analytics algorithms, end users, auditors, and senior executives—or, in the worst case, opposing counsel.
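As a rough illustration of what data in context can look like, here is a small sketch in Python with pandas; the column names and the exchange-rate table are hypothetical. Raw transactions are tagged with party, time, and geography, and amounts are normalized to a single unit, so a downstream model sees exposure rather than pennies, pounds, euros, and rupees.

    import pandas as pd

    TO_USD = {"GBP": 1.27, "EUR": 1.09, "INR": 0.012, "USD": 1.0}  # illustrative rates

    transactions = pd.DataFrame({
        "party_id":  ["p1", "p1", "p2"],
        "amount":    [250.0, 40.0, 9000.0],
        "currency":  ["GBP", "EUR", "INR"],
        "timestamp": pd.to_datetime(["2021-06-01", "2021-06-03", "2021-06-02"]),
        "country":   ["GB", "DE", "IN"],
    })

    # Normalize to a common unit and add the contextual keys the whole
    # enterprise can share: party, calendar month, geography.
    transactions["amount_usd"] = transactions["amount"] * transactions["currency"].map(TO_USD)
    transactions["month"] = transactions["timestamp"].dt.to_period("M")

    # One reusable, in-context view: exposure per party, per month, per country.
    exposure = (
        transactions.groupby(["party_id", "month", "country"])["amount_usd"]
        .sum()
        .reset_index(name="exposure_usd")
    )
    print(exposure)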
 
Building reuse, flexibility, and ROI into analytics at scale
 
Consider this:
 

  • Data processing accounts for 80% of any given project’s time expenditure
  • Close to 65% of the processed data can be shared, even across use cases that are only remotely similar 
  • Leveraging this data can save organizations hundreds of thousands of hours 

 
But traditional solutions require a time-intensive process of copying and moving data to each application.
 
That’s why scaling analytics isn’t just a matter of investing more money in analytics but of investing in the right data management design. To be future-ready, organizations require a connected multi-cloud data platform to cut through complexity and deliver useful, actionable answers to any problem.
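Here is a small sketch of that reuse principle, assuming pandas and scikit-learn with hypothetical column names: the expensive preparation step happens once, and the resulting table feeds two different use cases instead of each application keeping its own copy and pipeline.

    import pandas as pd
    from sklearn.linear_model import LinearRegression, LogisticRegression

    # Hypothetical raw data; in practice this comes from the shared data platform.
    raw_data = pd.DataFrame({
        "spend":                [120.0, 80.0, 300.0, 45.0],
        "tenure_days":          [400, 30, 900, 60],
        "churned":              [0, 1, 0, 1],
        "next_quarter_revenue": [150.0, 0.0, 320.0, 10.0],
    })

    def build_shared_features(raw: pd.DataFrame) -> pd.DataFrame:
        """The expensive preparation step, done once and reused by every use case."""
        out = raw.dropna(subset=["spend", "tenure_days"]).copy()
        out["spend_per_day"] = out["spend"] / out["tenure_days"].clip(lower=1)
        return out

    shared = build_shared_features(raw_data)
    features = ["spend", "spend_per_day"]

    # Use case 1: churn classification, built from the shared table.
    churn_model = LogisticRegression().fit(shared[features], shared["churned"])

    # Use case 2: revenue regression, reusing the same table with no extra copy.
    revenue_model = LinearRegression().fit(shared[features], shared["next_quarter_revenue"])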
 
Find out how to harness machine learning and AI to contain costs, increase revenue, and grow your organization’s customer base. Sign up for our Analytics 1-2-3 webinar.

About Chris Hillman

Chris Hillman is the Senior Director, AI/ML for the International region. He has been responsible for developing and articulating the Teradata Analytics 1-2-3 strategy and for supporting the direction and development of ClearScape Analytics. Before his current role, Chris led the International Data Science Practice and worked on a large number of AI projects across the region, focusing on generating measurable ROI from analytics in production at scale using Teradata, open-source, and other vendor technologies. Chris speaks regularly at leading conferences, including Strata, Gartner Analytics, O’Reilly AI, and Hadoop World. He also helped establish the Art of Analytics practice, which promotes the value of striking visualisations that draw people into data science projects while retaining a solid business-outcome foundation.

