Building AI Applications – It’s More than Sticking AI in Front of It

Recent advances in machine learning and artificial intelligence (AI), together with broader technological improvements, have enabled a wider range of businesses to integrate AI into their offerings or their business processes.

As with any new technology, a number of established companies and start-ups tried the approach of “we’ll just stick AI in front of it”. Some succeeded, either by building a sustainable product or by convincing a VC or other funding body to back their idea. One example of “just adding some smart technology” recently gained some fame on the internet: Juicero, a Silicon Valley-based start-up that received $120 million to develop and sell a press that turns pre-packed fresh fruit and vegetables into juice. From a nutritional perspective, this approach is already dubious (eating the fruit and vegetables adds the benefit of fibre), but it does cater to a current lifestyle trend. Consumers, however, first need to shell out $400 for the device and lock themselves into a subscription model to receive new bags of ingredients. The press itself is connected to the internet, but, as Bloomberg recently reported, the machine doesn’t do anything you couldn’t do with your own hands, and in about the same time. Convenience is certainly a factor, especially for cafes and restaurants, but the commercial product is significantly more expensive than the consumer one.

Just trying to be innovative by integrating some sort of “smart technology” doesn’t address a real need that consumers or enterprises have. AI isn’t a magic silver bullet that solves – well, what? The key question is what to use AI for. The recent comic “Here to Help” at xkcd.com illustrates the point rather well. AI, machine learning or other algorithmic approaches can help solve problems that cannot be overcome otherwise, but this requires a detailed understanding of the overall context and of how the application will either address an existing need or enable novel ideas or strategies.

Building an AI-based application means addressing the following points:

  • Validated use case and business case — Unless it’s part of a dedicated R&D effort, an interesting use case does not necessarily qualify for a go-ahead if there is no existing or future market, or no customers who would benefit from the application. This does not mean that one should be overly cautious in exploring new opportunities, but there should be clear potential for the product or service. Reid Hoffman, the co-founder of LinkedIn, famously said: “If you are not embarrassed by the first version of your product, you’ve launched too late.” Still, the first product should not be an unusable mess without benefit, but rather a “minimum viable product” that offers a clear advantage and can be improved quickly.
  • What is the measurable or tangible impact of the application? How is “success” defined?
  • Are sufficient data available and accessible that can be used to develop and train AI systems? Are the data of sufficient quality?
  • Is there a good understanding of the specific use case, domain or vertical?
  • Has something like this been done before? If so, how is this application different?
  • What is the USP – the unique selling point no other application can deliver?
  • AI and engineering expertise, operational excellence — AI-based applications aren’t written and operated by lone unicorns, but are shaped by the intense teamwork of excellent specialists who complement each other (see my previous blog).
  • How do the end-users of the system interact with the application? How will their life or job change?
  • For enterprise applications: How are decisions handled at the moment, and how will this change once the AI-based application is integrated into the daily business?
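The point about data availability and quality can be made concrete with a small sketch. The function below is a hypothetical, minimal data audit (the field names, thresholds, and toy records are illustrative assumptions, not part of any particular product): before investing in model development, check that enough records exist and that the fields the model would rely on are actually populated.

```python
# Minimal, illustrative data-audit sketch: check row count and
# per-field missing-value ratios before committing to model training.
# Thresholds and field names are assumptions chosen for the example.

def audit_records(records, required_fields, min_rows=1000, max_missing_ratio=0.05):
    """Return (ok, report) for a list of dict-like records."""
    report = {"rows": len(records), "missing_ratio": {}}
    ok = len(records) >= min_rows
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) in (None, ""))
        ratio = missing / len(records) if records else 1.0
        report["missing_ratio"][field] = ratio
        if ratio > max_missing_ratio:
            ok = False  # too many gaps in a field the model depends on
    return ok, report

# Toy example: 10 sales records, one with a missing price.
data = [{"store": "A", "price": 1.0} for _ in range(9)] + [{"store": "A", "price": None}]
ok, report = audit_records(data, ["store", "price"], min_rows=10, max_missing_ratio=0.05)
print(ok, report["missing_ratio"]["price"])  # price is 10% missing, so the audit fails
```

In a real project such checks would of course go much further (distributions, duplicates, label quality, drift over time), but even a crude audit like this catches the most common reason AI projects stall before the first model is trained.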

Admittedly, the above is a long list of points to consider when building an AI-powered application. But taking shortcuts may fuel a short-term hype, and will not lay the groundwork for a lasting and measurable impact on business performance.

Ulrich Kerzel earned his PhD under Professor Dr Feindt at the US Fermi National Laboratory, making a considerable contribution to the core technology of NeuroBayes at that time. After his PhD, he went to the University of Cambridge, where he was a Senior Research Fellow at Magdalene College. His research focused on complex statistical analyses to understand the origin of matter and antimatter using data from the LHCb experiment at the Large Hadron Collider at CERN, the world’s biggest research institute for particle physics. He continued this work as a Research Fellow at CERN before joining Blue Yonder as a Principal Data Scientist.
