Deep Learning for Enterprise Business Applications

The broad theoretical foundations of Deep Learning were laid in the seventies and eighties and evolved through the nineties and early 2000s. However, it was only in the early part of this decade that Deep Learning began to emerge from the confines of academia and research labs into the world of enterprise adoption.

The scale of this adoption has increased significantly in the past three years or so. There are several reasons for this: the increasing availability of high-end engineering and AI talent, advances in algorithm development, the evolution and maturing of advanced Neural Networks, the development of HPC (High-Performance Computing) and GPU-based systems, the arrival of production-grade frameworks (such as TensorFlow and Keras), and others. Deep Learning has become almost synonymous with Artificial Intelligence today, at least in the public vocabulary, and attracts significant investment, focus and effort globally.

However, most of the attention that Deep Learning receives today relates to consumer applications or use cases that the general public finds interesting, such as autonomous driving, games and facial recognition. A large part of the public (and even corporate) discourse is hype, both positive and negative. The application of Deep Learning in enterprise business applications, and the real benefits it delivers there, does not get the attention it deserves. This paper is an attempt to address that gap.


Key Use Cases

1. Automated Data Pre-Processing

Many enterprise business applications, particularly planning, analytics, reporting and big data systems, need to pre-process different types of structured and unstructured data. This pre-processing work is often manual and highly time-consuming. The problem has never received the attention it deserves, except from those involved in data engineering, analytics or data science. Some companies have attempted to solve it, but mainly through conventional software engineering, and the results have been limited.

Deep Learning is probably the only technology available today that can deal with this adequately. Let us look at two key aspects of this major business challenge.

The first aspect of this challenge is data clean-up, especially where it involves missing values, duplicates, noise (e.g. outliers or random errors), variable distributions, and inconsistent data. Current practice is to handle this through a combination of manual effort, traditional modeling techniques, scripting and semi-automated batch jobs. However, this approach is time-consuming, difficult to scale beyond a point, and limited in accuracy. A hybrid neural network composed of Stacked Denoising Autoencoders and Bi-Directional Recurrent Neural Networks (RNNs) can offer a much better solution. Initial model development may take time because of the complex architecture, but once the models are built, they yield substantial long-term benefits.
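To make this concrete, here is a minimal sketch of the denoising-autoencoder half of such a hybrid, written in Keras. It learns to reconstruct clean records from deliberately corrupted ones, which is the core idea behind using it for clean-up. The data, corruption scheme and layer sizes are illustrative assumptions; a production system would stack and pre-train several such autoencoders and pair them with a bi-directional RNN for sequential fields.

```python
# A minimal denoising-autoencoder sketch in Keras (illustrative only).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 20  # width of one tabular record (assumed)
x_clean = np.random.rand(5000, n_features).astype("float32")  # stand-in data

# Corrupt inputs by randomly zeroing ~20% of values, mimicking missing fields.
mask = np.random.rand(*x_clean.shape) < 0.2
x_noisy = np.where(mask, 0.0, x_clean).astype("float32")

autoencoder = keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(64, activation="relu"),    # encoder
    layers.Dense(16, activation="relu"),    # bottleneck
    layers.Dense(64, activation="relu"),    # decoder
    layers.Dense(n_features, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")

# Train the network to recover the clean record from its corrupted version.
autoencoder.fit(x_noisy, x_clean, epochs=10, batch_size=128, verbose=0)

# At inference time, raw records are passed through to impute/clean them.
x_repaired = autoencoder.predict(x_noisy[:10])
```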

The second aspect of this challenge involves data wrangling, blending, integration, transformation and reduction. These processes often overlap, but their shared objective is to integrate data from multiple sources and convert it into forms better suited to data mining and downstream processing. Multiple approaches, such as data/database modeling, data science algorithms, and engineering-based solutions like ETL (Extract-Transform-Load) pipelines, are used in practice today. However, most of these approaches rest on traditional techniques. ETL pipelines, for example, have served well for several decades, particularly for structured data, but they are non-cognitive in nature, carry the limitations of batch jobs, and often add to the complexity of a technical solution.

Deep Learning has the potential to disrupt how this overall problem is framed and addressed. A good example is building Deep Natural Language Processing (NLP) models around a hybrid network of Long Short-Term Memory (LSTM) networks and Encoder-Decoder networks, which could serve as a gradual but long-term replacement for ETL pipelines.
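As a rough illustration, the sketch below wires up the skeleton of such an LSTM encoder-decoder in Keras: the encoder reads a source record as a token sequence, and the decoder emits the transformed record token by token. The vocabulary sizes and dimensions are placeholder assumptions; a real system would be trained on historical (source, target) record pairs, for example ones produced by existing ETL jobs.

```python
# A minimal LSTM encoder-decoder skeleton in Keras (illustrative only).
from tensorflow import keras
from tensorflow.keras import layers

src_vocab, tgt_vocab, latent_dim = 1000, 1000, 128  # assumed sizes

# Encoder: reads the source record and summarizes it into its final states.
enc_inputs = keras.Input(shape=(None,), dtype="int32")
enc_emb = layers.Embedding(src_vocab, latent_dim)(enc_inputs)
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(enc_emb)

# Decoder: generates the transformed record conditioned on the encoder state.
dec_inputs = keras.Input(shape=(None,), dtype="int32")
dec_emb = layers.Embedding(tgt_vocab, latent_dim)(dec_inputs)
dec_out, _, _ = layers.LSTM(
    latent_dim, return_sequences=True, return_state=True
)(dec_emb, initial_state=[state_h, state_c])
dec_probs = layers.Dense(tgt_vocab, activation="softmax")(dec_out)

model = keras.Model([enc_inputs, dec_inputs], dec_probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Training would use teacher forcing on (source, shifted-target) pairs:
# model.fit([src_tokens, tgt_tokens_in], tgt_tokens_out, ...)
```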

It is generally observed that when machine learning techniques are used for data pre-processing, downstream processing becomes more effective and more scalable.

2. Intelligent Disaster Recovery

One of the most critical areas where Deep Learning can have a tremendous impact is in Disaster Recovery (DR) Planning and Execution of enterprise business applications.

Most business applications still have old-school DR features today, especially in two key areas:

  • Firstly, there is a general lack of ‘intelligent’ scenario modeling, where the system studies past outages (both specific to a particular company and across the industry), builds potential disaster scenarios, and recommends suitable options for each scenario.

Stochastic simulation techniques, in conjunction with Deep Recommender Engines, can be used to address this. Where there are practical limitations to developing fully stochastic models (e.g. high computational cost), a variant of Monte Carlo simulation or some form of stochastic-deterministic hybrid simulation can be used, as in the sketch below.
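A minimal Monte Carlo sketch, assuming Poisson outage arrivals and lognormal recovery times (both illustrative choices), might look like the following. It estimates the distribution of annual downtime under two hypothetical DR options so that scenarios can be compared.

```python
# Monte Carlo estimation of annual downtime (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_downtime(outages_per_year, typical_recovery_hours,
                             n_runs=20_000):
    """Estimate the distribution of total downtime hours per year."""
    n_outages = rng.poisson(outages_per_year, size=n_runs)
    totals = np.zeros(n_runs)
    for i, k in enumerate(n_outages):
        if k:
            # Lognormal recovery times centered on the typical value
            # (sigma of 0.5 is an assumed spread).
            totals[i] = rng.lognormal(np.log(typical_recovery_hours), 0.5, k).sum()
    return totals

# Compare two hypothetical DR options.
baseline = simulate_annual_downtime(outages_per_year=4, typical_recovery_hours=6)
with_dr = simulate_annual_downtime(outages_per_year=4, typical_recovery_hours=1)

for name, d in [("baseline", baseline), ("with DR", with_dr)]:
    print(f"{name}: mean={d.mean():.1f}h/yr, 95th pct={np.percentile(d, 95):.1f}h/yr")
```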

  • Secondly, though many enterprise applications offer automated switchover features, these are generally rules-based and non-cognitive. The rules for failing over to the DR environment (or failing back to the primary environment) are set in advance and executed only when specific conditions are met. Production systems, however, are dynamic in nature, and non-linear effects need to be considered; pre-defined rules are not adequate in such scenarios.

Deep Reinforcement Learning models, based on Q-learning or policy-based methods, can build agents that adapt their actions in real time as the environment changes from one time step to the next. These can be used to build self-learning systems that automate the entire failover and failback exercise in a cognitive manner, as sketched below.
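For illustration, here is a minimal tabular Q-learning sketch of such a failover agent. The three-state health model, the action set and the reward scheme are toy assumptions; a real agent would learn from live telemetry over a far richer state space, typically with a deep network approximating the Q-function.

```python
# A toy tabular Q-learning failover agent (illustrative assumptions only).
import numpy as np

STATES = ["healthy", "degraded", "down"]    # primary-site health (assumed)
ACTIONS = ["stay", "failover", "failback"]  # the agent's choices (assumed)
Q = np.zeros((len(STATES), len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.95, 0.1      # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(state, action):
    """Toy environment: heavy penalty for staying down, small cost to switch."""
    reward = -10.0 if (state == 2 and action == 0) else -1.0 * (action != 0)
    # Failing over restores health; otherwise health drifts stochastically.
    next_state = 0 if action == 1 else int(rng.choice(3, p=[0.90, 0.08, 0.02]))
    return next_state, reward

state = 0
for _ in range(10_000):
    # Epsilon-greedy action selection.
    action = int(rng.integers(3)) if rng.random() < epsilon else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Q-learning update: move Q(s, a) toward reward + discounted best next value.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.round(2))  # learned action values per state
```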

3. Automated Application Software Delivery

Continuous application software delivery is a key aspect of modern enterprise applications. It refers to an automated approach to creating new application instances and pushing regular updates and patches to those instances with minimal downtime. It is typically delivered by DevOps, Apps IT or Infrastructure Engineering organizations, and is often based on the ‘Infrastructure-as-Code’ practice.

Several proprietary and open-source automation platforms are available today to achieve this continuous delivery. However, these platforms are largely engineering-based, with no (or very few) cognitive components. As a result, automation goes only so far, and the platforms still need considerable human intervention.

Deep Learning can greatly extend such automation. Deep Stochastic Prescriptive Models, often in conjunction with Deep Reinforcement Learning models, can automatically assess the need for new server or application instances, check the availability of new patches and updates, determine optimal configurations for new deployments, and execute various types of workloads with minimal or zero downtime. Optimization techniques such as Mixed-Integer Linear Programming and Evolutionary Algorithms, applied alongside these models, deliver strong results; one slice of the problem is sketched below.
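As one concrete slice of the problem, the sketch below uses the open-source PuLP library to express a small Mixed-Integer Linear Program: choosing an instance mix whose capacity covers a forecast demand at minimum cost. The instance sizes, prices and demand figure are illustrative assumptions; in practice the demand forecast would come from the prescriptive models described above.

```python
# A small MILP for instance-mix selection using PuLP (illustrative only).
from pulp import LpInteger, LpMinimize, LpProblem, LpVariable, lpSum, value

# (vCPUs, cost per hour) for each instance size -- assumed figures.
sizes = {"small": (2, 0.05), "medium": (4, 0.09), "large": (8, 0.16)}
demand_vcpus = 37  # forecast demand, e.g. from a predictive model (assumed)

prob = LpProblem("instance_mix", LpMinimize)
n = {s: LpVariable(f"n_{s}", lowBound=0, cat=LpInteger) for s in sizes}

# Objective: minimize total hourly cost.
prob += lpSum(n[s] * sizes[s][1] for s in sizes)
# Constraint: provisioned vCPUs must cover the forecast demand.
prob += lpSum(n[s] * sizes[s][0] for s in sizes) >= demand_vcpus

prob.solve()
print({s: int(n[s].value()) for s in sizes}, "cost/h =", value(prob.objective))
```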

4. Cognitive Change Management

Change Management is a key part of all business applications. However, most enterprise applications provide only static, workflow-based change entry, approval and tracking features. This usually involves a user interface where the proposed change is entered, passed through an approval chain, and deployed after final approval. Deep Learning can add the much-needed ‘cognitive element’ to Change Management.

Candidate change transactions can be predicted from past changes and the current state of the application, and created automatically in the workflow. This can be achieved through Predictive and Prescriptive models built on Deep Recurrent Neural Networks (RNNs). At each level of the workflow, suggestions to approve (or reject) can be surfaced to approvers through Natural Language Generation (NLG) and Deep Recommender models. The predictive piece is sketched below.
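A minimal sketch of that predictive piece, assuming each historical change is encoded as a fixed-width feature vector (an illustrative simplification), could be an LSTM that reads the recent change history and scores whether a newly proposed change is likely to be approved:

```python
# An LSTM-based approval-likelihood scorer in Keras (illustrative only).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

seq_len, n_features = 30, 12  # last 30 changes, 12 features each (assumed)
x = np.random.rand(2000, seq_len, n_features).astype("float32")  # stand-in history
y = np.random.randint(0, 2, size=(2000, 1))                      # stand-in labels

model = keras.Sequential([
    layers.Input(shape=(seq_len, n_features)),
    layers.LSTM(32),                        # summarize the change history
    layers.Dense(1, activation="sigmoid"),  # P(change is approved)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=64, verbose=0)

# The score can be surfaced to approvers at each workflow level.
print("approval likelihood:", float(model.predict(x[:1])[0, 0]))
```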

Closing Comments

Can we address all of these problem areas through Deep Learning today? Of course not. Some can be addressed completely, some only partly and some not at all. Moreover, Deep Learning solutions involve considerable R&D and experimentation, demand advanced capabilities, and their results often take time to show.

The critical thing is to re-frame the current problem statements, evaluate honestly how well traditional approaches have worked, leverage advances in Artificial Intelligence and other technologies, and innovate toward orbit-changing disruptions and large-scale improvements. The initial pace of development is likely to be slow, but as expertise grows and the models mature, the long-term results are likely to be path-breaking.
