By Imène Brigui
As a consequence of hyperconnectivity, the tremendous growth of IoT and the expansion of mobile technology, data is becoming highly accessible to companies: large, multiform, multi-channel and available ever more quickly.
Paradoxically, this explosion of data, while supporting decision making at multiple levels, also adds complexity to it. Managers find themselves in an environment full of uncertainty, one requiring immediacy, flawless situational intelligence and highly flexible systems.
To handle this complex environment, companies need to push the boundaries of information systems by integrating highly cognitive capacities. More than ever, cognition is crucial to creating value. Reasoning on data allows us to capitalize on a rare and vital resource for companies: knowledge! If data only supports decision making by reacting to stimuli and responding to predefined patterns, it clearly won’t be capable of supporting learning and building the intelligence needed to make sense.
A company is clearly an intellectual place: we think, interpret, imagine, structure and therefore reason. We don’t just react. Managers must therefore have the ability to learn and unlearn quickly, and so must systems! More than ever, we need intelligent systems able to make sense with and for decision makers.
In order to provide systems with cognitive capacities, Artificial Intelligence represents a major asset. This contribution is neither binary nor uniform. To grasp it, four dimensions have to be explored: perception, learning, abstraction and reasoning.
What do the systems perceive?
This covers the perimeter of the environment of which the system is aware: not only the volume of data the system collects, but also the accuracy and quality of that data. A system that continuously perceives stimuli allows for greater proactivity and adaptability. Such a system is all the more intelligent if it is able to question its context and thus push the boundaries of its perception autonomously and dynamically.
Are they capable of learning?
This represents the capacity of a system to learn from the situations in which it has been used. Learning is fundamental for the system to capitalize on what has happened in the past in order to be more and more efficient in predicting and thus anticipating what is likely to happen in the future. Several types of learning can be distinguished, including statistical learning, which is guided by large volumes of data.
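As a purely illustrative sketch of statistical learning guided by past data, the snippet below fits a one-variable linear model from historical observations and uses it to anticipate the next value. The data and the forecasting scenario are invented for the example; real systems would of course learn from far larger and richer datasets.

```python
# Minimal sketch of statistical learning: capitalize on past observations
# to anticipate what is likely to happen next. All numbers are illustrative.

def fit_linear(xs, ys):
    """Least-squares fit y = a*x + b from historical (x, y) pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Demand observed over five past periods (invented figures).
periods = [1, 2, 3, 4, 5]
demand  = [10.0, 12.1, 13.9, 16.2, 18.0]

a, b = fit_linear(periods, demand)
forecast = a * 6 + b  # anticipate the next period from the learned trend
```

The more history the system accumulates, the better its fit, which is exactly the sense in which learning lets it capitalize on the past.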
Are they capable of abstraction?
This covers the system’s ability to deduce new concepts and to aggregate concepts for which it has not been explicitly programmed. In contrast to the more “classical” cases where the system applies rules or instructions to the facts it observes, abstraction is the process of observing facts and deriving new rules, approaches and thinking patterns from them.
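The contrast above can be made concrete with a toy example: instead of being handed a rule to apply, the system observes labelled facts and induces a rule itself (here, a single decision threshold). The readings and labels are invented for illustration.

```python
# Tiny illustration of abstraction: rather than applying a given rule to
# facts, the system observes facts and abstracts a rule from them.

def induce_threshold(values, labels):
    """Find the cutoff that best separates positive from negative facts."""
    best_t, best_correct = None, -1
    for t in sorted(set(values)):
        correct = sum((v >= t) == lab for v, lab in zip(values, labels))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Observed facts: sensor readings and whether an alert was warranted.
readings = [3, 5, 8, 11, 14]
alerts   = [False, False, False, True, True]

rule = induce_threshold(readings, alerts)
# Abstracted rule: "raise an alert when reading >= rule"
```

No threshold was ever programmed in; the rule is extracted from the observed facts, which is the essence of abstraction in this dimension.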
How do they reason?
This represents the degree of sophistication of the cognitive approaches used as well as the complexity of the concepts manipulated. Does the system reason on exclusively structured data? Semi-structured? Does it manipulate knowledge and logical rules? Is it based on strict or fuzzy concepts? Is it able to specify what information it lacks to perfectly structure a prediction or prescription?
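To illustrate the strict-versus-fuzzy distinction raised above, the sketch below evaluates the same fact under a strict concept (a hard cutoff) and a fuzzy one (a degree of membership). The thresholds and the linear membership shape are assumptions made for the example.

```python
# Illustrative sketch: one fact, two kinds of concepts.

def is_high_strict(temperature):
    """Strict concept: 'high' is a hard cutoff. The rule fires or it does not."""
    return temperature >= 30

def high_membership(temperature):
    """Fuzzy concept: the degree of 'high' rises linearly between 20 and 30."""
    if temperature <= 20:
        return 0.0
    if temperature >= 30:
        return 1.0
    return (temperature - 20) / 10

t = 27
strict = is_high_strict(t)   # False: under a strict concept, 27 is simply "not high"
fuzzy = high_membership(t)   # 0.7: under a fuzzy concept, 27 is "high to degree 0.7"
```

A system reasoning on fuzzy concepts can thus grade its conclusions, and flag borderline cases where it lacks information, rather than forcing every fact into a binary verdict.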
To make sense, we need to ask two basic questions. The first: is it meaningful, easy to understand? The second: is it reasonable, logical?
It is all about meaning and logic. The implementation and appropriation of AI techniques in companies today requires concrete action on meaningfulness and logic, applied both to AI paradigms and to their implementation.
Explain / Justify
Confidence comes from understanding how algorithms work. Managers need to grasp the reasoning logic of the systems they use in order to accept them knowingly and serenely.
Take the example of Machine Learning and Deep Learning techniques, famous for their predictive and classification performance in many successful applications: pattern recognition, medical diagnosis, real-time translation, etc. They particularly suffer from this lack of explainability. Users receive recommendations and/or predictions but do not understand the logical path by which the system arrived at its solution.
A certain lack of trust has been observed in recent years toward many AI techniques, especially regarding the handling of private data, black-box processing and biases of all kinds.
Indeed, for a serene adoption of these techniques, it is important to be aware of these shortcomings and to ensure greater transparency on both perception (the data captured) and reasoning (the why and how).
The ubiquity of AI in our professional and personal, public and private lives adds to the fear that things are getting out of hand, a fear intensified by the lack of knowledge about AI techniques. A first step would be to train people in these paradigms and help them understand them, without necessarily going into the purely technical aspects. Simplifying access to knowledge in this field is becoming a necessity. Hence, we need to control and monitor AI on one side, and train and sensitize humans on the other.
Ability to respond to a problem and not to a trend
Adopting AI is not just about overlaying algorithms on more or less well-formulated problems. Indeed, solving a problem inevitably requires a precise and enlightened definition of the problem itself.
We cannot claim to solve all the types of problems a company faces (and they are many and varied) by relying on the same miracle techniques. In-depth work is needed to define the actual needs and objectives, assess uncertainties, and audit the available data resources. This is crucial to selecting the appropriate technique(s) to deploy.
Putting human intelligence at the heart of systems
In addition to the training and awareness already mentioned, human intelligence must be put back at the heart of algorithms by ensuring a constant AI/Human co-construction. This requires a broadening of perception in order to better assimilate and capture human knowledge and expertise.
In addition, humans can reinforce learning through explicit feedback that brings logical rigor to classical learning techniques and thus supports cognitive capacities through the human dimension. This allows the development of a situational intelligence in which human inputs are fundamental.
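One minimal way to picture explicit human feedback is an online learner whose weights are repeatedly nudged toward the value an expert says is correct. The update rule, weights and target below are assumptions for illustration, not a specific production technique.

```python
# Hedged sketch of human/AI co-construction: an expert's explicit feedback
# repeatedly corrects a simple linear model. All values are illustrative.

def predict(w, x):
    """Linear model output for feature vector x under weights w."""
    return sum(wi * xi for wi, xi in zip(w, x))

def human_feedback_update(w, x, human_target, lr=0.1):
    """Nudge the weights toward the value the human expert says is right."""
    error = human_target - predict(w, x)
    return [wi + lr * error * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]        # the system starts knowing nothing
x = [1.0, 2.0]        # one observed situation
for _ in range(50):   # repeated expert corrections on that situation
    w = human_feedback_update(w, x, human_target=5.0)

corrected = predict(w, x)  # converges toward the expert's judgment
```

Each round of feedback shrinks the gap between the system's output and the expert's judgment, which is the co-construction loop in miniature.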
Hybridization creates value
AI will be resolutely hybrid. Hybrid thanks to the integration of paradigms that have been mutually exclusive until now. Many researchers suggest that a major challenge is to build bridges between systems that reason rigorously on rules and logic and statistical learning systems. Bridges and integrating logics must also be devised between human intelligence and artificial intelligence. The challenge, then, is to make different visions of artificial reasoning and human reasoning cohabit and interact to make sense.
About the Author
Imène Brigui, associate professor and researcher at emlyon business school, holds a PhD in Computer Science from Paris Dauphine University. Her research focuses on Artificial Intelligence, and in particular Intelligent Agents. Over more than 15 years of experience in research and teaching, she has been involved in several research projects and pedagogical responsibilities. She is also engaged in multiple AI and Data communities in France and abroad.