Data is not equal to knowledge

Published in Manufacturing.net. Full article here

A common pitfall many machine learning (ML) companies run into is mistaking data for knowledge. Many enterprises assume that having a lot of data makes them ripe for instantly harvesting insights through AI and ML techniques. That is not entirely true.

Data is not equal to knowledge, or more precisely, not the knowledge you think it equals.

Ernesto Miguel, 47, is a plant operator at a leading cement company. He has spent the last three decades working in the same cement plant and knows each and every machine there intimately. From the sound a machine makes, he can tell what might be wrong. He is a champion at ensuring the machines operate at their highest efficiency.

Limiting bias and inexperience in the AI-powered factories of the future

This article originally appeared on TechTarget as an invited guest article.

The United Nations Sustainable Development Goals 8 and 9 are important in the context of Industry 4.0 and industrial IoT. SDG-8 calls for decent work and economic growth, while SDG-9 calls for innovation in industry and infrastructure. The purpose of the SDGs is to improve social conditions and advance humanity, and AI plays a critical role in accomplishing this. For instance, consider the innovation happening in the Industry 4.0 space, where AI systems are proving effective at preventing human error and improving efficiency. Case studies from early AI systems clearly demonstrate that AI can not only improve efficiency metrics like yield and throughput but also reduce material waste and harmful emissions. In these scenarios, AI creates a net gain for us as a society, improving human conditions.

AI can transform humanity by giving time back to humans to focus on more productive tasks. There are new skills to be learned, and it is clear that certain types of work will be displaced by new ones. For the sake of this article, let's assume that we are able to empower our current factory workers with new skills that keep them relevant and productive in the age of AI. If we do that, are we all set? Is that the only societal challenge we face in realizing the full potential of AI?

In an ideal world, AI systems would work seamlessly with humans to create factories of the future that are lean, efficient and environmentally friendly. But we are far from that ideal world, for two reasons: the inadequacy of the infrastructure currently present in industrial settings for collecting and providing accurate data, and algorithmic bias.

There are different ways of architecting AI systems. The most common is to model the behavior of the world through data and make decisions based on that realized model of the world. As you can see, this is problematic. What if the data is not accurate? What if we don't have enough data? What if our data only partially captures the world we want to model?

With the recent industrial IoT revolution came a surge of data available in factories. This opened the door to applying AI to factory operations. The challenge, however, is that the data is not ideal in several ways. Data collection processes were never optimized for a future AI application; rather, they were built for simple responsive actions and decision-making. This shows up when the data is used to create machine learning models for smart automation or predictive maintenance tools. Problems with the data include incorrect sample rates, compressed or lossy data, and incorrect readings from faulty or mechanically degraded sensors.
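
To make these failure modes concrete, here is a minimal sketch of the kinds of sanity checks industrial sensor data often fails. It assumes pandas, a hypothetical CSV export, and illustrative column names and thresholds; none of this is the actual plant data.

```python
import pandas as pd

# Hypothetical export of factory sensor readings with a nominal
# one-minute sample rate; column names are illustrative.
df = pd.read_csv("sensor_readings.csv",
                 parse_dates=["timestamp"], index_col="timestamp")

# 1. Irregular sampling: gaps in the timestamps mean missing data.
deltas = df.index.to_series().diff().dropna()
print(f"{(deltas > pd.Timedelta(minutes=5)).sum()} gaps over 5 minutes")

# 2. Flatlined sensors: a value that never changes usually means a
#    stuck or disconnected sensor, not a perfectly stable process.
stuck = [col for col in df.columns if df[col].nunique() <= 1]
print("possibly stuck sensors:", stuck)

# 3. Physically impossible readings (the bounds are assumptions here).
out_of_range = df[(df["cooler_temperature"] < 0) |
                  (df["cooler_temperature"] > 1500)]
print(f"{len(out_of_range)} out-of-range temperature readings")
```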

Algorithmic bias in AI, simply put, is a phenomenon where an AI deployment has a systematic error causing it to draw improper conclusions. This systematic error can creep in either because the data used to model and train the AI system is faulty, or because the engineers who created the algorithms had an incomplete or biased understanding of the world.

There have been several articles published about human bias contributing to biased AI systems, and there is well-documented evidence of AI systems showing biases in terms of political preferences, racial profiling and gender discrimination. However, in the context of Industry 4.0 applications, these human biases are as big a problem as data bias.

Going back to the SDG goals discussed above, we should aspire to improve human conditions by providing people with meaningful work. Let's take the example of Ernesto Miguel, who has worked at a cement factory as a plant operator for the last 30 years. Ernesto spends most of his time ensuring the equipment under his watch functions efficiently. Over the last three decades, he has formed an intimate bond with the machines in his factory. He has developed an extraordinary ability to predict what might be wrong with a machine just by hearing the sound it makes. He could do more, like training other workers to develop the same intuition. He wants to share his expertise, but unfortunately Ernesto spends most of his time reacting to equipment problems and preventing failures. This is a problem ripe for AI.

We deployed one of our AI systems to model a crucial piece of plant equipment, a cooler, in a cement factory. The idea was to learn how adequately we could model the equipment's behavior from two years' worth of time series data. The data provided a great deal of insight into how the cooler was operating. Using it, our engineers were able to identify correlations between the different inputs to the equipment and its corresponding operating conditions.
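
As a rough illustration of that first modeling step, the sketch below (reusing the `df` from the previous sketch) computes input-to-target correlations and fits a simple model. The column names, date window and choice of a plain linear model are assumptions made for this article, not the system we actually deployed.

```python
from sklearn.linear_model import LinearRegression

# Hypothetical split: inputs we can actuate versus the operating
# condition we want to keep in bounds.
features = ["fan_speed", "grate_speed", "clinker_feed_rate"]
target = "cooler_temperature"

train = df.loc["2018":"2019"]  # two years of historical readings

# Pairwise correlations between each input and the target.
print(train[features + [target]].corr()[target])

# A purely correlational model: "given these inputs, what temperature
# should we expect?" No physics, no operator context.
model = LinearRegression().fit(train[features], train[target])
print(dict(zip(features, model.coef_)))
```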

If this worked flawlessly, we would accomplish two goals: keep the equipment functioning optimally with smart AI systems, and free Ernesto to focus on more meaningful work, such as training other factory workers effectively.

Bias creeps in inadvertently when AI system designers confuse data with knowledge.

It was a big moment when the first AI system was deployed in the cement plant. We don't yet live in a world where we can trust machines completely, and for good reason. For that reason, a safety switch was included so the plant operator could intervene if something went wrong. The first exercise was to run the software overnight, with the AI system monitoring the cooler and responsible for keeping it within safe bounds. To everyone's delight, the system ran successfully overnight. But that joy was short-lived when the first weaknesses in the model started appearing.

The cooler temperature was increasing, and the model, having established a correlation between temperature and fan speed, kept increasing the fan speed. In the meantime, the back grate pressure rose above its safe value. But the model had identified no correlation between back grate pressure and temperature, so it saw no need to adjust the back grate pressure in pursuit of its objective of bringing down the cooler temperature. The plant operator overrode the controls and shut off the AI model.

An experienced plant operator would have immediately responded to the increasing back grate pressure, as it is detrimental to the cooler's operation. How did the AI model miss this?

In his 30 years, Ernesto never had to wait for the grate pressure to build up before reacting. He just knew when the pressure would build and proactively controlled the parameters to ensure it never crossed a safe bound. By merely looking at the data, there was no way for the AI engineers to determine this. The data alone, without context, would tell you that the grate pressure would never be a problem.
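
One way to guard against exactly this failure is to encode the operator's envelope explicitly, rather than hoping a model will learn limits from data that, by construction, never shows a violation. A minimal sketch, with made-up variable names, units and limits:

```python
# Operator knowledge encoded explicitly: Ernesto never let the back
# grate pressure approach its limit, so the historical data contains
# no evidence that it matters. The limits below are made up.
SAFE_BOUNDS = {
    "back_grate_pressure": (0.0, 4.5),
    "cooler_temperature": (80.0, 400.0),
}

def guard(state: dict, proposed_action: dict) -> dict:
    """Veto a model's proposed action whenever any monitored
    variable leaves its operator-defined safe envelope."""
    for var, (low, high) in SAFE_BOUNDS.items():
        if not low <= state[var] <= high:
            # Hand control back to the operator rather than let the
            # learned policy keep optimizing its single objective.
            raise RuntimeError(f"{var}={state[var]} outside safe bounds")
    return proposed_action
```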

Bias hurts AI systems in many ways. The biggest of all is that it takes trust away from these systems. On top of watching his workers and equipment, Ernesto now has to watch the AI models. He has to teach the system to do things differently, the system has to learn, and the next versions will improve. This will always be a problem when we model AI systems purely from data, because in industrial IoT settings that data will often be incomplete or inaccurate.

As technology builders, what does this mean for us? How do we realize the full potential of industrial AI systems? The answer lies in designing these systems with empathy and taking a thoughtful approach:

  • We cannot assume that data is a complete representation of the environment we are aspiring to model.

  • We need to spend time doing contextual inquiry (a semi-structured interview guided by questions, observations and follow-up questions while people work in their own environments) to understand the lives of the workers we are trying to empower with AI systems.

  • We need to assess all the possible scenarios that could occur in the problem we are trying to solve.

  • We need to always start with a semi-autonomous system and only transition to a fully autonomous one when we are confident of its performance in production environments (see the sketch after this list).

  • We should continually adapt and train models to learn about the environment we are operating in.
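
As one illustration of the semi-autonomous pattern from the list above, the sketch below only applies the model's suggestion once a human approves it, and records every disagreement as training material for the next model version. All names here are hypothetical.

```python
def semi_autonomous_step(model, state, operator_approves, log):
    """One control step in which the model only suggests and a human
    decides; `operator_approves` stands in for whatever interface the
    plant actually uses (HMI prompt, dashboard, and so on)."""
    suggestion = model.suggest(state)
    if operator_approves(state, suggestion):
        return suggestion                  # apply the model's setpoints
    # Disagreements are exactly the examples the next model version
    # should learn from (the "continually adapt" point above).
    log.append({"state": state, "rejected": suggestion})
    return state["current_setpoints"]      # keep the human's setpoints
```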

Bringing AI into factory settings is about more than just technology. It is about people. It is about acting with empathy and understanding toward the people whose lives the technology is going to touch.

On AI democratization

In June 1993, NCSA Mosaic was launched. One of the first graphical browsers, it was instrumental in popularizing the World Wide Web. It had a clean user interface and ran on Windows. It brought the power of the internet to the mainstream and truly became a killer application.

Browsers democratized the internet. What will democratize artificial intelligence (AI)?

Today, many companies are working on AI platforms. Several companies are claiming to (or wanting to) democratize AI. What does this mean?

The internet provides us with infrastructure to create and consume information. In doing so, it lets us collaborate and forge stronger communities. Browsers made this easier and brought the value of the internet to everyone with access to a computer. Democratization, in this context, is access to the internet. Another way to understand browsers is that they allowed the internet to be used effectively.

Following the same parallel, the first thing we need to understand is the purpose of AI. Why does AI exist, and what does it enable in this world that, if left untapped, would leave human potential unfulfilled?

A common understanding of that question might be the clue to how to democratize AI.

A textbook definition of AI is along the lines of creating agents that achieve their objectives by performing a sequence of actions or exploring a sequence of operations. Machine learning (ML), which is often confused with AI these days, is only one aspect of AI, in which data from the real world is used to train an AI system on some ground truth.
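
To make that textbook framing concrete, here is a toy sketch of an agent in that sense: it achieves a numeric objective by searching for a sequence of actions, with no learning from data involved. The problem and the actions are invented purely for illustration.

```python
from collections import deque

# Toy agent: reach a goal number from a start number using the
# actions "+1" and "*2", by searching over action sequences.
def plan(start: int, goal: int) -> list[str]:
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for name, nxt in (("+1", state + 1), ("*2", state * 2)):
            if nxt not in seen and nxt <= goal:
                seen.add(nxt)
                frontier.append((nxt, actions + [name]))
    return []

print(plan(3, 14))  # -> ['*2', '+1', '*2']
```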

ML has a clear purpose: advancing human decision-making based on prior evidence, or data. For this reason, ML platforms will continue to be successful. At some point in the not-too-distant future, we will see a platform that truly makes ML mainstream, much as NCSA Mosaic did for the World Wide Web. Some argue that current ML tools and frameworks have already brought ML into the mainstream. I don't think that is true. An ML platform that truly abstracts away the technicalities and focuses on a core human purpose will help democratize ML. A platform that truly understands and improves human productivity might be the killer app for ML.

What is the purpose of AI? Along the same lines, we can safely assume that, as with every technology, our intent is to advance the human race and elevate it to its full potential through AI. If ML gives us superhuman capabilities to observe the world and make decisions based on it, AI might leverage that learning to make decisions on our behalf.

That last point captures both the promise and the peril of AI. While the prospect of a system that observes the world and takes actions to fulfill our objectives is thrilling, it puts the onus on us to architect objectives that are aligned with human values and potential. It requires us to choose well and be aware of the implications of our choices.

Can an AI platform, then, essentially be a value framework that ensures we don't mess up? Can it be something that reminds us to construct objectives that are aligned with human values? Can there be a browser equivalent for an AI platform, one that lets people consume, create and collaborate on shared objectives that make us better human beings?

However, before such a product manifests, several things need to happen. We will have to put basic infrastructure in place to support the creation and growth of such AI systems in our society. Tactically, we might need to create easy ways to consume and contextualize any data we interact with, which will require standard interfaces. Essentially, we will have to develop protocols and a shared language around how we understand these systems. In the process, we will create and optimize a wide array of workflow tools that allow us to build ML algorithms without writing code. An interesting argument can be made here: if we truly mature in creating such ML frameworks and allow machines to design the right workflows and/or algorithms for solving an objective under reasonable constraints, we might be talking about the beginnings of a true AI system. Such a system would be able to identify a problem, explore data relevant to that problem, train itself in decision-making and make decisions.

That might be one path toward artificial general intelligence (AGI).

Special thanks to Eric Xing and Devin Sandberg for reading this article and providing feedback.