ChatGPT and Manufacturing: How Generative AI Will Change Industrial Applications

POSTED 05/08/2023  | By: Nick Cravotta, A3 Contributing Editor

Generative AI promises to be one of the most disruptive technologies in recent decades.

Consider how, with just a prompt, ChatGPT is capable of writing everything from poems to complex papers on nearly every topic. Even more astonishing, AI tools like MidJourney can create complex images mimicking a famous artist’s style from just a handful of words.

These platforms are raising concerns that they could replace not just writers, artists, and musicians, but even engineers. After all, ChatGPT doesn’t just write essays; it can produce code in a variety of programming languages. There are also questions of trust: can we have faith in these systems’ accuracy and ethics?

But ChatGPT, large language models, and other generative AI platforms have the ability to change how we interact with technology and provide innovative solutions to our most pressing problems. Can AI help us find new cures for diseases, cut back on our waste and energy usage, and free us from many of the mundane routines that absorb our daily work lives?

Generative AI will certainly have an impact in the factory and other industrial operations. It could change the way we dig into our data. Imagine being able to talk to your digital twin the same way you ask questions of a colleague. What if you could ask an AI to take the first pass at designing a robotic cell for your factory? This is just some of the potential that AI can unlock in manufacturing.

A (Very) Brief Overview of Generative AI

According to ChatGPT,[1] “AI refers to the broad field of creating machines that can perform tasks that typically require human intelligence…Generative AI is a subset of AI that focuses on creating machines that can generate original and creative output…”

As such, generative AI isn’t a new technology. Eliza, one of the predecessors of ChatGPT, was an early natural language processing program developed at MIT by Joseph Weizenbaum between 1964 and 1966. Just type a question, and Eliza would provide an answer.

From an engineering perspective, computers have been generating code from higher-level descriptions since their inception. A compiler, for example, takes words and converts them into bits a machine can understand. However, it was when Texas Instruments came out with the first C compiler for a digital signal processor (DSP) that programmers saw a major shift in how they wrote code.

The complexity of a DSP’s internal pipeline dependencies reached a point where a C compiler could consistently create more efficient code than a person could and in much less time. There are many other examples of generative AI in engineering.

The important point here is what makes generative AI a big deal today. Suddenly, computers went from assisting people in creating written copy, images, and code to seemingly doing almost the entire job. This has led to a major shift in the creative process from telling a computer “how” to do something to prompting it with “what” you want done. Instead of writing the actual words, you can tell ChatGPT what kind of paper you want written. Instead of drawing lines, you can tell MidJourney what you want an image to look like.

And soon you might be able to tell a robot or piece of machinery what you want it to make instead of having to program each step in the design process. Well, almost.


AI, in many ways, is about uncovering patterns and repeating them. So while ChatGPT can write a poem, ChatGPT doesn’t actually understand the poem. Nor does MidJourney know what it is drawing. This is one of the major limitations of generative AI: the platform doesn’t know what it has created or whether what it created is accurate or true.

Take the case of CNET using generative AI to write articles. Several of the articles created using AI required substantial corrections. The problem here is that CNET published the articles without first reviewing them. And this is the foundation of using generative AI, Rule #1 if you will: don’t just use whatever a generative AI outputs. You need a person, often an expert, to review the output and confirm it is adequate for its intended purpose.

One way to think of it is to assume generative AI can get a good part of the job done, but perhaps not all of it. Whether the AI gets you 60% or 85% or even 99% of the way there, depending upon the application, you’ll still want to confirm and verify the output yourself. For example, if the output is code, a programmer needs to perform this review.

This means humans aren’t going to be replaced any time soon. At the same time, however, AI is already significantly changing how we create, design, and work.

The Coevolution of Humans and AI

Holger Kenn, director of Business Strategy for AI and Emerging Technologies at Microsoft, talks about “the coevolution between what AI is doing and what is considered a human ability. As AI technology progresses forward, this in turn changes our way of thinking about what people can do.”

To put this in context, digital artists used to create images by building up shapes or manipulating other images. With MidJourney, digital artists can now begin creating an image by “drawing” with a prompt, describing the image in words. This is similar to what an art director does. An art director tells the artist to create an image and the artist creates it.

With generative AI, an artist initially takes on the role of an art director by telling the AI what to create (via the prompt). Then the artist completes the piece.

We can expect similar changes to the process of industrial manufacturing.

Generative AI in Industrial Manufacturing

Consider the design of a robot. The lead engineer defines what the robot needs to do and the engineering team brings that definition into reality.

With generative AI, however, rather than beginning with low-level design details, the team will be able to start designing by telling the AI what the robot should do. The AI will then provide a base design which the team will first verify, then build on, to create the final robot.

We aren’t there yet. This type of generative robot cell design is still a ways off, but design tools are quickly evolving in a way that will allow engineers to work at a higher level while the AI works out the lower-level details.

Microsoft’s Kenn believes that the first major areas that generative AI will impact design are user interfaces and code generation. One of the factors that has contributed to the sudden rise of generative AI technology is natural language processing. In simple terms, natural language processing allows a person to interface with a computer using full sentences, just like they were talking to another person.

Consider the use of digital twins today. Digital twins are virtual models of your production line or your plant – perhaps even your whole supply chain – that mirror how these processes work in the real world.

“Technologies such as digital twins and time-series databases represent the structure and storage of data from manufacturing systems,” Kenn said. “They [are] quite useful when you know the domain in which they are structured.”

Today, if you are looking to extract data from your digital twin – or use it to test out a new process – you’ll need an expert to interact with the model for you. It’s kind of like creating a web page. If you don’t understand the tools, you have to have a web expert take your content and create the page for you.

“With generative AI,” says Kenn, “non-experts will be able to interact with digital twins using natural language.” In other words, non-experts will be able to tell the AI what they want to learn, similar to how they would tell an expert. “There are many advantages here.

“Because users will be able to talk naturally, the experience will be more interactive and intuitive. They’ll also be able to iterate much faster.”
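To make the idea concrete, here is a minimal sketch in which all names and data are hypothetical: a natural-language front end routes a plain-English question to structured digital-twin queries. A production system would use a language model for intent parsing; simple keyword matching stands in for it here.

```python
# Illustrative only: a toy digital twin plus a keyword-based
# stand-in for the natural-language layer Kenn describes.

class DigitalTwin:
    """Minimal stand-in for a factory digital twin (hypothetical data)."""
    def __init__(self):
        self.stations = {
            "press_01": {"utilization": 0.82, "status": "running"},
            "weld_02": {"utilization": 0.45, "status": "idle"},
        }

    def utilization(self, station):
        return self.stations[station]["utilization"]

    def status(self, station):
        return self.stations[station]["status"]

def answer(twin, question):
    """Route a natural-language question to the matching twin query."""
    q = question.lower()
    station = next((s for s in twin.stations if s in q), None)
    if station is None:
        return "Which station do you mean?"
    if "utilization" in q:
        return f"{station} utilization is {twin.utilization(station):.0%}"
    if "status" in q or "running" in q:
        return f"{station} is {twin.status(station)}"
    return "I can report utilization or status."

twin = DigitalTwin()
print(answer(twin, "What is the utilization of press_01?"))
# → press_01 utilization is 82%
```

The point of the separation is the one Kenn makes later in the article: the language layer (here, `answer`) stays independent of the information model (here, `DigitalTwin`), so either can improve without rewriting the other.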

This last advantage is an important one. Design often requires iteration, where each round of design provides insights that impact the next iteration. Developing an iteration can be expensive, in terms of cost and turnaround time.

Imagine iterating a factory layout. It takes time for a person to complete each layout, then evaluate it and decide upon the next iteration. Teams may also be limited in the number of personnel or hours they can dedicate to a particular project.

With generative AI, iteration can be accelerated significantly. A person describes what is required, and the AI creates a digital twin, potentially several, each focused on optimizing a different factor. The twins can be evaluated. You can then tell the AI how to improve or modify each twin.

The process is similar to how a lead engineer would direct a team to design the next iteration. The difference is the AI can be faster, explore more variations, and not get annoyed that you want to make yet another iteration.

Code generation is an important capability that will not just assist designers in building better systems but also help maintain existing systems and reduce a company’s technical debt. Code generation turns a human language description into a code asset.

“Today, you only employ someone to implement a digital twin when you can justify it with a high ROI,” Kenn says. “With AI, you can write code for assets no one is writing code for anymore. Imagine trying to find someone who can write a driver for a 30-year-old Modbus implementation used by some of your equipment.”
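Kenn’s Modbus example hints at the kind of low-level glue code involved. As an illustration of the frame-level detail such a driver deals with, here is a short Python sketch that builds a Modbus/TCP “Read Holding Registers” request following the published protocol layout; it only constructs the bytes (no network I/O), and the unit and register values are arbitrary examples.

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP 'Read Holding Registers' (function 0x03) request.

    Frame layout (big-endian, per the Modbus TCP spec):
      MBAP header: transaction id (2B), protocol id (2B, always 0),
                   length (2B, count of bytes that follow), unit id (1B)
      PDU:         function code (1B), start address (2B), register count (2B)
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Example request: unit 0x11, 3 registers starting at address 0x006B
frame = modbus_read_holding_registers(1, 0x11, 0x006B, 3)
print(frame.hex())  # → 0001000000061103006b0003
```

This is exactly the kind of tedious, datasheet-driven detail that is hard to staff for a single legacy machine but cheap for a generative tool to draft – with a person then verifying the frames against the equipment’s documentation, per Rule #1.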

Kenn sees this as a key advantage for companies. “AI isn’t about replacing programmers,” he says. “It’s more about bringing more assets into the system, assets that no one can justify from an ROI perspective.”

Kenn believes that initially, people will need to be fairly detailed when telling the AI what code to write. However, over time we’ll be able to feed the AI datasheets and other documents to provide the data needed to allow the AI to handle most of these details itself.

Code generation is progressing at a speed similar to text and image generation. Microsoft 365 Copilot is changing how people in business work and interact, even as Copilot is helping engineers program industrial systems.

Refinement and Discernment

Again, it’s worth noting that in both these use cases, generative AI can likely provide a good result but not one you can immediately take to production. It’s the flexibility and speed of an AI-enhanced process that enables the human team to reach a better result faster.

Another way to think of this: Generative AI will allow engineers to focus on refining designs rather than creating them from scratch. These engineers have the expertise to determine what is useful and correct – and what is not.

“The models today do not have a built-in understanding of the real world,” Kenn says. “There are certain things humans understand, like the passing of time, that seem trivial to us. A generative AI model will only know of these concepts as patterns. They know numbers, but they don’t know addition.”

Over time, AI will learn to refine its own output, providing better results. For example, earlier versions of MidJourney sometimes drew people with too many teeth. Current models have refined their output to correct this. Teeth were identified as a problem, and an expert model was created to address the issue.

We’ll see such evolving refinement over time in factory automation. For example, terms like “rack” and “controller” have different meanings, depending upon the context. Rather than create a new overall model for each context, the main AI model could reach out to an expert model that provides the necessary context.

This “expert model” concept extends to even more complex interactions.

“Consider asking an AI, ‘What is the current utilization of my factory?’ This question doesn’t have an inherent answer. So the AI needs to reach out to a digital twin to get the answer,” Kenn says.

Kenn believes this is a powerful tool for enabling AI models to generalize more easily. “What’s learned in one domain can be used in another.” Companies will be able to work from a foundational model, one that supports natural language processing for an interface, and then pulls in the domain expertise they need through expert models.

Here’s a simple example of this in action. Imagine you have a broken part with a 3-pin connector you need to replace. Finding it in a catalog could be difficult if you don’t already know what it is. Now imagine if you could draw the part or take a picture and have the AI find the part for you. It’s like having an assistant to do the dull and tedious parts of the job.

Managing the Adoption of AI

Traditionally, industrial manufacturers tend to be slow to adopt new automation technology. While AI is interesting, they prefer to let the major players adopt it and prove it out. When they do bring in AI, they introduce it with a top-down approach. Chances are, however, individuals within the company have discovered the value generative AI brings to problem solving. And many of them will have already started using it in the workplace.

“We’ve seen this before. For example, people bought IBM PCs to run spreadsheets because they didn’t have access to the company mainframe,” Kenn says. “And companies will discover their employees are using AI even though they haven’t been given the tools officially. The most important question to ask is whether you’ll forbid this or embrace it.”

Generative AI can bring tremendous value to a company. But it has its drawbacks as well for which you’ll need to prepare. Here are some things to watch out for:

  • The Computer is Always Right: Some people treat computer output as truth. But while AI can solve many problems, it isn’t always correct. For example, ChatGPT was trained on data from the Internet. Any errors – and the Internet is riddled with errors – are part of ChatGPT and potentially part of its output. So, if someone posted “2+2=5”, even as a joke, you might get this result instead of “2+2=4”. Users need to be aware that the output is good but not perfect.
  • The Queen is Alive and Well: She is if you ask ChatGPT 3.0, because she hadn’t passed away yet when the model was created. Models that are fixed and static in time run the risk of losing their relevancy. For example, if equipment is moved on the factory floor, the digital twin will be inaccurate until it is updated. You’ll need a plan to keep critical models up to date, which might include using emerging model-as-a-service capabilities.
  • It was Working…: At the same time, you don’t want models changing every day. That introduces a whole other set of potential problems, such as something that was working suddenly breaking down.
  • Language is Not Your Data: “The power of AI models is not in the information they contain. Rather, it is the capability to access this information through language,” Kenn says. By keeping the language model separate from the information models it accesses, you can continuously improve how easily users work with the AI.
  • What I meant was…: Industrial systems will take advantage of models built on technical databases. This means that certain terms will have contextual meanings that differ from their natural language meanings. An embedding is an information-dense representation of the semantic meaning of a piece of text. In short, embeddings provide a powerful way to expand a natural language model into a specific industrial application context.
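The embedding idea can be sketched with toy numbers. Below, two made-up three-dimensional vectors stand in for embeddings of two senses of “rack” (real models produce hundreds of dimensions), and cosine similarity picks the sense closest to the query’s embedding. Everything here is illustrative, not output from any real embedding model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy vectors standing in for real embeddings; values are made up.
senses = {
    "rack (server enclosure)":  [0.9, 0.1, 0.1],
    "rack (gear rack, motion)": [0.1, 0.9, 0.2],
}
# Hypothetical embedding of "rack" used in an IT-infrastructure sentence
query = [0.85, 0.15, 0.1]

best = max(senses, key=lambda s: cosine(query, senses[s]))
print(best)  # → rack (server enclosure)
```

In an industrial deployment, the same nearest-neighbor comparison is what lets a general language model resolve “rack” or “controller” against the meanings in a company’s own technical database rather than their everyday meanings.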

The Future of Industrial Manufacturing

There is great opportunity for AI and other innovation in industrial manufacturing. A primary barrier to adoption lies in monetizing such innovation. Says Microsoft’s Kenn, “Robots will eventually take over the factory, but they haven’t because building working robots is still extremely hard. The benefits of new technology must justify the investment. With generative AI, we could see significantly lower investment for new innovation.”

One area where generative AI has much to offer is in brownfield deployments, an area typically rife with technical debt. Consider the difficulty of connecting an old machine driven by paper tape to the factory network so its operation can be automated, monitored, and optimized. The effort to have a person create software and hardware to supplant a single paper tape reader likely isn’t worth the cost.

However, using generative AI to convert the data on the paper tape into Python code and feed it to the machine over a network connection is already on the horizon. In this way, manufacturers will have a low-cost and low-risk way to update and modernize equipment, even if it’s just for a single station.

“What people need to remember is to use generative AI responsibly,” says Kenn. “You can’t just take the output and run your factory on it. You still have to assess whether the source is both valid and true. This is nothing new. People take risks all the time when they use a search engine to search for data and use it. You have to do the same when using generative AI.”

From this perspective, generative AI will not replace people. Rather, it enhances their capabilities, enabling them to do more, faster – and to be more creative and innovative.

According to ChatGPT, “Innovation is the process of creating new ideas, products, or processes that are novel, useful, and valuable.” Certainly, generative AI will come up with lots of good, useful ideas. But how can a system that does not understand what it creates assess whether a particular idea is novel, useful, and valuable? And how do you determine which of these myriad ideas – while perhaps technically feasible – simply aren’t worth acting on?

Bottom line: We still need our experts to understand what generative AI cannot.

[1] Prompt: “What’s the difference between AI and generative AI?”