Manufacturing is among the few domains in which “AI” can yield tangible, indisputable results. No vanity metrics. Real results. Less downtime. Less scrap. Higher yield. Faster changeovers. More predictable output. Lower energy cost per unit. Steadier schedules. Less last-minute firefighting.
It’s also one of the easiest places to bleed money on AI.
That’s because factories aren’t data labs. They are cyber-physical systems. Machines drift. Sensors fail. Operators improvise. Materials change. Constraints collide. An “accurate” dashboard model can still be useless if it doesn’t mesh with how production teams actually make decisions on a Tuesday at 7:00 a.m., when a line is down and orders are late.
So this is an article for the builders and buyers who need the real playbook: the top AI solutions that actually work and consistently deliver value in manufacturing, and what you need to know—technically and operationally—to deploy them without unleashing “smart chaos.”
I’ll dive deep into three clusters because they are at the heart of manufacturing AI value: predictive maintenance, quality control, and production planning. But I will also cover the adjacent layers you must consider: interoperability, data standards, energy and sustainability, and OT cybersecurity risk. In a traditional factory these are treated as separate subjects. In practice, they’re the same system.
Before we get underway, two facts set the tone for everything that follows.
First, manufacturing performance isn’t measured by “AI accuracy.” It is assessed through manufacturing KPIs: OEE, availability, performance rate, quality rate, throughput, cycle time, scrap, rework, schedule adherence, and unplanned downtime. ISO even standardizes manufacturing operations KPIs in ISO 22400, which provides increasingly common definitions for MOM/MES contexts.
Second, manufacturing AI lives or dies on integration. Your AI solution needs to read data from machines and write decisions back into the systems that people actually use (MES/MOM, CMMS/EAM, SCADA, ERP), or it’s just a PowerPoint exercise. Standards such as ISA-95 (IEC 62264) exist specifically to describe the enterprise-to-control integration layers and the information models used to exchange data. At the industrial data level, OPC UA (IEC 62541) is increasingly cited as the standard for interoperable, secure exchange of automation data.
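If you want the arithmetic concrete, here is a minimal sketch of the classic OEE calculation (availability × performance × quality). The input values are illustrative, and ISO 22400 is where the normative definitions live.

```python
# Minimal OEE sketch in the spirit of ISO 22400-style definitions.
# Inputs and field names are illustrative, not the normative formulas.

def oee(planned_time_min, downtime_min, ideal_cycle_time_s, total_count, good_count):
    """Classic OEE = availability x performance x quality."""
    run_time_min = planned_time_min - downtime_min
    availability = run_time_min / planned_time_min
    # Performance: how fast we actually ran vs. the ideal rate.
    performance = (ideal_cycle_time_s * total_count) / (run_time_min * 60)
    quality = good_count / total_count
    return availability * performance * quality, (availability, performance, quality)

score, (a, p, q) = oee(planned_time_min=480, downtime_min=45,
                       ideal_cycle_time_s=30, total_count=820, good_count=790)
print(f"OEE={score:.2%} (A={a:.2%}, P={p:.2%}, Q={q:.2%})")
```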
With that framework in mind, let’s look at the ten solutions.
Predictive maintenance (PdM) is the most well-known AI-based use case in manufacturing, but it is also the one most misinterpreted.
The naive version is “predict failure.” The real one is: “predict risk early enough to act, in a way that decreases total cost and doesn’t stop production.” That is what PdM needs to be: a decision system, not just a model.
An actual PdM program is typically established on a small subset of high-value assets. Bottleneck machines that halt the entire line. Equipment with costly failure modes. Machines with longer lead time parts. Assets that produce safety risk upon failure. You then instrument those assets with condition data that really correlates with failure. Vibration, temperature, current draw, pressure, acoustic signals, lubrication analysis, motor signatures, and event logs from PLCs. This is also where a lot of teams stumble: they try to do PdM with whatever data is most readily available, not with the data that contains signal.
There are two general approaches to modeling. One is supervised prediction, where you use past failure labels to predict time-to-failure or failure probability. The other is anomaly detection: you learn “normal” patterns and flag deviations. In practice, the vast majority of plants begin with anomaly detection, since failure labels tend to be sparse, noisy, or not uniformly recorded. Maintenance records might say a “bearing was replaced,” but not whether it was actually failing or was swapped during a scheduled shutdown. Your model is only as good as your labels, and manufacturing labels are frequently weak.
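Because labels are sparse, a realistic starting point is an unsupervised baseline on condition features. Here is a minimal sketch, assuming scikit-learn is available; the feature names and synthetic “healthy” data stand in for real historian queries.

```python
# Minimal anomaly-detection sketch for condition monitoring.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Stand-in for historical "healthy" windows: vibration RMS, bearing temp, current draw.
healthy = pd.DataFrame({
    "vib_rms_mm_s": rng.normal(2.0, 0.3, 5000),
    "bearing_temp_c": rng.normal(55.0, 3.0, 5000),
    "motor_current_a": rng.normal(12.0, 0.8, 5000),
})

model = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
model.fit(healthy)

# Score a new window; lower (negative) scores mean "less like normal".
new_window = pd.DataFrame([{"vib_rms_mm_s": 3.4, "bearing_temp_c": 68.0,
                            "motor_current_a": 14.5}])
print(model.decision_function(new_window))
```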
This is where asset management discipline becomes part and parcel of AI success. If your plant already operates under a well-defined asset management system (rigorous asset hierarchies, standardized work order coding, standardized failure mode taxonomies, etc.), PdM is far easier. ISO 55001 is the asset management system standard for demonstrating that asset decisions are aligned with organizational objectives and that this alignment is maintained throughout the asset lifecycle. It is not an “AI standard,” but it is the type of operational infrastructure that makes AI outputs useful.
So what does real-world PdM at scale look like? It tends to be condition monitoring with decision workflows on top. Schaeffler, for example, describes OPTIME condition monitoring as based on vibration and temperature signal analysis for predictive maintenance. That hints at the pragmatic pattern: you don’t need one magical model; you need good sensing, repeatable signal processing, and a workflow that converts signals into actions.
And now for the tough question: what do you do when risk increases? PdM does not add value until it alters decisions. That involves connecting the PdM system to your CMMS / EAM so it can generate a recommended work order, recommend a maintenance window, suggest spare parts, and trigger a safety check if necessary. If your PdM tool just emails somebody a chart, you’re not doing PdM. You’re doing analytics.
Finally, PdM has to be watched like any production system. Sensors drift. “Normal” changes after a retrofit or process change. A model calibrated for one operating regime can be wrong in another. So you want monitoring of input distributions, alert volumes, false alarm rates, missed failure rates, and the impact on downtime.
The manufacturing truth is straightforward: the best PdM is dull. It is not flashing lights and sirens. It quietly reduces unplanned downtime and raises the predictability of maintenance.
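A minimal sketch of the input-distribution check, assuming SciPy; the synthetic “reference vs. this week” data and the threshold are placeholders for real historian queries and plant-specific tuning.

```python
# Minimal input-drift check: compare recent sensor readings against the
# reference window the model was trained on.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
reference = rng.normal(2.0, 0.3, 10_000)   # vibration RMS at training time
this_week = rng.normal(2.4, 0.35, 2_000)   # post-retrofit readings

stat, p_value = ks_2samp(reference, this_week)
if p_value < 0.01:
    print(f"Input drift detected (KS={stat:.3f}); "
          "review the 'normal' baseline and alert thresholds.")
```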
Predictive maintenance tells you that risk is rising. Prescriptive maintenance tells you what to do about it.
This is also where AI projects frequently stop short. They assess risk and then hand the decision to humans without context. But the true cost of failures in manufacturing is not just the fixing. It is the production disruption. The overtime. The expediting. The shipments that didn’t go out. The quality drift that follows a stressed restart. Therefore, the best systems integrate health predictions with production constraints and suggest a minimally disruptive schedule.
A prescriptive maintenance system usually involves three ingredients.
The first ingredient is a health model: a risk score or an estimate of remaining useful life for specific components. The second is a model of production constraints: when can the machine be stopped, what downtime windows are available, which other machines can absorb the load, which orders are urgent, and what changeovers are scheduled? The third is an optimization layer: select the intervention plan that minimizes expected total cost under those constraints.
This is why manufacturing AI is not just ML. It is ML plus optimization. If you do only ML, you will often recommend maintenance at the “optimal” time for the machine and the worst time for the plant.
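A toy illustration of that ML-plus-optimization point: pick the downtime window with the lowest expected total cost. The hazard curve, costs, and candidate windows are all made up for the sketch; a real system would pull them from the health model, the schedule, and the CMMS.

```python
# Toy prescriptive-maintenance sketch: choose the downtime window that
# minimizes expected total cost. All numbers are illustrative.

FAILURE_COST = 50_000          # unplanned breakdown: repair + lost production
candidate_windows = [
    # (day offset, production disruption cost of taking this window)
    (1, 12_000),   # stop a hot line tomorrow
    (4, 6_000),    # partial shift coverage on day 4
    (7, 1_500),    # planned changeover on day 7
]

def failure_prob_by(day, daily_hazard=0.03):
    """Probability the asset fails before 'day' (constant-hazard stand-in)."""
    return 1 - (1 - daily_hazard) ** day

def expected_cost(day, disruption_cost):
    return failure_prob_by(day) * FAILURE_COST + disruption_cost

for day, cost in candidate_windows:
    print(f"day {day}: expected cost = {expected_cost(day, cost):,.0f}")

best_day, _ = min(candidate_windows, key=lambda w: expected_cost(*w))
print(f"recommended window: day {best_day}")
```

Notice that the cheapest machine-centric answer (“stop it tomorrow”) is not the cheapest plant-centric answer once disruption cost is in the objective.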
This is also where spare parts become a first-class decision. Maintenance scheduling without parts scheduling is a mirage. If your model indicates a failure in two weeks and the part lead time is six weeks, you still have a problem. Consequently, prescriptive systems also interface with MRO inventory planning and can recommend ordering early once risk crosses a threshold.
Your metric here should shift from “did the model predict failure” to “did the system reduce unplanned downtime and total maintenance cost without harming throughput?” In ISO 22400 terms, this shows up as increased availability, reduced downtime loss, higher OEE, and better schedule adherence.
Prescriptive maintenance’s most prevalent failure mode is to overlook the human element of workflow. Maintenance planners and production supervisors are already negotiating daily downtime windows. If your system produces a plan that cannot be negotiated or does not justify its rationale, the system will be ignored. Explainability in manufacturing isn’t a philosophical thing. It’s practical: “What signal triggered this?” “How confident are we?” “Which failure mode do we suspect?” “What is the recommended window?” and “What is the consequence we expect if we delay?”
A good prescriptive system doesn’t battle planners. It offers them better choices than they had before.
The second pillar of manufacturing AI value is quality inspection. And it has one big advantage over many AI projects: it is usually quantifiable in a short time.
When you catch defects earlier, you reduce rework, scrap, warranty claims, and customer dissatisfaction. You also often reduce the amount of time humans spend staring at parts, which reduces fatigue-related misses.
But the real problem in manufacturing is that manual inspection doesn’t scale with complexity. With increasing product variety and tighter tolerances, humans struggle to consistently detect subtle defects. Traditional rule-based machine vision also struggles: it needs heavy hand tuning and breaks under changes in lighting or product variants.
Deep-learning-based vision, by contrast, can learn defect patterns from labeled images. In some situations it can go beyond identifying defects and spot the patterns that lead to defects before they become visible.
A very tangible, believable example is the AI-based quality check at the BMW Group Plant Regensburg, developed with the startup Datagon AI. BMW called it “AI as a quality booster” and said it was intended to support quality inspection in production. BMW also highlights its broader “Vision and Sound Analytics” work on AWS, processing images and audio at massive scale for AI innovation. NVIDIA’s BMW case study likewise presents real-time, inline defect detection with computer vision models as part of production optimization, emphasizing the “milliseconds” nature of inline decisions.
These sources converge on the same base architecture. You take pictures (or video) at fixed points on the line. You run inference at the edge for low latency so you can decide before the part moves on. You keep a subset of images for traceability, model improvement, and root-cause analysis. You integrate with MES so that defect events are associated with serial numbers or batch IDs. You then feed that back into process improvement.
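A minimal sketch of that loop. The capture, inference, and MES calls (capture_frame, run_model, publish_to_mes) are hypothetical placeholders; real deployments use the plant’s camera SDK, an edge inference runtime, and the MES/MOM integration layer.

```python
# Sketch of an inline-inspection loop with MES write-back.
import json
from datetime import datetime, timezone

DEFECT_THRESHOLD = 0.85   # illustrative confidence cutoff

def capture_frame(station_id):          # placeholder for the camera SDK
    return {"station": station_id, "pixels": b"..."}

def run_model(frame):                   # placeholder for edge inference
    return {"defect": "scratch", "confidence": 0.91}

def publish_to_mes(event):              # placeholder for MES/MOM integration
    print(json.dumps(event))

def inspect(station_id, serial_number):
    frame = capture_frame(station_id)
    result = run_model(frame)
    if result["confidence"] >= DEFECT_THRESHOLD:
        publish_to_mes({
            "type": "defect_detected",
            "serial": serial_number,
            "station": station_id,
            "defect_class": result["defect"],
            "confidence": result["confidence"],
            "ts": datetime.now(timezone.utc).isoformat(),
        })

inspect("paintshop-cam-03", serial_number="SN-000123")
```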
Now the hard facts.
First, AI for inspection is not “train once and forget.” Defects change. Suppliers change. Lighting changes. Camera lenses get dusty. A model that works in a pilot can drift gradually in production until it becomes so noisy that people stop trusting it. So you need constant monitoring of false positives and false negatives, and a fast loop for updating the model with new samples.
Second, quality inspection systems break down under a poor data labeling scheme. You need a precise definition of what a defect is and what an acceptable variation is. There are plenty of borderline cases in manufacturing where the decision depends on context. If you label inconsistently, the model learns confusion. The remedy is stricter labeling guidelines, multi-reviewer labeling on borderline cases, and a feedback loop from downstream quality outcomes.
Third, you need to manage the “false reject storm.” Models that produce too many false positives can clog the line, overwhelm inspectors, and slow production. So deployment is usually phased: run the system in shadow mode and compare it to human inspection, then run it as an assistive system, and finally as an automatic reject trigger, often with confidence thresholds and human review for borderline cases.
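One simple way to encode that phased rollout is an explicit disposition function with confidence thresholds. The thresholds and mode names here are illustrative; in practice they are tuned per defect class and reviewed with quality engineering.

```python
# Sketch of confidence-threshold gating for a phased rollout: auto-reject only
# on high confidence, route borderline cases to a human, pass the rest.
AUTO_REJECT_AT = 0.95
HUMAN_REVIEW_AT = 0.60

def disposition(defect_confidence: float, mode: str = "assist") -> str:
    if mode == "shadow":                                     # log only, never act
        return "log_only"
    if mode == "auto" and defect_confidence >= AUTO_REJECT_AT:
        return "auto_reject"
    if defect_confidence >= HUMAN_REVIEW_AT:
        return "human_review"
    return "pass"

for conf in (0.98, 0.75, 0.40):
    print(conf, disposition(conf, mode="auto"))
```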
Quality AI is successful only when it is integrated into the process control and not a one-off camera project.
Vision inspection tells you that a defect occurred. Predictive quality is an attempt to tell you that a defect is going to occur.
This is where manufacturing AI becomes truly strategic, because it shifts quality from detection to prevention.
Predictive quality models leverage process signals to predict the probability of defects prior to part completion. In discrete manufacturing, for example, torque curves, temperature profiles, press force profiles, vibration signatures, robot path deviations, and signals from tool wear are used. In process industries, they could be flow rates, pressures, chemical composition, pH, and temperature gradients. In electronics, it could be solder paste volumes, placement deviations, oven temperature profiles, and AOI measurements.
The output is not “this part is defective.” The output is “this process is drifting into a risk zone.” That’s fundamentally different because it lets you make corrections. Change a tool. Recalibrate a station. Adjust a recipe. Stop the line before you make 2,000 bad units.
The largest technical challenge here is causal confusion. Manufacturing data is riddled with correlated variables. When the defect rate increases, a lot of variables can change in tandem. A naive model will latch onto the wrong proxy. You want careful feature engineering, domain constraints, and often causal reasoning or controlled experiments to discover true drivers.
This is also where traceability comes into play. If you can’t link a defect to the specific machine parameters, tool ID, material batch, operator shift, and environmental conditions at the time of occurrence, your root-cause analysis will be guesswork. The best predictive quality systems treat genealogy and context as first-class data, not optional metadata.
This is also where standards-driven integration comes in again. ISA-95 (IEC 62264) defines how schedule, performance, and quality information is exchanged between enterprise and manufacturing operations management across defined layers. If your data model follows those models, it’s easier to associate quality events with their production context.
With real plant data, predictive quality is most effective when applied within traditional SPC thinking. You don’t throw away control charts. You upgrade them. You keep human-readable control limits and add model-driven risk signals that identify multivariate drift sooner than univariate charts can.
The common failure mechanism is clear: models that are too “black box” for process engineers to trust. A good predictive quality system surfaces the top contributors and how they compare to historical normal ranges. It also enables “what if” scenarios: if we bring parameter A back into range, what happens to predicted defect risk?
This is where quality teams stop reacting and begin controlling quality upstream.
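A minimal sketch of what “multivariate drift sooner than univariate charts” can look like, in the spirit of Hotelling’s T²: flag a point that is jointly unusual even though each variable stays inside its own limits. The data, covariance, and control limit are illustrative.

```python
# Minimal multivariate drift check inspired by Hotelling's T^2.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
# Reference window of correlated process signals: press force, temperature, torque.
ref = rng.multivariate_normal([100, 180, 45],
                              [[4, 3, 0], [3, 9, 0], [0, 0, 1]], size=5000)
mean = ref.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))

def t_squared(x):
    d = x - mean
    return float(d @ cov_inv @ d)

limit = chi2.ppf(0.99, df=3)             # approximate control limit
sample = np.array([103.5, 174.0, 45.5])  # each variable individually "in spec"
ts = t_squared(sample)
print(f"T2={ts:.1f}, limit={limit:.1f} ->", "drift" if ts > limit else "ok")
```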
Some of the most costly defects are invisible.
A bearing beginning to fail, a motor pulling abnormal current, a gearbox with early wear, a weld that sounds different, a leak that begins small, a press that “feels” wrong, a fan that’s a little unbalanced. Many of these problems are first detected by people via sound and feel rather than images.
Today’s factories can encode that intuition with multimodal sensing: acoustic sensors, vibration sensors, thermal cameras, and current sensors. Models then learn the patterns associated with specific failure modes or quality drift.
BMW’s “Vision and Sound Analytics” work is a useful public signal that large manufacturers treat audio as a legitimate AI data stream, not a gimmick. The more fundamental point is that audio and vibration are high-signal sources for rotating machinery and certain process anomalies. They also frequently catch problems sooner than a simple threshold alarm, because the pattern changes before the absolute value crosses a limit.
From a technical standpoint, these systems frequently use frequency-domain features, time-frequency representations, and learned embeddings. They can be trained as classifiers when events are labeled or as anomaly detectors when labels are scarce.
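A minimal sketch of one such frequency-domain feature: the share of spectral energy in a bearing-related band, computed with NumPy on a synthetic signal. The sample rate, band edges, and the injected tone are all illustrative.

```python
# Band-energy feature from a vibration signal.
import numpy as np

fs = 10_000                                   # Hz, sensor sample rate
t = np.arange(0, 1.0, 1 / fs)
# Synthetic signal: line frequency + an emerging 987 Hz bearing tone + noise.
signal = (np.sin(2 * np.pi * 50 * t)
          + 0.15 * np.sin(2 * np.pi * 987 * t)
          + 0.05 * np.random.default_rng(3).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

def band_energy_share(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].sum() / spectrum.sum()

# Track this ratio over time; a rising trend is more actionable than "anomaly".
print(f"bearing-band (900-1100 Hz) energy share: {band_energy_share(900, 1100):.3%}")
```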
Operationally, this works best as an early warning that can be acted on. If the system senses a machine going off the rails, you have to connect that to a specific component hypothesis and suggest inspection steps. Otherwise, operators will treat it as noise. This is why explainability matters. “Vibration anomaly” is too vague. “Bearing frequency band drift plus temperature rise plus current draw shift” is actionable.
This class is often the bridge between maintenance and quality. When process vibrations shift, you tend to get both a higher defect rate and a higher maintenance risk. A multimodal anomaly system can therefore establish a shared source of truth across teams that historically blame each other.
The core implementation trap is sensor placement and calibration. Garbage sensor installation yields garbage model outputs. That’s not software-only work. It is industrial engineering work.
Predictive maintenance is about asset reliability, quality AI is about scrap and rework, and production planning is about time: throughput, cycle time, lead time, and schedule adherence.
The majority of manufacturing scheduling remains a battle between two worlds. ERP generates a plan that is good for finance and purchasing. The shop floor operates in the real world: machine constraints, tooling availability, sequence-dependent changeovers, labor constraints, quality holds, rework loops, and rush orders that show up late.
That’s why ISA-95 still matters. It is intended to establish boundaries between business planning (the ERP layer) and manufacturing execution (the MES/MOM layer) so that schedule and performance information can be exchanged in a standardized manner. When a plant lacks a clean enterprise-to-floor interface, the schedule becomes a human task: people print schedules, modify them, and lose track of what they changed.
Constraint-based scheduling is the practical core of AI scheduling in most plants. The point is not to find a perfect global plan. The purpose is to create a feasible plan that respects constraints and leads to predictable execution. Constraint programming is well established in scheduling, and recent literature still surveys its broad use in production scheduling problems.
A contemporary scheduling system is often a combination of deterministic constraints and predictive signals. Deterministic constraints include machine eligibility, routing steps, setup times, batch sizes, tooling constraints, labor shifts, maintenance windows, and quality inspection holds. Predictive signals include estimated cycle times under current conditions, downtime risk, expected yield, and expected rework probability. That’s where ML helps: it turns “average cycle time” into “cycle time distribution for today’s conditions.”
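A small constraint-based scheduling sketch, assuming Google OR-Tools CP-SAT is available: three jobs on one machine, no overlap, and a reserved maintenance window treated as a fixed interval. Job durations and the window are illustrative.

```python
# Constraint-based scheduling sketch with OR-Tools CP-SAT.
from ortools.sat.python import cp_model

jobs = {"A": 120, "B": 90, "C": 60}          # processing minutes
maintenance = (300, 360)                      # frozen window: minutes 300-360
horizon = sum(jobs.values()) + 200

model = cp_model.CpModel()
starts, ends, intervals = {}, {}, []
for name, dur in jobs.items():
    s = model.NewIntVar(0, horizon, f"start_{name}")
    e = model.NewIntVar(0, horizon, f"end_{name}")
    intervals.append(model.NewIntervalVar(s, dur, e, f"iv_{name}"))
    starts[name], ends[name] = s, e

# Reserve the maintenance window as a fixed interval on the same machine.
intervals.append(model.NewIntervalVar(maintenance[0],
                                      maintenance[1] - maintenance[0],
                                      maintenance[1], "maintenance"))
model.AddNoOverlap(intervals)

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, list(ends.values()))
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for name in jobs:
        print(name, solver.Value(starts[name]), "->", solver.Value(ends[name]))
```

The same structure extends to setup times, machine eligibility, and freeze-horizon penalties; the point is that the constraints are explicit and auditable, not buried in a heuristic.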
Digital twin-based methods are becoming more popular in this context, since simulation can evaluate schedules under variability. Studies on digital twin-enabled scheduling stress that schedules built on average rates usually diverge from real production, and point out that digital twins can aid estimation, scheduling, and real-time monitoring. Recent surveys also address how digital twins and AI can enable real-time, uncertainty-aware scheduling.
Now the crucial practical problem: schedule stability.
Factories require stable schedules, not schedules that get reshuffled every fifteen minutes. So a good scheduling system has stability constraints and “freeze horizons.” It permits re-optimization but penalizes changes that disrupt work in progress. That is the kind of system operations adopt rather than ignore.
Implementation problems are predictable. The most frequent one is partial capture of constraints. A schedule that disregards a tooling constraint or a quality hold is infeasible. When people observe that impossible schedules keep being delivered, trust crumbles. So the most successful deployments are often those that invest heavily in constraint discovery: literally sitting with planners and supervisors to capture real rules and encode them in a configurable constraint model.
This is where AI can become the nervous system of a factory, if it is designed with due respect for constraints and human workflow.
Manufacturing has a unique problem that software doesn’t have: changes are costly.
A new line, a new product, a new robot cell, a new process recipe, a new layout. A physical change cannot be easily “rolled back.” So you need an environment where it is safe to experiment. That’s what manufacturing digital twins are for.
A manufacturing digital twin is more than a 3D representation. It is a set of product, process, and resource models linked to each other, connected to real operational data, with simulation and, in some cases, control capabilities. ISO 23247 defines general concepts and requirements for manufacturing digital twins. NIST has published commentary on the ISO 23247 series, treating it as a generic framework applicable to a wide range of manufacturing processes. These sources matter because they confirm that “digital twin” is not just a marketing buzzword; it is converging industry language for how you organize digital representations of manufacturing systems.
Digital twins deliver value in three ways in the real world.
They enable virtual commissioning: you test PLC logic and robot programs against a simulated cell before running them on real equipment, which reduces startup bugs and safety incidents. They enable capacity and layout planning: you simulate flows, bottlenecks, and buffer behavior before you move the plant around. And they facilitate scheduling validation: you test schedules against simulated variability rather than just guessing.
Many of the big industrial vendors and manufacturers now state this direction publicly. Siemens publishes extensive content on digital twins for industry and offers a white paper positioning the “comprehensive digital twin” from product to production. Recent press releases and industry news also illustrate digital twins being used to pilot plant modifications and optimize processes; Manufacturing Dive, for example, reported that PepsiCo and Siemens are using digital twins to validate new layouts and increase capacity and line throughput.
The most important implementation reality is calibration. A twin that assumes machines are always available and humans are perfect will overpromise. You have to feed the twin actual distributions: downtime distributions, cycle time distributions, defect rates, rework loops, and material arrival variability. Then the twin becomes a decision tool rather than a presentation model.
If you are building a product in this space, your differentiation will come from realism and speed. Many tools can render a factory. Fewer can pair that with believable simulation, connect to live data, and deliver actionable decisions in time for daily operations.
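A tiny Monte Carlo sketch of why calibration matters: the same station, planned from averages versus simulated with cycle-time and downtime distributions. All distributions here are illustrative stand-ins for measured plant data.

```python
# Averages vs. calibrated distributions for one station's shift output.
import numpy as np

rng = np.random.default_rng(4)
SHIFT_MIN = 450.0
MEAN_CYCLE_MIN = 1.5

# Naive plan from averages only.
print("planned output:", int(SHIFT_MIN / MEAN_CYCLE_MIN))

# Calibrated runs: lognormal cycle times plus random downtime events.
outputs = []
for _ in range(2000):
    # Downtime: ~2 events per shift, 10-40 minutes each, subtracted up front.
    time_left = SHIFT_MIN - rng.uniform(10, 40, size=rng.poisson(2)).sum()
    made = 0
    while True:
        cycle = rng.lognormal(mean=np.log(MEAN_CYCLE_MIN), sigma=0.35)
        if cycle > time_left:
            break
        time_left -= cycle
        made += 1
    outputs.append(made)

print("simulated P50 / P10 output:",
      int(np.percentile(outputs, 50)), "/", int(np.percentile(outputs, 10)))
```

The gap between the planned number and the simulated percentiles is exactly the gap planners experience every day; a calibrated twin makes it visible before the schedule is promised.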
Energy has become a strategic production input. It’s not just a cost item; energy stability and emissions reporting matter to customers, investors, and regulators.
Factories use energy in complex ways—base load, peak load, shift patterns, batch cycles, furnace cycles, compressed air, HVAC, chillers, and utility networks. These systems also tend to be inefficient because they were engineered for maximum load but are used at variable loads.
There are two practical ways in which AI helps here.
First, prediction and anomaly detection: predict energy demand, identify abnormal consumption, detect compressed air leaks, and spot equipment running out of spec. Second, constrained optimization: shift energy-intensive processes to off-peak periods, stagger equipment schedules to minimize peak demand charges, optimize settings within quality constraints, and coordinate utility systems to minimize waste.
ISO 50001 is the energy management system standard that helps organizations establish the systems and processes needed to improve their energy performance systematically. This matters because energy optimization is not a one-shot model. It is a continuous management loop: plan, do, check, act. AI can speed the loop up, but the loop has to exist.
The most common path to failure here is optimizing energy without respecting process quality constraints. You can’t cut furnace energy by lowering the temperature if that changes material properties. You can’t reduce cleanroom HVAC energy if it compromises contamination control. So energy optimization has to be tied to process constraints and quality thresholds.
Consequently, the best systems combine process knowledge with model-based control and learning. They define “safe operating envelopes” and optimize within them. They also include trusted measurement, because manufacturing teams know it’s easy to exaggerate energy-saving claims if you don’t normalize for throughput, product mix, and seasonality.
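A toy sketch of the second category: pick the cheapest start hour for an energy-intensive batch that must finish before a deadline. The tariff, load, and deadline are illustrative; a real system would add process constraints and demand-charge terms.

```python
# Toy load-shifting sketch: cheapest feasible start hour for a 3-hour batch.
tariff = {h: (0.32 if 8 <= h < 20 else 0.14) for h in range(24)}   # $/kWh
BATCH_HOURS, BATCH_KW, DEADLINE = 3, 400, 16

def batch_cost(start_hour):
    return sum(tariff[h] * BATCH_KW for h in range(start_hour, start_hour + BATCH_HOURS))

feasible = range(0, DEADLINE - BATCH_HOURS + 1)
best = min(feasible, key=batch_cost)
print(f"start at {best:02d}:00, cost ${batch_cost(best):,.2f} "
      f"(vs ${batch_cost(9):,.2f} if run mid-morning)")
```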
If you need a manufacturing AI with a fast payback, energy management is often underappreciated—particularly in energy-intensive sectors like metals, chemicals, and food processing.
This may sound tedious, but it is where many manufacturing AI solutions either take off or fail.
You can’t optimize what you don’t measure consistently. And measuring manufacturing is surprisingly difficult. Machines send signals, but signals are not events. A PLC bit flip is not “job started.” A cycle count increment is not “a good part produced.” You need semantics.
ISO 22400 addresses the definition of common KPIs for manufacturing operations management; it defines the formulas and properties of KPIs used in practice. That’s helpful because it gives you a common language for performance. When every plant defines “OEE” its own way, no one is really improving.
At the systems level, ISA-95 (IEC 62264) exists precisely to integrate the enterprise planning level with shop floor control using uniform models and interfaces. For machine connectivity, OPC UA (IEC 62541) is often cited as the standard for secure, interoperable exchange of automation data and for semantic interoperability.
Why does this matter for AI? Because all of the AI solutions above rely on clean, time-synchronized events: when a job started, when it ended, what material batch was used, what machine settings were applied, what quality checks passed, what rework loops occurred, what downtime reason codes were logged, and what maintenance actions were taken. Otherwise your models learn noise, and your optimization decisions are made against a wrong picture of the plant’s state.
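A minimal sketch of that semantic step: deriving “job started” and “job ended” events from raw tag samples. The tag names and sample stream are hypothetical; in practice this mapping lives in the historian or MES integration layer.

```python
# Turning raw tag changes into semantic events.
samples = [  # (timestamp, machine_running_bit, active_job_id)
    ("07:00:00", 0, None),
    ("07:02:10", 1, "JOB-4711"),
    ("07:02:11", 1, "JOB-4711"),
    ("07:48:03", 0, "JOB-4711"),
    ("07:49:30", 1, "JOB-4712"),
]

events, prev_running = [], 0
for ts, running, job in samples:
    if running and not prev_running:
        events.append({"event": "job_started", "job": job, "ts": ts})
    elif prev_running and not running:
        events.append({"event": "job_ended", "job": job, "ts": ts})
    prev_running = running

for e in events:
    print(e)
```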
Traceability becomes critical at this stage as well. If you can link each unit or batch to genealogy—material lot, equipment, operator, recipe version, inspection results—you can perform real root-cause analysis when defects arise. You can also perform targeted rather than mass recalls, which makes a huge difference in cost and trust.
At many facilities, the initial “AI win” isn’t a sophisticated model. It is creating a clean data layer that enables the models down the line. Teams often underestimate the scale of this step, yet it is where most industrial value is generated: by making performance visible, consistent, and actionable.
Factories are digitizing, and as they do, their attack surface grows.
More sensors, more connectivity, more remote access, more vendors, more integration points. That’s great for productivity, but it increases cybersecurity and safety risk. And manufacturing cyber risk is not like consumer cyber risk. OT incidents can lead to downtime, scrap, safety incidents, and physical damage.
This is why industrial cybersecurity standards matter. ISA/IEC 62443 is considered the de facto standard for industrial automation and control system security. Cisco’s IEC 62443 content explains how its risk assessment methodology and security levels apply to industrial control systems. ISA also released an update, ISA-TR62443-2-2-2025 (Part 2-2), covering a defense-in-depth security protection scheme for industrial automation and control systems. The IEC has likewise announced a new milestone standard on applying IEC 62443 requirements to the industrial IoT, and these standards will keep being developed and extended as IIoT grows.
Now put AI into this setting. AI systems can introduce additional risks: incorrect recommendations, unstable automation, opaque decision logic, new attack surfaces, and data integrity challenges. That is precisely why AI risk governance frameworks such as the NIST AI RMF exist: to help organizations build trustworthiness considerations into the design, development, and use of AI systems.
In production, the most important governance question is “what activities does the AI have permission to carry out?”
A predictive maintenance model that just notifies a planner is low risk. A production scheduling system that automatically reprioritizes jobs is medium risk. Closed-loop control systems that automatically change setpoints can be high risk due to their influence on product quality and safety. A generative AI “industrial copilot” that writes PLC code or recommends control logic can be high risk if it is not tightly constrained and reviewed.
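One way to make those tiers operational is an explicit, default-deny permission check in front of every AI write-back. The action names and policy below are illustrative choices, not a standard.

```python
# Sketch of an explicit action-permission gate for AI write-backs.
ALLOWED = {
    "notify_planner":        {"auto": True,  "needs_human": False},
    "create_work_order":     {"auto": True,  "needs_human": False},
    "reprioritize_schedule": {"auto": False, "needs_human": True},
    "change_setpoint":       {"auto": False, "needs_human": True},
}

def authorize(action: str, human_approved: bool = False) -> bool:
    policy = ALLOWED.get(action)
    if policy is None:                 # default-deny anything not listed
        return False
    if policy["auto"]:
        return True
    return policy["needs_human"] and human_approved

print(authorize("create_work_order"))                     # True
print(authorize("change_setpoint"))                       # False
print(authorize("change_setpoint", human_approved=True))  # True
```

Paired with audit logging, a gate like this is what turns “governance” from a slide into something an OT security review can actually inspect.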
That isn’t theoretical. The industrial sector is actively moving toward copilots and AI-assisted engineering. Siemens publicly positions its “Industrial Copilots” as generative AI assistance across the value chain that simplifies collaboration and shortens cycle times. At CES 2026, Siemens outlined a vision to start building AI-driven, adaptive manufacturing sites from 2026, with a Siemens electronics factory serving as a blueprint. Taken together, these announcements point to a future in which AI engages more directly with engineering and operations. That makes governance, human review, and auditability non-negotiable.
A mature manufacturing AI program, then, has cyber and safety controls baked into the design: segmented networks, least privilege access, strong identity controls, secure update processes, audit logs, and well-defined incident response routines. It also has a data integrity mindset. If a bad actor can alter sensor data, they can alter AI decisions. So you treat data provenance and validation as security.
And finally, this layer adds operational risk surveillance: monitoring for anomalous network traffic, unusual PLC behavior, unusual machine commands, and unexpected shifts in production patterns. The objective is not to stifle innovation. It is to keep innovation safe.
If you look carefully, you can actually observe the dependency graph.
Predictive quality and predictive maintenance require clean event and machine data. Scheduling depends on constraints, cycle time distributions, and maintenance windows. Digital twins depend on calibrated distributions and accurate system models. Vision quality depends on traceability and feedback loops. Energy optimization depends on process constraints and a reliable measurement system. Cyber risk depends on segmentation, integrity, and governance.
That’s why success in manufacturing AI is rarely one model. It is about constructing a unified decision operating system.
The quickest route to building this coherently is the “edge + integration + governance” mindset.
Edge, because manufacturing involves many decisions that must be made with low latency and high reliability, and because it is often impractical to send every sensor event to the cloud. Integration, because ISA-95 and OPC UA exist for a reason: factories aren’t flat, and AI can’t just jump around; it must flow through the layers in a managed way. Governance, because standards such as ISO 23247, ISO 55001, ISO 50001, IEC 62443, and the NIST AI RMF give the bold claims that come out of complex systems the common language they need to stay trustworthy.
If you’re assessing an external engineering team to build any of this, don’t evaluate them on their ability to build an “AI app.” Evaluate them on whether they can build industrial-strength decision systems: constraints, integration, monitoring, rollout discipline, and managing change with operators. In manufacturing, a system that doesn’t earn trust is never used, no matter how sophisticated the model is.