Legacy systems, outdated devices, and old machines are still prevalent on many shop floors, and their use is expected to continue for the foreseeable future. It can be hard to justify replacing a functioning old machine with a new one solely for the sake of connectivity and data collection. At the same time, we all recognize the value of understanding a machine's state and of connecting these old machines to modern systems that bring monitoring, alerts, traceability, and transparency to shop floor activities.

The Challenge

So, what is the challenge we face? On one hand, we want connectivity and the ability to keep extracting value from these machines. On the other hand, these are old machines that often do not support modern protocols such as OPC UA and MQTT, and some cannot be connected directly no matter the effort. Other, simpler machines, such as pumps or fans, have nothing to read from at all, and yet we would still like to see how and when they run, how much energy they consume, and even predict failures.

Revamping an outdated machine provides numerous benefits, such as improved product quality, higher production throughput, and effective monitoring and data collection. However, introducing changes to the machine shop, such as upgrading to a newer model, is far from straightforward. It disrupts existing workflows, requires employee training, and entails substantial costs. Moreover, recouping the initial investment in a new machine takes a significant amount of time. As a result, it becomes important to get creative and explore alternative approaches to connecting with aging machinery, rather than resorting to immediate replacement.

Connecting Legacy Machines to Modern Platforms

Surprisingly, these problems can be solved with new approaches and technology. One such solution harnesses sensors to generate data. Bridging the gap between outdated machinery and modern platforms is not as daunting as it may appear. Consider an older machine that predates the Industrial Internet of Things (IIoT) and cannot be connected using conventional monitoring and data collection methods. In such cases, the answer lies in a simple and cost-effective solution: sensors, specifically current and vibration sensors that are versatile and compatible with a wide range of machines. To harness the potential of this sensor-generated data, we will employ machine learning (ML).

[Teaser: You can also use cameras and OCR to read data from the machine's screen; learn more in this blog post.]

Machine learning has been leveraging sensor data for nearly two decades, with predictive maintenance being the most well-known application. However, there are numerous other applications as well. In the context of our discussion, our objective is to predict the state of the machine, whether it is ‘on’, ‘off’, or even in a more nuanced condition. For example, when monitoring an injection molding or CNC machine, we should be able to determine its status at any given time. Now, let's delve into the details.

Harnessing Sensor Data

By incorporating sensors, we gain the ability to gather valuable data from our aging machines. Changes in current and vibration levels throughout different process stages provide insightful information. Sensors act as a window into the present moment, delivering real-time data. For instance, during the heating phase of an injection molding machine, we can expect to observe high energy consumption (measured by the current sensor). Similarly, a vibration sensor would detect significant activity during the rough cut stage of a CNC machine.

Excellent! We have now established a means of extracting data from our older machines. The next step is to channel this data to a platform where we can derive meaningful insights from it.

This is where Tulip Edge devices and the Tulip Platform prove invaluable.

Leveraging Edge Devices

Sensors bridge the gap between the physical and digital realms by generating data. The subsequent step involves connecting the data stream from the sensor to a digital system, which can be achieved using edge devices like Tulip's EdgeIO. This device facilitates seamless integration of sensors with the Tulip platform. Equipped with multiple ports suitable for a wide range of sensors, Tulip's EdgeIO also incorporates Node-RED, a powerful tool worth exploring. In essence, Node-RED is a flow-based development tool used for connecting hardware devices, APIs, and online services in the realm of the Internet of Things (IoT). Its web browser-based flow editor empowers users to create JavaScript functions that help process the data.

Once we have connected the sensors to our machine on one side and to the EdgeIO on the other, we define the data flow on Node-RED, enabling us to store the data in Tulip. The next step involves utilizing this data to derive insights, generate reports, create real-time dashboards, and set up alerts.
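
As a rough sketch of what that data flow does conceptually, the logic amounts to sampling the sensor at a fixed rate, buffering readings into short windows, and forwarding each window to the platform. The endpoint URL, sample rate, and `read_current_sensor()` helper below are hypothetical placeholders, not Tulip's actual API, and on an EdgeIO device this would be expressed as a Node-RED flow rather than a script:

```python
import time

import requests  # assumed to be available; any HTTP client would do

# Hypothetical placeholders -- not Tulip's actual API or endpoints.
INGEST_URL = "https://example.invalid/machine-data"
SAMPLE_RATE_HZ = 10      # read the current sensor 10 times per second
WINDOW_SECONDS = 5       # forward data in 5-second batches


def read_current_sensor():
    """Placeholder for the real sensor read (e.g., an analog input on the edge device)."""
    raise NotImplementedError


def run():
    window = []
    while True:
        window.append({"t": time.time(), "amps": read_current_sensor()})
        if len(window) >= SAMPLE_RATE_HZ * WINDOW_SECONDS:
            # On an EdgeIO device, this forwarding step would be a node in the Node-RED flow.
            requests.post(INGEST_URL, json={"samples": window}, timeout=5)
            window = []
        time.sleep(1 / SAMPLE_RATE_HZ)
```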

Time Series Data and Machine Learning

After wiring the machine and establishing the data flow, the next crucial step is leveraging it in real time. Most sensors will generate time series data, representing an analog signal sampled uniformly over time to create discrete digital data. This data flows in real time from the sensor to the EdgeIO and into our cloud database. It is at this point that we can employ logic to extract valuable insights. Logic can range from simple threshold-based approaches to more sophisticated machine learning models.
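
To make the simple end of that spectrum concrete, here is a minimal, illustrative sketch in Python that declares the machine "on" whenever the RMS of a short window of current readings exceeds a fixed threshold (the threshold value is made up and would need tuning per machine):

```python
import numpy as np

ON_THRESHOLD_AMPS = 2.0  # illustrative value; tune per machine


def machine_state(window_amps):
    """Classify a short window of current readings (e.g., 5-10 s of samples) as 'on' or 'off'."""
    rms = np.sqrt(np.mean(np.square(window_amps)))
    return "on" if rms > ON_THRESHOLD_AMPS else "off"


# Quick check with simulated signals: a noisy idle signal vs. a loaded one.
idle = np.random.normal(0.2, 0.05, size=50)
running = np.random.normal(4.0, 0.5, size=50)
print(machine_state(idle), machine_state(running))  # expected: off on
```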

Using Machine Learning for Time Series Data

Providing an in-depth tutorial on building machine learning solutions for time series data is beyond the scope of this blog. However, we would like to introduce some ideas and concepts on how it can be done.

Machine learning solutions can be seen as a box that takes a data sample as input and provides a prediction as output. In our case, the input will consist of a few samples from the time series (around 5–10 seconds of signal), and the output will be the state of the machine. For simplicity, let's assume two states: "off" and "on," although the solution can be generalized to multiple states. In a production setting, the ML model or algorithm will receive data samples and predict the machine's state. Before this can be achieved, the model needs to be trained to perform the desired task.
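
One common way to feed such a box is to condense each 5–10 second window into a small vector of summary statistics. The helper below is a minimal sketch of that idea; the exact features would depend on the machine and sensor, and the later examples assume a helper like this exists:

```python
import numpy as np


def window_features(window):
    """Summarize one window of raw sensor samples as a fixed-length feature vector."""
    window = np.asarray(window, dtype=float)
    return np.array([
        window.mean(),                   # average level (e.g., mean current draw)
        window.std(),                    # variability within the window
        np.sqrt(np.mean(window ** 2)),   # RMS energy
        window.max() - window.min(),     # peak-to-peak amplitude
    ])
```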

Training or building a model is the task of data scientists, who utilize historical data to train the model. There are two main approaches to training a model that we would like to introduce: unsupervised learning and supervised learning.

The unsupervised learning approach, specifically clustering, involves collecting data from sensors and dividing it into small time chunks (typically a few seconds, depending on the machine). The objective is to cluster the data points, ensuring that "off" and "on" samples are grouped into separate clusters based on their statistical features. In clustering, no labeling is required, and the samples are grouped solely based on their characteristics. To assign labels to the clusters, a user may need to annotate a small number of samples, which will be used by the model to determine the cluster labels. In production, when a new data sample is introduced, its distance to the clusters is calculated, and the label of the closest cluster becomes the predicted state of the new sample.
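
A minimal sketch of this clustering approach, using scikit-learn's KMeans together with the `window_features()` helper above, might look like the following. The unlabeled and hand-annotated window collections are assumed to be gathered already, and assigning a new sample to its nearest centroid plays the role of the distance-to-cluster step described above:

```python
import numpy as np
from sklearn.cluster import KMeans


def fit_clusters(historical_windows, annotated_windows):
    """Cluster unlabeled windows, then name each cluster from a few hand-labeled ones.

    historical_windows: list of raw sample windows (unlabeled).
    annotated_windows:  small list of (window, "on"/"off") pairs labeled by a user.
    """
    X = np.vstack([window_features(w) for w in historical_windows])
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    cluster_names = {}
    for window, label in annotated_windows:
        cluster_id = int(kmeans.predict(window_features(window).reshape(1, -1))[0])
        cluster_names[cluster_id] = label
    return kmeans, cluster_names


def predict_state(kmeans, cluster_names, window):
    """Assign a new window to its nearest cluster and return that cluster's name."""
    cluster_id = int(kmeans.predict(window_features(window).reshape(1, -1))[0])
    return cluster_names.get(cluster_id, "unknown")
```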

While unsupervised methods are powerful, they may have limitations in accuracy and complexity, particularly for problems with similar machine states. This is why supervised methods are popular: they utilize labeled data and are generally easier to train. To collect labeled data, users are asked to record the machine's state during the data sampling period. Once a few hundred labeled data samples are gathered, a model such as a neural network can be trained to analyze a given data sample and predict the machine's state. The trained model can then be used in production to classify the current state of the machine. On a side note, the labeled data has to reflect the behavior of the machine in production. If a machine simply switches on and off, a few samples are enough; if it goes through different production steps, material changes, or is influenced by external factors, much more labeled data may be needed.
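
A comparable sketch of the supervised approach, again assuming the `window_features()` helper from earlier and a set of labeled windows collected as described, could use a small neural network classifier from scikit-learn:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier


def train_state_classifier(labeled_windows):
    """Train an on/off classifier from (window, state) pairs recorded by operators."""
    X = np.vstack([window_features(w) for w, _ in labeled_windows])
    y = np.array([state for _, state in labeled_windows])  # e.g. "on" / "off"

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    return clf


# In production, each new window is classified as it arrives:
# state = clf.predict(window_features(new_window).reshape(1, -1))[0]
```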

In both approaches, the model is trained using historical data. Unsupervised approaches, like clustering, do not require labeling but may have lower accuracy and limited complexity. Supervised models, on the other hand, are easier to train and often achieve superior performance; labeling, however, may be complex and time-consuming.

The Value Proposition

Connectivity is a vital component of successful digital transformation, often embodied on the shop floor by the Industrial Internet of Things (IIoT) concept. This notion revolves around the integration of machines and other entities with the internet, a seemingly simple idea in today's world. However, implementing effective connectivity is a complex endeavor. This is where Tulip steps in, offering a diverse range of connectivity capabilities that enable the collection of machine data through modern software interfaces like APIs and SQL queries, as well as physical connections such as network, USB, serial, and analog interfaces. Additionally, Tulip supports standard protocols like OPC-UA and MQTT, designed specifically to facilitate machine connectivity. These connectors serve as bridges between the physical and digital domains, facilitating seamless data transfer.

Nevertheless, there may be instances where connecting machines proves challenging, necessitating innovative solutions. In such cases, Tulip acts as a versatile toolbox, empowering users to explore and unleash their creativity in the realm of digitalization.

[Illustration: Tulip's Router]

Learn More

Tulip's EdgeIO

EdgeIO is an edge device that lets you incorporate other devices as inputs and outputs in your applications, facilitating the creation of triggers and time-saving workflows. It captures events and measurements recorded by machines, devices, and sensors.

Approaching Edge Connectivity

With the aforementioned components in place, three principles come into play with EdgeIO:

  • Openness: Collect data from networked, analog, and proprietary machines using sensors and cameras. Support popular protocols and provide intuitive interfaces to prompt humans for additional data.

  • Agility and Self-Serve: Empower engineers closest to operations to add devices and implement changes without requiring coding or expertise. Tulip Edge Devices are cost-effective, easy to set up, and can be used for multiple use cases.

  • Integrated and Connected: Create intuitive, streamlined workflows that automatically collect data and offer real-time guidance. Tulip can be integrated with other systems, connecting to HTTP APIs, SQL databases, and OPC UA servers.

Node-RED

With these sensors in place, frontline operations platforms like Tulip can be incredibly useful. Tulip offers EdgeIO, a connectivity device that easily connects to these sensors. The EdgeIO device can run Node-RED, a flow-based development tool for wiring together hardware devices, APIs, and online services in the IoT realm. Node-RED provides a web browser-based flow editor, allowing users to create JavaScript functions. By utilizing Node-RED, the sensors, and the EdgeIO device, we can collect real-time, continuous time series data. This data from the machine can be stored in a Tulip table and used to train a machine learning model. And we can do this without writing a single line of code.

[Illustration: Edge computing]

Enable manufacturing excellence with Tulip

Learn how you can empower workers and gain real-time visibility with apps that connect the people, machines, and sensors across your operations.
