Our automation group here at Northern Lights Solutions has designed and built dozens of machines over the years, controlled by PLCs that record data, display information, and make decisions about the quality of the products being manufactured. The machine we are working on now is different. This machine will be connected to the internet and will store data on servers, either at the manufacturer or somewhere in the cloud. We are creating the much-talked-about Cyber-Physical Automation Systems of Industry 4.0 and the Big Data they create.

This project started a discussion about how big the data file should be. How much sampling is too much, given its effect on the data file size? How do we identify the data so that it can be efficiently retrieved years from now? How can we use this data to make upstream manufacturing decisions? How can we use it to make downstream manufacturing decisions? Should we, as the OEM equipment manufacturer, have access to this data so we can advise the company when something varies from what we consider the normal specification, and do we even want that responsibility? How much I/O should we have available to upgrade our machine to speak to other machines as this technology spreads from one machine to another?
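To make the "identify the data" question concrete, here is a minimal sketch of what a tagged test-cycle record could look like. The field names, IDs, and measurements are hypothetical, not from our actual machine; the point is that every record carries enough context (machine, cycle, recipe, part serial, UTC timestamp) to be located and interpreted years later.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class TestCycleRecord:
    """One test cycle's results, tagged for retrieval years from now."""
    machine_id: str      # which machine produced the record
    cycle_id: int        # monotonically increasing per machine
    timestamp_utc: str   # ISO 8601 in UTC, so plants in different zones compare cleanly
    recipe_version: str  # what the machine was told to build
    part_serial: str     # ties the record back to the physical product
    measurements: dict   # sensor name -> sampled value
    passed: bool         # the machine's quality decision

# Hypothetical record for illustration only.
record = TestCycleRecord(
    machine_id="NLS-042",
    cycle_id=18553,
    timestamp_utc=datetime.now(timezone.utc).isoformat(),
    recipe_version="2.3.1",
    part_serial="SN-0007781",
    measurements={"leak_rate_sccm": 0.12, "peak_force_n": 41.7},
    passed=True,
)
print(json.dumps(asdict(record), indent=2))
```

Keyed this way, a question like "show me every cycle machine NLS-042 ran under recipe 2.3.1" stays answerable no matter where the records end up being stored.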

Big Data was the other issue we started to think about. If our machine is generating 75 kilobits of data for every test cycle and there are thousands of test cycles a year, that is gigabits of data per machine. With hundreds of machines in a typical manufacturing facility, an incredible amount of data is going to be created and stored somewhere. What are we going to do with all this data? How are we going to use it to improve processes?
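A back-of-the-envelope calculation shows why the numbers add up so fast. The 75 kilobits per cycle comes from the paragraph above; the cycle and machine counts below are assumptions filled in purely for illustration.

```python
# Rough yearly data volume. Only BITS_PER_CYCLE comes from the article;
# the other two figures are assumed for the sake of the estimate.
BITS_PER_CYCLE = 75_000      # 75 kilobits per test cycle
CYCLES_PER_YEAR = 50_000     # "thousands of test cycles a year" (assumed)
MACHINES = 300               # "hundreds of machines" in a facility (assumed)

bits_per_machine = BITS_PER_CYCLE * CYCLES_PER_YEAR
bits_per_plant = bits_per_machine * MACHINES

print(f"Per machine: {bits_per_machine / 1e9:.2f} gigabits/year")
print(f"Per plant:   {bits_per_plant / 1e12:.2f} terabits/year "
      f"(about {bits_per_plant / 8 / 1e12:.2f} terabytes)")
```

With those assumptions, a single machine produces a few gigabits a year, and the plant as a whole lands in terabit (hundreds-of-gigabytes) territory, before anyone has even decided how long to keep it.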

As with most new things in business, we are now left to hold team meetings, create flow charts, and write new procedures.
