Online vibration analysis #Post6


Dear OpenAdaptronik blog readers,

In this post I will talk about defining vocabulary, assumptions about the data uploader's knowledge, and a little bit about the data upload process.

There are four main actions a user can take on the platform:

  1. Search the available data
  2. Upload measurement data
  3. Search for an analysis tool and use it
  4. Connect an analysis tool using the platform-defined API


Measurement data and Experiment (Vocabulary and definition):

Defining a clear and logical vocabulary is necessary, since the platform is one of the first of its kind and there is no existing platform with a predefined vocabulary that all users understand. Therefore, here is a start. Besides the four main actions mentioned above, there can be smaller actions such as grouping measurement data. By definition, a measurement enables us to take quantitative measures; it allows us to acquire a physical quantity (acceleration, sound, pressure, etc.). Sometimes multiple measurements are needed in order to solve a problem.

Grouping multiple measurements into a single group therefore gives users the opportunity to classify their data sensibly on the platform, and it helps categorize and clean up the whole data space. After much research and after interviewing several experienced engineers, the vocabulary for such a group was defined as an Experiment. The idea is to let users upload their measurement data without having to think about any further details. Later, if needed, a user can group their measurement data together and name the experiment.

An Experiment will contain acquired data (uploaded data) as well as analysed data (generated by running analyses on the acquired data).


Discussion on experiment and analysis:

Whether or not the measurement group should be called an Experiment was a big topic of debate. The overlapping understanding of the concepts 'experimenting (Experimentieren) with your data' and 'analysing (Analysieren) your data' played a big role in that.

The fact is, we measure and analyse our data in order to get a better understanding of a problem, and we call that whole process an experiment. Put more briefly: we experimented (measured and analysed) to better understand the problem. We picked the next best verb, 'analyse', to describe the action taken on the data in order to get a better understanding. The means for such analysis are then called analysis tools. For example, taking time-domain data and using an analysis tool such as the FFT to convert it into the frequency domain.
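The FFT step mentioned above can be sketched in a few lines of Python using NumPy. The signal below (a mix of 50 Hz and 120 Hz components) and the sampling rate are made-up values for illustration, not data from the platform:

```python
import numpy as np

# Hypothetical time-domain vibration signal: 1 second sampled at 1 kHz,
# containing components at 50 Hz and 120 Hz.
fs = 1000                              # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Real-input FFT: amplitude spectrum and the matching frequency axis.
spectrum = np.abs(np.fft.rfft(signal)) * 2 / len(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

# The two dominant peaks land at the excitation frequencies.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))  # → [50.0, 120.0]
```

This is the kind of conversion an analysis tool on the platform would perform on uploaded time-domain data.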


Data upload and the data uploader's knowledge of the data:

Now that we have set the definitions, let's talk about the data upload process. Since the platform was created with Makers and SMEs in mind, it is important to recognize that not every curious maker or small business owner has industrial measurement devices to collect and analyse their data. Big industrial systems, such as the Siemens LMS system, can collect measurement data using as many as 16 channels (sensors) and can already produce some analyses, such as an FFT, on the acquired data. In Industry 4.0, operational technology (OT) is still far from being standardized. Therefore, to let users upload data collected with simple sensor systems (such as a smartphone with an app, or a single sensor), the platform will allow users to upload csv and txt files, as well as uff 58 (unv) files for those who have the means to produce that format.

The uff 58 file type contains the most information and can be considered a complete data set for mechanical vibration. Based on the uff 58 dataset we can create a data model, for the case where a user does not have the means to upload a uff 58 file and can therefore only upload a .txt or .csv.

The goal is then to check what is missing from the data coming from a txt or csv file.
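A minimal sketch of such a check, assuming a data model loosely inspired by the uff 58 dataset. The class and field names here are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical measurement data model; 0 marks metadata a csv/txt
# upload typically lacks, whereas a uff 58 file would include it.
@dataclass
class Measurement:
    name: str
    abscissa_unit: str = "s"       # e.g. time axis in seconds
    ordinate_unit: str = "m/s^2"   # e.g. acceleration
    response_node: int = 0
    response_direction: int = 0
    reference_node: int = 0
    reference_direction: int = 0
    samples: list = field(default_factory=list)

def missing_fields(m: Measurement) -> list:
    """Report which uff-58-style metadata the upload is missing."""
    return [f for f in ("response_node", "response_direction",
                        "reference_node", "reference_direction")
            if getattr(m, f) == 0]

# A bare csv upload carries only the samples, so all four are missing:
csv_upload = Measurement(name="motor_run_1", samples=[0.0, 0.1, 0.05])
print(missing_fields(csv_upload))
```

Whatever the check reports as missing is exactly what the upload form then has to ask the user for.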

That is the information that needs to be collected from the data uploader so that we can store it in the database; in our case, MongoDB. I already explained the reasons for using MongoDB in this post.
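As a sketch, the collected upload could be stored in MongoDB as a single document. The document structure and field names below are assumptions for illustration; the actual pymongo insert is left commented out because it needs a running server:

```python
from datetime import datetime, timezone

# Hypothetical experiment document as it might be stored in MongoDB.
doc = {
    "experiment": "motor_run_1",           # assumed experiment name
    "uploaded_at": datetime.now(timezone.utc),
    "file_format": "csv",
    "response_node": 0,                    # 0 = unknown to the uploader
    "response_direction": 0,
    "reference_node": 0,
    "reference_direction": 0,
    "samples": [0.0, 0.1, 0.05],
}

# With a running MongoDB instance, the document could be inserted via
# pymongo (not executed here):
# from pymongo import MongoClient
# client = MongoClient("mongodb://localhost:27017")
# client["openadaptronik"]["measurements"].insert_one(doc)
print(doc["file_format"])
```

Storing one document per measurement keeps uploads schema-flexible, which is one of the reasons a document database like MongoDB fits here.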

Certain information, such as response node, response direction, reference node, and reference direction, is also very useful for advanced analyses such as experimental modal analysis (EMA). This meta-information needs to be entered by a user only when they upload a txt or csv file, since a uff file already includes it.

First, there is a text field where the user can enter a number (for both response and reference node), and a dropdown field (for both response and reference direction) with values to pick from. In case the data uploader is unaware of this information, it can all be set to 0 and the user can proceed with the upload process.
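The form logic above can be sketched as a small function with 0 as the fallback everywhere. The dropdown values here follow the uff 58 direction convention (0 = unknown, ±1/±2/±3 = ±X/±Y/±Z); that this is what the platform's dropdown will offer is an assumption:

```python
# Assumed dropdown values, following the uff 58 direction codes.
DIRECTIONS = {0: "unknown", 1: "+X", -1: "-X",
              2: "+Y", -2: "-Y", 3: "+Z", -3: "-Z"}

def form_metadata(response_node=0, response_direction=0,
                  reference_node=0, reference_direction=0):
    """Collect the EMA meta-information; every field defaults to 0."""
    for d in (response_direction, reference_direction):
        if d not in DIRECTIONS:
            raise ValueError(f"invalid direction code: {d}")
    return {"response_node": int(response_node),
            "response_direction": int(response_direction),
            "reference_node": int(reference_node),
            "reference_direction": int(reference_direction)}

# An uploader who knows nothing about nodes can still proceed:
print(form_metadata())  # → all four fields 0
```

With this fallback, a smartphone user and an LMS user go through the same upload form; only the amount of metadata they can supply differs.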


Stay tuned to know more about the platform!
