Data Structure and Organization
Within the HGL software design, there are a few concepts that must be understood in order to grasp how the software works. This section provides an overview of how an ideal HGL system operates. As mentioned earlier, the entire software suite operates with a database at the heart of all the applications, and within that database there is a hierarchy that needs to be learned.
Database
Underlying the entire file system for HGL’s software is a database designed to keep track of all the files that have been created, where those files reside, their archiving status, and which test those files are associated with. The database itself does not hold any of the actual data files; it is purely a reference system.
Each system is composed of a master database and one or more media databases. The master database organizes the overall structure of all testing that has occurred: configuration names, the various recordings, and the media (HDD, tape, etc.) where the data from each recording is located. Each media database maintains a record only of the files stored at that particular location and the status of each file (authorized, archived, not archived, not found, etc.). By splitting responsibilities this way, the databases can easily contain millions of records while remaining compact in size.
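As a rough illustration of this split, the sketch below models a master database and one media database as SQLite tables. All table and column names here are assumptions made for illustration, not HGL’s actual schema.

```python
import sqlite3

# Illustrative only: these table and column names are assumptions,
# not HGL's actual schema.
master = sqlite3.connect("master.db")
master.executescript("""
CREATE TABLE IF NOT EXISTS recordings (
    recording_id INTEGER PRIMARY KEY,
    engine       TEXT NOT NULL,   -- e.g. 'Engine-A'
    test         TEXT NOT NULL,   -- e.g. 'Test-042'
    config       TEXT NOT NULL,   -- configuration name
    started_utc  TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS media (
    media_id     INTEGER PRIMARY KEY,
    label        TEXT NOT NULL,   -- 'HDD-01', 'LTO-0007', ...
    kind         TEXT NOT NULL    -- 'HDD', 'tape', 'NAS'
);
""")

-- = None  # (comment marker below this point is Python's #)

# Each storage location carries its own small database that only
# lists the files it holds and the status of each one.
media_db = sqlite3.connect("media_hdd01.db")
media_db.executescript("""
CREATE TABLE IF NOT EXISTS files (
    file_id      INTEGER PRIMARY KEY,
    recording_id INTEGER NOT NULL,  -- key into the master database
    path         TEXT NOT NULL,
    status       TEXT NOT NULL      -- 'authorized', 'archived',
                                    -- 'not archived', 'not found'
);
""")
```

Keeping the per-location file lists out of the master database is what lets the reference system stay compact even at millions of records.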
The ultimate goal of the database is to track every file connected to the system, to know which test each data file is associated with, and to ensure that all data is safely archived. As files are created and moved from one storage location to the next, it is crucial that every file is properly accounted for and never deleted before it has been safely transferred or archived. When used with the HGL Hercules Data Management suite, files are automatically transferred to a final storage location as they are created. When a local storage disk reaches capacity, the system automatically deletes any files that have been marked as transferred. This method allows HGL to build data acquisition systems that can efficiently record high-bandwidth data continuously for hundreds of hours without manual interaction.
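The safety rule above (delete only what is already transferred, and only under disk pressure) can be sketched as follows. The `free_space_if_needed` helper, its threshold, and the status strings are hypothetical; a real implementation would drive this from the media database rather than an in-memory dictionary.

```python
import shutil
from pathlib import Path

def free_space_if_needed(disk: Path, files_db: dict, threshold: float = 0.90) -> None:
    """Delete only files already marked as transferred, and only once
    the disk passes a usage threshold. Hypothetical sketch of the rule
    described above; `files_db` is assumed to map paths to statuses."""
    usage = shutil.disk_usage(disk)
    if usage.used / usage.total < threshold:
        return                                # still room: delete nothing
    for path, status in list(files_db.items()):
        if status == "transferred":           # never delete unverified data
            Path(path).unlink(missing_ok=True)
            files_db[path] = "deleted"
```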
Data Hierarchy
All recorded data is organized based on the following criteria:

- Engine ->
  - Test ->
    - Configuration
This organizational structure makes finding data from a specific test simpler. Instead of trying to track all of the specific data files created during a historical test, or the directory (or machine) where those files are stored, the user is presented with a database tree view of all past tests. Once the user identifies the specific Engine/Test/Configuration of interest, all of the data files associated with that test can be reviewed, exported, or post-processed regardless of the actual file location.
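A minimal sketch of such a tree lookup, assuming an in-memory index keyed by engine, test, and configuration (all names below are invented for illustration):

```python
from collections import defaultdict

# Hypothetical in-memory index mirroring the DB tree view:
# engine -> test -> configuration -> list of (machine, path) pairs.
tree = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))

tree["Engine-A"]["Test-042"]["Baseline"] += [
    ("daq-01", "/data/rec_0001_ch03.bin"),
    ("daq-02", "/data/rec_0001_ch17.bin"),
]

def files_for(engine: str, test: str, config: str):
    """Return every file for a test, wherever it physically lives."""
    return tree[engine][test][config]

print(files_for("Engine-A", "Test-042", "Baseline"))
```

The user only ever navigates the three keys; the machine and path on each record are resolved behind the scenes.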
Data is associated with an “Engine” because of HGL’s origins as a data acquisition company within the aerospace industry. However, the software supports multiple languages, so users can request that a language be created to accommodate naming conventions specific to them. For instance, customers have used “Program” or “Project” in place of the name “Engine”. The database still uses the category of an “Engine” underneath; the custom language simply makes the software interface more appropriate for a given customer.
To set up a data recording, a new configuration is defined for use with an “Engine” on a specific “Test”. In the Hawk GUI, a new engine is defined by going to the “Preferences” section and selecting the “Engines” tab. Once the configuration is saved, the channels and calibration values can be defined for it. Finally, verify that all the necessary acquisition hardware is available on the “System Settings” page.
Data Recording
HGL has found that it is easier for users to interrogate only the segments of data that need analysis rather than navigate file directories or sit through long periods of data playback. Because many tests contain several hours of data spread over multiple machines, HGL has approached this issue by not giving users direct access to each and every raw file. Instead, the database (and by extension the DB Tree) becomes a management layer between users and the raw files, improving the way users select the data to analyze.

When the HGL system records data, a single file is created per channel for a pre-determined period of time. The benefit of this method is that if any single file (or file set) becomes corrupted, the rest of the data set remains accessible. Additionally, when compared to large multiplexed data files, data restoration after archiving is faster because only the files of interest need to be restored. All of the raw files from a recording are grouped together and can be accessed through the database tree (Engine/Test/Config). When a user needs to access data, a “manoeuvre” is used to mark the period of time during a recording that needs to be analyzed. The software then accesses all of the necessary raw files that fall within the specified parameters, regardless of the machine on which they were originally recorded. The user is presented with a single complete set of data covering only the section of time requested.
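The one-file-per-channel-per-time-slice scheme might produce file names along the lines of the sketch below; the segment length, naming pattern, and `segment_filename` helper are assumptions for illustration, not HGL’s actual convention.

```python
from datetime import datetime, timedelta, timezone

SEGMENT = timedelta(minutes=10)  # assumed slice length; a real system
                                 # would make this configurable

def segment_filename(channel: int, seg_start: datetime) -> str:
    """One file per channel per fixed time slice (illustrative naming)."""
    return f"ch{channel:03d}_{seg_start:%Y%m%dT%H%M%S}.bin"

t0 = datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc)
for ch in (1, 2):
    for n in range(3):
        print(segment_filename(ch, t0 + n * SEGMENT))
```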
A manoeuvre is simply a flag in the database that identifies a time period. Every manoeuvre has a type, a description, a time stamp, and an ID number. Manoeuvres are required for any post-processing or data export.
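Selecting raw files by manoeuvre then reduces to an interval-overlap test, as in this sketch (the field names and the `files_in_manoeuvre` helper are illustrative, not HGL’s schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Manoeuvre:
    """A database flag marking a time period: type, description,
    time stamps, and an ID number (field names are illustrative)."""
    manoeuvre_id: int
    kind: str
    description: str
    start: datetime
    end: datetime

def files_in_manoeuvre(manoeuvre: Manoeuvre, segments):
    """Pick every raw segment overlapping the manoeuvre window,
    regardless of which machine recorded it. `segments` is an
    iterable of (path, seg_start, seg_end) tuples."""
    return [path for path, s, e in segments
            if s < manoeuvre.end and e > manoeuvre.start]
```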
Post Processing
HGL offers a range of post-processing options and general data management. All raw files are accessible through Aurora for post-test review. Any recorded data can be viewed in its raw form and exported to MATLAB; processed into a time history file for cycle counting, rainflow analysis, and similar techniques; rendered as a z-mod (a color density spectral plot that can compress a very long manoeuvre into a single comprehensive event); or run through many other types of analysis on offer.
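For intuition on what a z-mod-style plot does, the sketch below compresses ten minutes of a synthetic signal into a single color density spectral plot using `scipy.signal.spectrogram`. This is only an analogue of the idea under those assumptions; HGL’s actual z-mod algorithm and parameters are not shown here.

```python
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs = 2048                                   # Hz, assumed sample rate
t = np.arange(0, 600, 1 / fs)               # ten minutes of data
x = np.sin(2 * np.pi * (50 + t / 20) * t)   # slowly sweeping test tone

# Whole record collapses into one time-frequency color map.
f, seg_t, Sxx = spectrogram(x, fs=fs, nperseg=4096)
plt.pcolormesh(seg_t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Color density spectral plot (z-mod analogue)")
plt.show()
```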
The long-term archiving software, Hercules, manages the storage and retrieval of all data generated by the system with minimal operator interaction. As data is generated, Hercules begins transferring it from the acquisition system’s on-board storage. As the files are transferred, they are archived to a medium of choice (LTO tape, external disk drive, or NAS). When the storage space on the acquisition system reaches a user-defined threshold, any data that has already been archived is deleted to make room for new data. If a user later needs to retrieve data from an archived test, Hercules automatically restores that data from the archived media and makes it available for processing.
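The life cycle Hercules manages could be sketched as a small state machine like the one below; the states, threshold, and function names are assumptions made for illustration, not Hercules’ API.

```python
# Hypothetical sketch of the archiving life cycle described above.
def advance(file_record: dict, disk_usage: float, threshold: float = 0.85) -> None:
    """Move a file one step along the life cycle, deleting the local
    copy only after it is archived and the disk passes the threshold."""
    state = file_record["state"]
    if state == "recorded":
        file_record["state"] = "transferred"     # copied off the DAQ machine
    elif state == "transferred":
        file_record["state"] = "archived"        # written to tape/disk/NAS
    elif state == "archived" and disk_usage >= threshold:
        file_record["state"] = "deleted_locally" # safe: archive copy exists

def restore(file_record: dict) -> None:
    """On a later request, pull the file back from the archive medium."""
    if file_record["state"] == "deleted_locally":
        file_record["state"] = "archived"        # restored and readable again
```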
Across all of the recording, authorizing, transferring, post-processing, and archiving, the user’s only responsibility is to remember one thing: which Engine, Test, and Configuration were used to record the data? With this knowledge, the user can always find the required data without documenting where it was stored.