HGL Software Architecture
To use the HGL System effectively, it helps to have a quick overview of the different applications and servers that can be part of the system.

The HGL Architecture - both hardware and software - depends on two main prerequisites: an active network for communications and permission to communicate on it. All of HGL's software relies on a Client-Server relationship between software modules, which requires that the Server application be free to multicast its messages on the network. The hardware also uses multicasts to announce to the various software modules that it is present and awake.
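To illustrate the multicast announcement idea, here is a minimal sketch of how a device might advertise its presence over UDP multicast. The group address, port, and message fields are all invented for illustration - the actual protocol used by HGL hardware is not documented here.

```python
import json
import socket
import struct

# Hypothetical multicast group and port; the real values used by
# HGL devices and servers are site/product specific.
ANNOUNCE_GROUP = "239.1.2.3"
ANNOUNCE_PORT = 50000

def build_announcement(device_name: str, role: str) -> bytes:
    """Build a small JSON presence message a device might multicast."""
    return json.dumps({"device": device_name,
                       "role": role,
                       "state": "awake"}).encode()

def open_announce_socket() -> socket.socket:
    """Open a UDP socket configured for multicast sends."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL of 1 keeps the announcement on the local network segment.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", 1))
    return sock

# A device would periodically send something like:
#   open_announce_socket().sendto(
#       build_announcement("Dragonfly-01", "ADC"),
#       (ANNOUNCE_GROUP, ANNOUNCE_PORT))
```

Listeners (the software modules) would join the same group and build their view of which hardware is present and awake from these messages.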
Networks

The GREEN network is always installed campus wide. All devices that record and analyze data expect access to a GREEN network so that data can be properly transferred to the central data repository (Hercules) for archival onto tape. Users also require access to data for analysis following a test or on completion of a critical test point; this too uses the GREEN network, so that a user at a different physical location has access to the data for detailed analysis and investigation.
The BLACK network is a completely private network between the Dragonfly Analog to Digital front end converter devices and the respective host acquisition computer (Eagle PC or Streaming PC). All front end hardware devices use static IP Addresses, and a common IP Address range is expected across all acquisition computers, allowing any Dragonfly device to be paired with any Eagle PC.
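The "common IP Address range" idea can be sketched with the standard library: if every acquisition computer expects the same private subnet, any device in that range can pair with any host. The subnet below is an assumption for illustration, not the range HGL actually uses.

```python
import ipaddress

# Hypothetical address range for the private BLACK network; the real
# static range used by Dragonfly devices is site-specific.
BLACK_SUBNET = ipaddress.ip_network("192.168.100.0/24")

def on_black_network(addr: str) -> bool:
    """Return True if a device address falls in the shared BLACK range,
    i.e. any acquisition PC expecting this subnet could pair with it."""
    return ipaddress.ip_address(addr) in BLACK_SUBNET
```

Because the range is identical on every acquisition computer, no per-pairing network configuration is needed.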
As seen in the Test Cell diagram below, the color-coded lines correspond to the networks serving the different functions of the test cell.
Test Cell
A Test Cell is HGL's name for a logical grouping of components (Hardware and Software) that comprise all the necessary functions for data acquisition, control, real time monitoring, analysis, and archiving for a test configuration. This is meant to represent a real, physical, test stand setup.
The test cell concept allows the various software modules to be easily distributed across multiple computers for parallel data processing. For instance, multiple PCs can be used to receive and perform real time analysis on digitized data streams from a Hummingbird, or multiple PCs can be used as test monitoring stations for viewing real time data as it is processed. When a device is added to the test cell, it receives all the configuration information necessary to communicate with the other members of the test cell. There is no fundamental restriction on the number of roles a single computer can fill; one computer can run more than one software module.

Each icon represents a different role within the Test Cell - and these are defined by SOFTWARE, but the software can be distributed across several different computers. The computers can vary in capability and include laptops, embedded PCs, rackmount servers, and desktops.
This article explains how to set up a test cell.
The main servers within an HGL system include the Remote DB (Database Server), Hawkeye Server, Hawk Controller (Test Cell Server), Aurora Servers (Post Processing / Data Analysis Servers), and Hercules Server (Data Archive). Within each major group there are other sub-servers - for example, the Hawk Controller manages multiple Hawks, which collect and process data from the ADC front ends (analog or digital).
Remote DB
Remote DB is the only HGL application that communicates directly with the databases. All programs that access data in the database do so through calls to Remote DB. The HGL Database organizes and tracks all files associated with an HGL System. It DOES NOT actually store test data, just the names of the files and the storage hierarchy of each file.
The Database functions as an index of the data that has been recorded, the data files associated with the recordings (and the PCs or other media where the data is stored), and the archive status of those files.
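The index described above can be pictured as a small table of file names, storage locations, and archive status - never the test data itself. The sketch below uses an in-memory SQLite table; the column names are illustrative and not the real HGL schema.

```python
import sqlite3

def create_index(conn: sqlite3.Connection) -> None:
    """Create a minimal illustrative file index (hypothetical schema)."""
    conn.execute(
        """CREATE TABLE data_files (
               file_name TEXT PRIMARY KEY,
               recording TEXT,             -- which recording produced it
               host TEXT,                  -- PC or media holding the file
               archived INTEGER DEFAULT 0  -- 0 = not yet on tape
           )"""
    )

conn = sqlite3.connect(":memory:")
create_index(conn)
conn.execute("INSERT INTO data_files VALUES (?, ?, ?, ?)",
             ("run042_ch01.dat", "run042", "hawk-pc-1", 0))

# The archive process would ask: which files still need to go to tape?
unarchived = conn.execute(
    "SELECT file_name FROM data_files WHERE archived = 0").fetchall()
```

Note that only metadata lives here; the file contents stay on the host PC or media recorded in the `host` column.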
The systems that need a database are any servers that manage the storage of data, specifically, a Hawk Controller PC, a Hercules PC, or an Analysis PC (Aurora or otherwise).
Hawk Controller
Hawk Controller has a minimal user interface and is a backend server for interfacing with the ADC Hardware. The Hawk Controller is the basis of the logical Test Cell used to group all the members of a test setup together. Hawk Controller ensures that all Hawks are synchronized, sends the test configuration to all attached devices, commands test cell members to change state (idle, configured, scanning, writing, manoeuvre), and verifies the status of all members of the Test Cell.
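The five member states named above suggest a small state machine. The transition rules below are assumptions made for illustration - the real rules Hawk Controller enforces are not documented here - but they show the idea of commanding members through an ordered set of states.

```python
from enum import Enum

class CellState(Enum):
    IDLE = "idle"
    CONFIGURED = "configured"
    SCANNING = "scanning"
    WRITING = "writing"
    MANOEUVRE = "manoeuvre"

# Hypothetical allowed transitions; invented for this sketch.
ALLOWED = {
    CellState.IDLE:       {CellState.CONFIGURED},
    CellState.CONFIGURED: {CellState.SCANNING, CellState.IDLE},
    CellState.SCANNING:   {CellState.WRITING, CellState.CONFIGURED},
    CellState.WRITING:    {CellState.MANOEUVRE, CellState.SCANNING},
    CellState.MANOEUVRE:  {CellState.WRITING},
}

def command(current: CellState, target: CellState) -> CellState:
    """Move a test cell member to a new state, rejecting invalid jumps."""
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot go from {current.value} to {target.value}")
    return target
```

A controller following rules like these would, for example, refuse to start writing before a member has been configured and is scanning.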
The Hawk Controller executable will often run on a single VESA-mount PC that also runs the Hawk GUI and Remote DB applications.
Hawk
Hawk is responsible for controlling each locally assigned ADC device (up to 64 channels, typically). This includes communicating the hardware sample rate and signal conditioning settings, and detecting features of attached hardware, as provided in the configuration. The data streamed from the attached hardware arrives as time-stamped ADC counts, which Hawk converts to engineering units before writing the raw channel data to files on disk. All data is shared with the Real Time Services, via a common memory pool, for processing into different formats. Examples include the FFT, Speed, or DAS Output services, which provide processed data streams to clients such as Hawkeye Server or an external receiver.
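The counts-to-engineering-units step can be sketched as a simple linear scaling. The gain, offset, and bit depth below are invented defaults; in practice these values come from the test configuration and signal conditioning settings.

```python
# A sketch of the conversion Hawk performs on each time-stamped sample.
# full_scale_counts assumes a hypothetical 24-bit signed converter;
# full_scale_eu and offset_eu would come from the channel configuration.
def counts_to_eu(counts: int,
                 full_scale_counts: int = 2**23,
                 full_scale_eu: float = 10.0,
                 offset_eu: float = 0.0) -> float:
    """Scale a signed ADC count into engineering units (e.g. volts)."""
    return counts / full_scale_counts * full_scale_eu + offset_eu
```

With these assumed defaults, a full-scale count maps to 10.0 engineering units and zero counts map to the configured offset.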
Hawk often runs on a dedicated PC in the test cell, alongside the suite of Real Time services.
Hawkeye Server & Hawkeye Client
Hawkeye Server relays data from the Real Time Services to the Hawkeye Display clients. Within a single Test Cell, there is typically only one Hawkeye Server. In addition to providing clients with the data streams they request, Hawkeye Server is responsible for comparing all real time processed data against the alarms defined for the configuration.
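The alarm comparison can be sketched as checking each channel's latest processed value against configured limits. The data structures here are hypothetical; real alarms and their limits are defined in the test configuration.

```python
# A sketch of the per-update alarm check Hawkeye Server performs.
# `values` maps channel name -> latest processed value; `limits` maps
# channel name -> (low, high). Channels without limits never alarm.
def check_alarms(values: dict, limits: dict) -> list:
    """Return the names of channels whose latest value breaches its limits."""
    breaches = []
    for channel, value in values.items():
        lo, hi = limits.get(channel, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            breaches.append(channel)
    return breaches
```

A server loop would run a check like this on every processed data update and notify the display clients of any breaches.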
Hawkeye Client displays the real time data that the specific HE Client is allowed to see (channel groups and restrictions are defined in the Hawk GUI). All HE Clients are able to view any permitted channel independent of what other HE Clients are showing; the one restriction is that some channel processing settings (e.g. downsampling) apply to all channels. Real time displays include FFTs, scopes, strip charts, and spectral plots.
Hawkeye Server floats to different computer platforms depending on the size of the system: it can run on a stand-alone VESA PC, on a Hawk, or alongside a dedicated Hawkeye display client. Hawkeye Client is commonly installed on a low-power variant of the VESA PC, and multiple Hawkeye Clients can be assigned to a single test cell.
Aurora Servers
There are multiple post processing servers in the HGL Software Suite, each performing different tasks on the existing recorded data on the system. The Aurora Servers run at a central location where all data is stored. Separate Aurora Clients can be installed on distributed computers and connect to the servers over the network, giving access to all data for post processing and post test data review.
Offline data processing provides access to the raw waveforms, frequency domain processing, time domain processed files, and batch processing tasks (to create frequency or time domain processed files). Once data has been reviewed, it can be exported as CSV (or other formats derived from the raw files) for use with external processing tools.
The main Aurora Servers include Dataviewer Server, Rawviewer Server, Manoeuvre Creator, and Queue Manager. All of the Aurora Servers are typically installed together on a single, high-spec computer; they may also run on a Hercules PC. A single client, Aurora Client, accesses all of the post-processing servers and is often run from dual-purpose computers (a laptop or a dedicated analysis workstation).
Hercules Data Archiving
The Hercules software suite is focused on securely transferring recorded data from multiple test stands to a centralized location and archiving it to traditional tape media (currently LTO), an on-site NAS, or external hard disk media. Hercules queries all on-line test cell databases for files that need to be transferred. Any new files are automatically transferred to the Hercules using File Transfer Manager, and the test cell database is updated to reflect the transfer. Once on the Hercules, the new data files are queued for archiving to tape. If users need access to archived data, they can use the appropriate client to browse the Database Tree and select the manoeuvre of interest, and Hercules will automatically restore the files.
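The query-transfer-update cycle can be sketched with a simple polling function over the file records held in a test cell database. The record fields below are invented for illustration and are not the real HGL schema.

```python
# A sketch of the loop Hercules runs against each on-line test cell
# database: find files not yet transferred, copy them, mark them done.
def files_to_transfer(records: list) -> list:
    """Return records for files the test cell holds that Hercules lacks."""
    return [r for r in records if not r["on_hercules"]]

def mark_transferred(records: list) -> None:
    """Transfer each pending file and update its database record."""
    for r in files_to_transfer(records):
        # File Transfer Manager would copy the file here, then:
        r["on_hercules"] = True

# Example records as a test cell database might report them.
records = [
    {"file": "run042_ch01.dat", "on_hercules": False},
    {"file": "run041_ch01.dat", "on_hercules": True},
]
mark_transferred(records)
```

After a cycle like this, the transferred files would be queued for archiving to tape on the Hercules side.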
Wrap Up
This is just an overview of the major functions within the HGL System ecosystem. For additional information, see the linked articles below.