### AUSALIB
This is the home of the Aarhus Subatomic Library (AUSALIB), whose purpose is to facilitate the analysis of experimental data. To get started with AUSALIB, see the
* **[AUSALIB manual](Manual)**
* **[AUSALIB tutorial](tutorial)**
A number of other projects hosted on GitLab interact with AUSALIB. A brief overview of the data analysis procedure is given below. For the complete AUSALIB documentation, see http://docs.kern.phys.au.dk/. For a graphical overview, see the bottom of this page.
***
### Unpacking
Experimental data are read out from the acquisition system in a binary format and dumped to disk without any further processing. At this point the data are arranged as pairs consisting of a VME address and a value (proportional to energy, time, etc.). In addition, scaler values are written to the file along with each recorded event.
The first stage of data analysis is to convert the data from referencing VME addresses to referencing the detectors in the setup. This step is referred to as unpacking the data. For this purpose we have an **[unpacker](https://gitlab.au.dk/ausa/ucesb/-/wikis/home)**, which requires as input a .spec file specifying the mapping between VME addresses and physical detector channels. Sensible names for the scaler values can also be assigned at this point.
The output of the unpacking procedure is a ROOT file in which each entry corresponds to a trigger event from the acquisition. At this point the data can already be inspected with ROOT; however, the detector values (energy and time) have no calibrations applied.
***
### Sorting
The purpose of the **[sorting](https://gitlab.au.dk/ausa/sorter/-/wikis/home)** stage is to take the unpacked ROOT file created by the unpacker and create a new ROOT file. During this process calibrations can be applied, timing signals can be aligned (work in progress), front-back matching can be performed on DSSD detectors, and addback can be applied to clover detectors (not implemented yet). At the end of this stage the data should consist of well-defined particle signals (although the identities of the particles may not be known at this point).
This step makes use of a 'matcher', which specifies, for each detector in the setup, what is required for a good particle signal. For example, in a DSSD detector we typically require a signal from both a front and a back strip with similar energies. We may also ignore signals with energies below or above certain thresholds. In some cases it may also be desirable to ignore certain detector segments entirely, due to high rates, noise, electronics problems, etc. The matchers to be used should be specified in a .json file, which can be read in by the sorter.
The sorter also requires information about the detector setup, which should likewise be provided in a .json file. Here the detector properties (energy calibrations, dead layers, geometries, etc.) are specified, and each detector is associated with the correct branches in the unpacked ROOT file.
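
A setup file could look roughly like the following. Every key and value shown here is hypothetical; consult the sorter documentation for the actual schema:

```json
{
  "detectors": [
    {
      "name": "U1",
      "type": "DSSD",
      "calibration": { "slope": 1.02, "offset": 3.5 },
      "deadLayerUm": 0.6,
      "position": { "x": 0.0, "y": 0.0, "z": 42.0 }
    }
  ]
}
```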
***
### Analysis
The final stage of the analysis is the extraction of physical information from the particle signals contained in the sorted ROOT file. AUSALIB contains functionality to assist with this in the form of **[analysers](Analyzers)**.
The advantage of this approach is that the sorted ROOT file can be read in together with the detector setup file. This gives access to the built-in functionality of AUSALIB, which includes methods for calculating [energy losses](https://gitlab.au.dk/ausa/eloss/-/wikis/home), detector geometries, etc. It also permits the setup to be defined once and then used throughout the entire analysis.
### *NEW*
*Starting from October 2016 we are splitting the analysis into two steps: 1) identification and 2) event building. In step 1 we identify particles and correct for energy loss in dead layers and in the target; in step 2 we construct physical events, including reconstruction of missing particles, application of various cuts (e.g. energy and momentum conservation) and kinematic fitting. The philosophy behind this two-step approach is to separate the basic analysis (which can be applied to hits individually) from the more complex analysis (which requires considering the event as a whole). Note that it may often not be possible to assign a definite particle ID in step 1; in such cases all possible IDs are saved, and the final decision must be made in step 2.*
***
### On-line
Sorting and analysis are typically done off-line, after the experiment. For on-line monitoring of incoming data we tend to use **[go4cesb](https://gitlab.au.dk/ausa/go4cesb/-/wikis/home)**.
***
### Simulation
The program [simX](https://gitlab.au.dk/ausa/simx/-/wikis/home) (work in progress) allows you to simulate the kinematics of nuclear reactions and decays, based on a sequential model, as well as the response of the detection system to the emitted particles. Experimental effects, such as energy loss in dead layers, angular straggling, trigger logic and thresholds, can be taken into account. The simulated data are saved to a ROOT file with the same structure as the experimental data, allowing simulated and experimental data to be passed through exactly the same sorting and analysis chain. simX is a useful tool both before and after an experiment: it allows you to optimize the detector set-up, and it aids the data analysis, e.g. by allowing precise determination of detection efficiencies.
***