The guide is divided into the following sections:
[Analysis](#analysis)
#### Hardware
The DAQ runs on a VME computer (__MOT2__) and a trigger module (__VULOM4b__). The VME computer boots via Ethernet from a laptop (__Bumblebee__), which also acts as the final DAQ computer. All user interaction goes through __Bumblebee__.
The VME computer handles the readout of the data from the modules.
__Bumblebee__ can be accessed as the user `is633`. You can access it remotely from any computer through `ssh`:
```bash
ssh is633@bumblebee
```
One can access the VME computer from __Bumblebee__ with `ssh`:
```bash
ssh mot2
```
We use a rather broad palette of different programs and libraries to make everything work. In order not to have to worry too much about the individual parts, we have written __DAQC__, which handles all the different parts and provides a user-friendly interface for control and monitoring. The following list is an extremely brief description of the different components. Most of the time, however, all you need to worry about is __DAQC__, which will be described in the next section.
- `drasi`: The data acquisition (data pump) that handles the transport of the data acquired from our modules. It allows the use of a so-called `f_user.c`, which is a piece of C code that handles the actual readout of the modules. The `f_user.c` is custom made. We run two `drasi` instances: one on the VME computer and an event builder on __Bumblebee__.
- `nurdlib`: The VME library that knows the data layout of a wide variety of modules and how to do an actual data readout. We use this to perform the tasks of our `f_user.c`.
- `trlo II`: Firmware for the VULOM modules. We use this to program our trigger logic.
- `ucesb`: Data unpacking software. We use it mainly for two purposes: to attach to the data stream from the event builder and save the data to disk, and to attach to the same stream and fan it out to multiple online analysis programs. A major advantage of `ucesb` is that it checks online that the data is not corrupt.
- `go4`: Online analysis. We have interfaced this with `ucesb` in a project called `go4cesb`, which we use to run a `go4` server. We can then attach multiple `go4` GUI clients.
- `DAQC`: Data acquisition controller, a program that manages all the different pieces of software that make the DAQ run.
To stress it again: you will mainly need to worry about __DAQC__ and __Go4__, which run on __Bumblebee__.
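As a rough schematic (not part of the original guide, just a summary of the component descriptions above), the data flows like this, with __DAQC__ supervising all of it:
```
VME modules --> drasi (MOT2) --> event builder (Bumblebee)
                                     |
                                     +--> ucesb file writer --> disk
                                     +--> ucesb relay --> go4 server --> go4 GUIs / web
```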
## Prerequisites
...
...
In principle anyone can run the DAQ. We have tried to make it as user-friendly as possible. You should, however, be able to do the following (a few example invocations are sketched after the list):
- Use `ping` to test network connections.
- Run `python` scripts. Also look into `virtualenv`s.
- Use `scp`, `sftp` or `rsync` to copy files between computers.
- Use a text editor to edit files. There are GUI options such as `gedit` on __Bumblebee__, but it may be worthwhile to learn some basic usage of a terminal-based editor such as `vi`, `vim`, `emacs` or `nano`. This is, for instance, our only option on the VME computer.
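For reference, here are minimal examples of the basics above. All remote host names and file paths are placeholders; only `mot2` is from this guide:
```bash
# Check that the VME computer is reachable on the network.
ping -c 3 mot2

# Copy a single file to a (hypothetical) remote computer.
scp ~/data/run0001.lmd myuser@myserver.mydomain.org:/remotedata/

# Mirror a whole directory; rsync only transfers what has changed.
rsync -av ~/data/ myuser@myserver.mydomain.org:/remotedata/

# Run a python script inside an isolated virtualenv
# (assumes virtualenv is installed).
virtualenv ~/venvs/daq
source ~/venvs/daq/bin/activate
python myscript.py
```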
Generally speaking, the color codes are:
- Red: Something is wrong.
- Grey: The service is disabled.
In the __FAQ__ you can learn how to start __DAQC__. So let's assume you have started it up. __DAQC__ provides a pretty web frontend on port `8080`. This means that you can open a browser on any computer on the CERN network (including __Bumblebee__) and type `bumblebee:8080`.
Notice that even though you can access it from anywhere on the CERN network, you need the credentials to be able to change anything.
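If you only want to check from a terminal whether the frontend is reachable, a plain HTTP request does the job. A minimal sketch, assuming `curl` is available:
```bash
# Returns the frontend page if DAQC is up; fails fast otherwise.
curl --max-time 5 http://bumblebee:8080/
```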
The web frontend will show you something like the following:
1. __VME computers*__ (__MOT2__): This box shows you whether the VME computer is online, i.e. if it can be reached from __Bumblebee__.
2. __Readout*__: This box shows the status of the readout, i.e. the event builder on __Bumblebee__ and the `drasi` instance running on the VME computer. The `2/2` means that both nodes are running. The event builder broadcasts the data such that we can attach onto it and save it to a file.
3. __Relay__: The relay is a `ucesb` instance that attaches to the event builder's data stream and fans it out to several data streams. This also validates the data, i.e. if it crashes, the data may be corrupt. The relay provides data to, among others, the online analysis. So if the relay dies, so does the online analysis. The relay is *NOT* responsible for the data taking or recording, and is thus non-essential.
4. __Go4__: The online analysis. We run the online analysis as a server, which means that we do not NEED a GUI. Of course, to see anything, we need one. However, by running the analysis as a server, we can attach multiple GUI instances. Moreover, if a GUI crashes, the online analysis process does not necessarily crash. We can also access Go4's web version in a browser: open a browser and type `bumblebee:5000`, and you will see the online analysis (if it is running, of course).
Notice that __DAQC__ only knows about the server, and not about any of the GUI instances. So if they die, simply reopen them.
...
...
## Go4
To see the online analysis, you can open the dedicated GUI program __Go4__ on __Bumblebee__ by just typing `go4` in a terminal. If you want to watch the online analysis from a remote computer on the CERN network, you can open a browser and type `bumblebee:5000`. This will open a web version of the online analysis.
The GUI looks like this:
...
...
This section gives a step-by-step guide to starting up the acquisition.
6. __Check that the data acquisition is running__: We can now go to the __DAQC__ frontend to monitor everything. Go to a browser and enter `bumblebee:8080`. You should now see __DAQC__. Everything should be green. That means we are happy.
If everything does not turn green within a minute or so, try to restart __DAQC__. This may, in particular, be required if only the crate died, but __DAQC__ kept running on __Bumblebee__.
7. __Check that the shaper settings are loaded__: If the NIM crates' power was off, we should also load the settings for these. This assumes you have saved the settings in `settings.json`. Consult an elog, or someone who has been setting up the shapers, about which file you should be using. Open a terminal and issue
```bash
...
...
```
- __Add synchronization destinations__:
__DAQC__ can automatically copy the raw files and log files to a remote destination. In order to do this, you must do two things:
1. Make sure that you can `ssh` into the remote without any password, i.e. set up key-based authentication using the public key from __Bumblebee__ (`~/.ssh/id_rsa.pub`). A minimal sketch of this step follows after this list.
2. Go to `~/DAQC/settings.yml` and edit the `sync` part to include your remote(s). Say you have two remotes called `myserver1.mydomain.org` and `myserver2.mydomain.org` with users `myuser1` and `myuser2`. To sync the data to the directory called `/remotedata`, edit `settings.yml` to include
```yaml
...
...
```
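For step 1, a common way to set up key-based authentication is `ssh-copy-id`. A minimal sketch, run from __Bumblebee__, using the placeholder remote names from step 2:
```bash
# Generate a key pair, if ~/.ssh/id_rsa does not exist yet.
ssh-keygen -t rsa

# Install the public key on the remote (asks for the password once).
ssh-copy-id myuser1@myserver1.mydomain.org

# Verify: this should now log in without prompting for a password.
ssh myuser1@myserver1.mydomain.org true
```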
You can type `channels` in any terminal to get an overview of the rates in all ADC's and TDC's. It looks something like this:
Since the DAQ consists of so many moving parts, a lot can go wrong. However, a lot of these moving parts are non-essential, i.e. data can still be taken. The readout itself, i.e. `drasi` on __MOT2__ and the event builder on __Bumblebee__, is very, very stable. We have yet to see a proper crash of these.
What is much more likely to cause problems, and probably will during every experiment, is __Go4__. Therefore, this should be your first suspect.
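Before digging deeper, it can help to check the essential parts from a terminal. A quick triage sketch, using only hosts and ports mentioned in this guide:
```bash
# Is the VME computer reachable?
ping -c 3 mot2

# Is the DAQC frontend responding?
curl --max-time 5 http://bumblebee:8080/

# Is the Go4 web version responding? (Non-essential for data taking.)
curl --max-time 5 http://bumblebee:5000/
```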