Operations
to start the DAQ
- We run the DAQ from "a1.gam" (for now)
ssh -X [email protected]
- you may have to issue this command to get access to the proper fonts for the EPICS displays
xset fp+ /usr/share/X11/fonts/75dpi
- Go to the Gretina DAQ top level directory
cd /global/devel/gretTop/11-1/gretClust/bin/linux-x86
- Start the GRETINA DAQ MENU:
$PWD/godaq
For the patience-challenged, a shortcut for the steps above is
xset fp+ /usr/share/X11/fonts/75dpi
(cd /global/devel/gretTop/11-1/gretClust/bin/linux-x86; $PWD/godaq)
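The start-up steps above can be collected into a small helper script. This is only a sketch (the script name and the --exec flag are our own inventions; the paths are the ones quoted above). It defaults to a dry run that prints the commands, so you can check them before running for real:

```shell
#!/bin/sh
# start_daq.sh -- sketch of the DAQ start-up steps above.
# Defaults to a dry run (prints the commands); pass --exec to really run them.
DAQDIR=/global/devel/gretTop/11-1/gretClust/bin/linux-x86

DRY=1
[ "$1" = "--exec" ] && DRY=0

run() {
    # print the command in dry-run mode, execute it otherwise
    if [ "$DRY" = "1" ]; then echo "$*"; else "$@"; fi
}

run xset fp+ /usr/share/X11/fonts/75dpi      # fonts for the EPICS displays
run sh -c "cd $DAQDIR && ./godaq"            # start the DAQ menu
```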
- The MENU below should show up:
---------- MENU ----------
VME              1
Display          2
Run Control      3
Run ControlBGS   3b
Soft IOC         4
Soft IOC2        4b
CAEN PS          5
Cluster          6
Cluster Kill     8
Mode 2           9
Mode 3           10
Mode 4           11
RESET daq InBeam 12
RESET daq Gamma  12b
StripTool        13
Calibrate        14
EXIT             E
KILL ALL         K
- Info: Exiting the MENU does not exit the DAQ. One can return to the MENU anytime by typing $PWD/godaq
- Start things in the following order:
4, 4b, 2, 3, 6
verify you see "CA Connected" (CA: Channel Access) in the RunControl panel
- Press the "cluster" button in the GlobalControl panel to choose the correct data mode (mode 2: decomposed data; mode 3: raw data with traces). If GT is not already set up for mode 2 data taking (producing decomposed data), invoke option 9.
Note: In the Gretina RunControl panel, 'new dir' makes the run start at 0002; switching back to an existing directory makes the run start at the previous run number + 2.
- Press button "global control"
The panel that comes up is used to set global parameters for ALL crystals (baseline, polarity, etc.)
Raw delay: for internal-mode data use 3.7 usec (for analyzing the traces); typically you will see 2.7 usec here.
Raw len: length of the trace in usec. Do not go below 0.2 usec (corresponding to 20 short words; the header is already 16 words long, leaving 4 words for the trace itself). In decomposition mode, use 2.0 usec (corresponding to 16 words for the header and 184 words for the trace).
Threshold setting: typically CC: 1000, segment: 300 (do not go below 300 in the segments and 150 in the CC). In "user trigger" you can set the crystal readout rate up to 20,000; lower this number if the decomposition cluster cannot keep up.
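Thresholds are normally set from the panel, but a small wrapper can guard against accidentally going below the minima quoted above before writing the values with the standard EPICS 'caput' tool. This is a sketch: the PV names below are placeholders, not the real GRETINA PV names.

```shell
# set_thresholds.sh -- sketch: refuse threshold values below the minima
# quoted above (CC >= 150, segment >= 300) before writing them with caput.
set_thresholds() {
    cc=$1; seg=$2
    if [ "$cc" -lt 150 ]; then
        echo "CC threshold $cc is below the minimum of 150" >&2; return 1
    fi
    if [ "$seg" -lt 300 ]; then
        echo "segment threshold $seg is below the minimum of 300" >&2; return 1
    fi
    # placeholder PV names -- substitute the real ones from the panel
    caput CC_THRESHOLD_PV "$cc"
    caput SEG_THRESHOLD_PV "$seg"
}
```

For example, `set_thresholds 1000 300` writes the typical values; `set_thresholds 100 300` is refused.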
- Press the "TimeStamps" button in the Gretina DAQ top level to check whether the timestamps are still synchronized.
imp sync: starts a train of imperative sync signals; pressing the button again stops the train.
single pulse imp sync: one single imperative sync pulse.
- Press the "pseudoScalers" button in the Gretina DAQ top level to verify that data is coming in and being processed.
- rebooting IOCs
ssh gsoper:[email protected]
Find the number in the "timestamp window"; simply count to get the xx value to use.
If you telnet into the IOC and you see tasks suspended, you need to reboot. Simply type "reboot".
After a reboot: push the "RESET after an IOC Reboot" button and then the "TTCS settings" button.
to shut down the DAQ
- Type 8 in the MENU to kill the Cluster
- Type K (Kill All) in the MENU to kill the rest
accessing the IOC consoles
ssh gsoper:[email protected]
you will need to know the password to gain access.
online sorting with GEBSort, to a map file
NOTE: you can only receive on-line data if the decomposition cluster is running. The Global Event Builder, from which GEBSort gets its data, is only active when the decomposition cluster is up and running.
Here is an example of running GEBSort on ws3.gam taking data from the GEB to a map file we can look at while we take data
o ssh -X [email protected]
o Download
cd workdir (e.g., /home/gtuser/gebsort)
svn co https://svn.anl.gov/repos/GEBSort .
or
wget http://www.phy.anl.gov/gretina/GEBSort/AAAtar.tgz
tar -zxvf AAAtar.tgz
on a1.gam you would need to do this as well
export ROOTSYS=/home/users/tl/root/root-v5-34-00
export PATH=$ROOTSYS/bin:$PATH
export LD_LIBRARY_PATH=/home/users/tl/root/root-v5-34-00/lib/root
on ws3 or ws2 you should be fine
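The three exports can be kept in one file and sourced before building. A sketch (the file name is our own; the paths are the ones quoted above, and the LD_LIBRARY_PATH line preserves any existing value rather than overwriting it):

```shell
# rootenv.sh -- source this on a1.gam before compiling GEBSort.
# Adjust ROOTSYS if your ROOT installation lives elsewhere.
export ROOTSYS=/home/users/tl/root/root-v5-34-00
export PATH=$ROOTSYS/bin:$PATH
export LD_LIBRARY_PATH=$ROOTSYS/lib/root${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}

# warn (but do not fail) if the ROOT directory is not actually there
[ -d "$ROOTSYS" ] || echo "warning: $ROOTSYS not found" >&2
```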
o now compile everything
rm curEPICS; ln -s /global/devel/base/R3.14.11 curEPICS
make clean
make
./mkMap > map.dat
rm GTDATA; ln -s ./ GTDATA
o to run and take data, type
#               +-- Global Event Builder (GEB) host IP (or simulator)
#               |          +-- Number of events asked for on each read
#               |          |   +-- desired data type (0 is all)
#               |          |   |  +-- timeout (sec)
#               |          |   |  |
./GEBSort \
   -input geb 10.0.1.100 100 0 100.0 \
   -mapfile c1.map 200000000 0x9ef6e000 \
   -chat GEBSort.chat
you may also just say
go geb_map
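We do not know what the site's 'go geb_map' alias actually expands to; the sketch below simply reproduces the long command line above in a function that prints the command, so the parameters are named and easy to edit:

```shell
# geb_map.sh -- sketch: build the on-line GEBSort command line above.
GEB_HOST=10.0.1.100   # Global Event Builder host IP (or simulator)
NEVENTS=100           # number of events asked for on each read
DATATYPE=0            # desired data type (0 is all)
TIMEOUT=100.0         # timeout in seconds

geb_map_cmd() {
    echo "./GEBSort" \
         "-input geb $GEB_HOST $NEVENTS $DATATYPE $TIMEOUT" \
         "-mapfile c1.map 200000000 0x9ef6e000" \
         "-chat GEBSort.chat"
}

# to actually run the sort: eval "$(geb_map_cmd)"
geb_map_cmd
```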
o to display the data in the map file
rootn.exe -l
compile GSUtils (first line)
sload c1.map file (third line)
update (fourth line and other places)
display
off-line sorting with GEBSort, to a root file
Here is an example of running GEBSort with output to a root file in an off-line situation
o Download the software
cd workdir (e.g., /home/gtuser/gebsort)
svn co https://svn.anl.gov/repos/GEBSort .
or
wget http://www.phy.anl.gov/gretina/GEBSort/AAAtar.tgz
tar -zxvf AAAtar.tgz
o now compile everything
make clean
make
./mkMap > map.dat
rm GTDATA; ln -s ./ GTDATA
Ignore errors about not being able to make 'GEBSort' if you don't have VxWorks and EPICS installed! It will make 'GEBSort_nogeb', which is the version of GEBSort we will use for off-line data sorting where VxWorks and EPICS are not needed (the on-line situation, where you DO need GEBSort, is described above).
o sort your (merged) file as:
./GEBSort_nogeb \
   -input disk merged_data.gtd \
   -rootfile test.root RECREATE \
   -chat GEBSort.chat
note1: for DGS data, GEBSort_nogeb works equally well on the individual files from gtReceiver4 as on the merge of those files created with GEBMerge
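When there are several files to sort, the command above is easily looped. A sketch (the .gtd file names are hypothetical; with DRY=echo it only prints the commands, set DRY= to actually run GEBSort_nogeb):

```shell
# sortall.sh -- sketch: sort every merged .gtd file in the current
# directory into a matching .root file with GEBSort_nogeb.
DRY=${DRY-echo}
for f in *.gtd; do
    [ -e "$f" ] || continue            # no .gtd files present
    out=${f%.gtd}.root                 # merged_data.gtd -> merged_data.root
    $DRY ./GEBSort_nogeb \
        -input disk "$f" \
        -rootfile "$out" RECREATE \
        -chat GEBSort.chat
done
```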
o to display the data in the root file
rootn.exe -l (or 'root -l', since it is just a root file)
compile GSUtils (first line)
dload test.root file (third line)
display
FAQ
misc problem fixes and procedures
- To fix a non-responsive timestamp, i.e. one that just counts and never resets, use the command "caput Cry[#]_CS_Fix 1" where [#] is the bank # of the offending timestamp(s). This will reload the FPGA code and reboot the IOC. Any time an IOC is rebooted, you should press the "RESET after an IOC Reboot" button on the Global Controls edm screen. Then choose the settings type you want, usually TTCS or Validate settings, on the same screen.
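The caput fix above can be wrapped so several banks are handled in one go. A sketch (the function name is our own; it uses the standard EPICS 'caput' tool and the PV name pattern quoted above):

```shell
# fix_timestamp.sh -- sketch: issue the FPGA-reload fix above for one or
# more bank numbers, then print the follow-up reminder.
fix_timestamp() {
    for bank in "$@"; do
        caput "Cry${bank}_CS_Fix" 1    # reloads FPGA code, reboots the IOC
    done
    echo "now press 'RESET after an IOC Reboot', then TTCS or Validate settings"
}
```

For example, `fix_timestamp 3 7` fixes banks 3 and 7.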
- To select the decomposition nodes that should be running (and thus avoid any nodes that are broken), edit the files "setup7modulesRaw.sh" and "setupEndToEnd28.sh" in the directory "/global/devel/gretTop/11-1/gretClust/bin/linux-x86". Then use menu option 8 to kill the cluster, run the scripts you just changed, and use option 6 to bring the decomposition cluster up again. On dogs, type 'wwtop' to see details of which nodes are running.
- To bring up the GT repair buttons: "~/gretinabuttons.tcl". You may have to do that from a1.gam for now, as we have a problem doing some things from the linux WS machines.
Calibration Procedure
NOTE: these instructions were lifted from NSCL, we will have to modify them to the needs at ANL
In broad strokes, the procedure has been to fit source spectra for the central contacts (CC), and then calibrate the segments based on the observed energy in a segment vs. the calibrated CC energy for multiplicity 1 events. There are some simple, and fairly quick unpacking codes which make the necessary ROOT spectra, and then some ROOT macros that actually perform the fitting etc. The procedure is outlined below, and the codes are all located in /home/users/gretina/Calibration/.
CC Calibration
1. Unpack the calibration data (this all assumes it is mode3 data) using the code ./UnpackRaw; the command is
./UnpackRaw <Run#, i.e. 0061>
2. Run the macro to fit the peaks:
root -l
root [0] .L FitSpectra.C
root [1] FitCCSpectra(<.root filename, i.e. "MSUMay2012Run0060.root">, <value for xmin, typically 100>, <Boolean for manual checking: 1 to check, 0 otherwise>, <source type, i.e. "Eu152">);
-- Here the inputs are the .root filename (i.e. "MSUMay2012Run0061.root") which was generated in the unpacking step, a minimum spectrum value to look at (not so important, just pick something low, like 100 or something), a boolean to say if you want to personally approve the fits (CHECK = 1), or just let it go (CHECK = 0), and the source type (options are "Co60", "Bi207", "Eu152", "Co56" and "Ra226" right now).
-- If you run this script with CHECK == 1, you run in a quasi-interactive mode. That is, for each CC, you'll get a chance to look at all of the fits, and to move to the next CC, you must accept the fits by hitting 'y', and then enter. If you see one you don't like, you can enter the peak number, and the fit will be redone, with tweaked parameters. Then you can either accept the peak, or reject it, in which case it's ignored in fitting the calibration. Peaks are numbered in order of increasing energy, and labeled in the ROOT canvas, so you know which number to type for a given fit. The only thing to note in this is that if peaks are < 20 keV apart, they are fit together. In this case, if peaks 7 + 8 are fit together, there will only be a spectrum shown in the position corresponding to peak 7. If you're unhappy with the fit though, and refit 7, it will only fit peak 7. If you want peak 8 retried as well, you need to specify this additionally.
-- Regardless of the value of CHECK, if the code can't find a peak, it'll ask you if you see one. If you do, you can enter the approximate channel number, and a fit will be attempted. If you don't, entering -1 will just leave out the calibration point.
-- Output from the macro:
paramCC<ROOTFilename>.dat (i.e. paramCCMSUMay2012Run0061.dat) -- contains all fit parameters
meanCC<ROOTFilename>.dat (i.e. meanCCMSUMay2012Run0061.dat) -- contains just the centroids of the fits
3. After you have fit all the source spectra you're interested in, you can fit the calibration with the ROOT macro:
root -l
root [0] .L FitCalibration.C
root [1] FitCalibration(<number of calibration files, i.e. 3>,
                        <first .root file, i.e. "MSUMay2012Run0060.root">,
                        <second .root file, i.e. "MSUMay2012Run0071.root">,
                        <third .root file, i.e. "MSUMay2012Run0070.root">,
                        <fourth .root file, "" if only using three files>,
                        <CCONLY = 1 if only fitting CC>,
                        <DETMAPS = 0 for no detector maps>);
-- This literally fits the calibration points with a straight line, and calculates the residuals, etc. of the fit. Input to the macro is the number of files you want to use in the calibration (number), the .root filenames (i.e. "MSUMay2012Run0061.root"), a boolean to say if you only want to fit the CC (answer should be yes, so CCONLY = 1), and a boolean to see if you want to make the detector map files for decomposition (at this stage with only CC calibrations, no). Note, if you use less than 4 calibration files, for the unused filenames, just put "".
-- Output from the macro:
fitFile<ROOTFilename#1>.root (i.e. fitFileMSUMay2012Run0061.root) -- contains the calibration fit and residual plots
finalCalOutput<ROOTFilename#1>.dat (i.e. finalCalOutputMSUMay2012Run0061.dat) -- contains three columns: ID, offset and slope. This is the file I rename (EhiGainCor-CC-MMDDYYYY.dat) and use for CC calibration in unpacking codes.
finalDiffOutput<ROOTFilename#1>.dat (i.e. finalDiffOutputMSUMay2012Run0061.dat) -- contains residual information in text format
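The three-column calibration file can also be applied by hand for a quick check. A sketch (the function name is our own, and it assumes the usual linear convention E = offset + slope * channel, which you should verify against the unpacking code):

```shell
# apply_cal.sh -- sketch: look up one detector ID in a finalCalOutput*.dat
# file (columns: ID, offset, slope) and convert a raw channel value to an
# energy, assuming E = offset + slope * channel.
apply_cal() {
    calfile=$1; id=$2; chan=$3
    awk -v id="$id" -v ch="$chan" \
        '$1 == id { printf "%.3f\n", $2 + $3 * ch; found = 1 }
         END      { if (!found) exit 1 }' "$calfile"
}
```

For example, `apply_cal finalCalOutputMSUMay2012Run0061.dat 5 1000` prints the calibrated energy for channel 1000 of detector ID 5, and fails if the ID is not in the file.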
Segment Calibration
1. Edit the unpack-m1.C source code (in the subdirectory src/), and change the line that reads in the calibration to read the new CC calibration that you've created.
RdGeCalFile("<Name of CC calibration - i.e. finalFitFile<>.dat>", ehiGeOffset, ehiGeGain);
2. Remake the code, just type make. This replaces the executable in the directory above.
3. Run the code UnpackMult1 on the data you're using for the segment calibration. Typically, pick something with enough statistics to see something in the back segments.
./UnpackMult1 <Directory, i.e. MSUMay2012> <Run Number, i.e. 0061>
-- This code takes a while, because it scans the file 28 times, to calibrate each crystal one at a time. This is because it's using 2D ROOT matrices, and I can't make the memory usage work properly to do multiple crystals at the same time. It always has a memory allocation problem. If someone knows how to fix this, by all means, please do. Let Heather know how you fix it too =)
-- The code is essentially sorting segment multiplicity 1 events, and using the calibrated CC to calibrate the segment energies. Right now, it's looking at the 5MeV CC to do this. You can change this in the code, if desired. At the end of the code, it then fits the plot of CC vs. segE, and extracts a slope and offset. This is actually then written out as a completed calibration file (see output information).
-- Output files from code:
<ROOTFilename>Crystal##.root, where ## goes from 1 to 28 -- .root files containing the 2D plots for each crystal, as well as projections
<ROOTFilename>.slope -- contains channel ID (electronics ID), offset and slope -- the finished calibration. This is the file that is usually renamed to EhiGainCor-MMDDYYYY.dat and used in codes.
Detector Map and Trace Gain Generation for Decomp
1. Edit the ROOT macro file FitCalibration.C, specifically the function MakeDetectorMaps(), to read in the appropriate calibration file. Also edit the destination directory for the generated detector map files, which appears in a few places...
calibrationFile = fopen("<CalibrationFilename.slope>", "r");
...
system("mkdir <DestinationDirectoryName>");
...
detmap_file = fopen(Form("<DestinationDirectoryName>/detmap_Q%dpos%d_CC%d.txt", quad, position, ccnum), "w");
...
tracegainfile = fopen(Form("<DestinationDirectoryName>/tr_gain_Q%dpos%d_CC%d.txt", quad, position, ccnum), "w");
2. Run the MakeDetectorMaps() macro.
root -l
root [0] .L FitCalibration.C
root [1] MakeDetectorMaps()
3. Detector map and trace gain files should be automatically generated in the directory specified within the macro.
Statistics required for calibration
* 56Co, mode 3, validate, trace 0.2us --- 8 GB = ~15 minutes
* 152Eu, mode 3, validate, trace 0.2us --- 6 GB = ~15 minutes
* 226Ra, mode 3, validate, trace 0.2us --- 8 GB = ~15 minutes
Statistics required for checking segments resolution
* 60Co (10 uC), mode 3, validate, trace 0.2us --- 30 GB = ~60 minutes --- 500 cts per segment of the rear layer