The goal of our study is to determine the dependencies of the y-momentum and spin-momentum correlations in the stored beam on various injection parameters. In particular we will explore dependencies on:
Twiss parameters at T0 (the T0 detector is immediately upstream of the hole into the backleg iron of the g-2 magnet)
Correlations in the initial distribution
Injection trajectory
Introduction
There are three basic steps to the procedure of preparing data sets for analysis.
Running the Bmad-based tracking program 'g2_tracking'
The program 'g2_tracking' propagates the phase space coordinates and the spin of a distribution of muons through the injection channel and into the storage ring. A subset of those
particles is captured in the ring. The output is phase space and spin coordinates (as well as times, etc.) 'measured' at specific times and/or at specific locations after some designated
number of revolutions around the ring. The output is written to text files. Additional output includes moments of the phase space coordinates.
Compilation and preliminary analysis of the program output
In order to collect sufficient statistics (numbers of stored particles) it is most efficient to run multiple jobs in parallel. A variety of programs are available for compiling the
output of the multiple runs, appropriate to different studies. We will use
energy_spin_vs_time.f90 for our study.
The program:
Combines the data from multiple runs
Determines time dependence of spin
Writes phase space files that include only those particles that are eventually stored
Running the program
Create a working directory. We will call it 'working'. (The program generates a lot of output, ~13 GB, so before proceeding
be sure that there is sufficient disk space available.)
Additional files common to all runs are accessed via soft links. To create the soft links, in the working directory (/working/) type
/home/dlr/development9_linux/g-2/softlink_lnx (and then carriage return) on the command line
Parameter input file
(input.dat or input_ref.dat if using submit_20.sh batch submission script)
There are many parameters definable via the input file.
For the moment we restrict ourselves to the following:
Parameters of the distribution to be tracked and duration of tracking
nturns = 200
The number of turns around the ring. Only a fraction (about 1%) of the injected particles are stored. A particle is 'stored' if it survives until the conclusion of the quad scraping process, or about 30 microseconds (201 turns)
nmuons = 100000
The number of muons pulled from the initial distribution and propagated through the injection channel and into the ring
muon_first = 0
The muons are pulled from a file generated by an upstream simulation (the initial distribution). There are about 1.4 million
particles in the file currently available to us. It is convenient to run jobs with 'nmuons ~ 100000'. Then in order to sample all
of the particles in the file, we start successive runs with muon_first = 0, then muon_first=100000, etc.
twiss :12-real - betax, betay, alphax, alphay, etax, etapx, etay, etapy, phix, phiy, gammax, gammay at reference point
We will be varying betay (here equal to 4.0) and alphay (here equal to 3.0)
In order to set the twiss parameters of the distribution at the start of tracking (a sketch of the matching matrix appears after the check below):
Compute the twiss parameters of the input distribution
Create a transfer matrix, T, that propagates from the twiss parameters of the input distribution to the desired twiss parameters
Propagate each particle in the input distribution with T
As a check, compute the twiss parameters of the propagated distribution and write them to the terminal.
When running in batch mode, the data written to the terminal appears in 7#####/log.dat.
Use grep -5 AfterPropagatingFromTwiss1ToTwiss2 7######/log.dat to confirm that the twiss parameters have been properly modified.
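The matching matrix of step 2 can be constructed from the two sets of Twiss parameters. The sketch below shows one standard construction for the vertical plane, assuming a zero-phase-advance transformation; the function and variable names are illustrative, not taken from g2_tracking:

import numpy as np

def twiss_match_matrix(beta1, alpha1, beta2, alpha2):
    # 2x2 matrix that carries a distribution with Twiss parameters
    # (beta1, alpha1) to (beta2, alpha2) at zero betatron phase advance
    return np.array([
        [np.sqrt(beta2 / beta1), 0.0],
        [(alpha1 - alpha2) / np.sqrt(beta1 * beta2), np.sqrt(beta1 / beta2)],
    ])

# Example (illustrative numbers): match to the input-file values betay = 4.0, alphay = 3.0
T = twiss_match_matrix(beta1=7.0, alpha1=0.1, beta2=4.0, alpha2=3.0)
# Each particle is then propagated as (y, yp) -> T @ (y, yp)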
Kicker parameters
kickerFieldType = 2
The kicker field is computed using a map (kickerFieldType = 2) or from a multipole expansion (kickerFieldType=1)
kicker_sextupole_scale_factor = 1
kicker_sextupole_scale_factor scales the sextupole component of the kicker field:
the sextupole term in the multipole expansion is multiplied by kicker_sextupole_scale_factor (nominal kicker field if set to unity). Effective only if kickerFieldType = 1.
Settings for computing dφ/d(Δp/p) vs time
time_bin_width = 13.e-9
The phase space coordinates and spin coordinates of each particle are sorted into time bins with width time_bin_width
pbin_width = 0.0002
The spin coordinates of each particle in each time bin are sorted into momentum bins of width pbin_width
time_binning = 'momentum_slice'
Instructs the program to sort the spin of each particle into momentum and time bins and to fit φ vs dp/p (sketched below)
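As an illustration of what this setting computes in each time bin (a minimal Python sketch, not the Fortran implementation; the array names delta and phi are hypothetical):

import numpy as np

def dphi_ddelta(delta, phi, pbin_width=0.0002):
    # Within one time bin: average the g-2 phase phi in momentum bins
    # of width pbin_width, then fit the bin means linearly vs dp/p
    ibin = np.floor(delta / pbin_width).astype(int)
    centers = [(i + 0.5) * pbin_width for i in np.unique(ibin)]
    means = [phi[ibin == i].mean() for i in np.unique(ibin)]
    slope, intercept = np.polyfit(centers, means, 1)
    return slope  # dphi/d(dp/p) for this time bin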
Test run
In order to be sure that all relevant files are in place and that the input file is properly formatted it is good to run a test before submitting jobs to the compute grid
The program (g2_tracking) reads the file
input.dat - (This is where you set nturns, nmuons, muon_first, etc.)
To run the program: type
/home/dlr/development9_linux/production/bin/g2_tracking - followed by a carriage return
Note that each time the program is initiated, a subdirectory is created named as the date and time. All of the output files are written to this subdirectory.
Also, the input.dat file and the lattice files are written to the subdirectory for future reference.
Batch submission
The script g2.sh runs the executable g2_tracking in the appropriate directory. It should be modified so that it starts the jobs in your working directory
The working directory is set with the command cd /nfs/gm2/data2/dlr10/g-2/mytest/ydelta_correlation/example/
Replace this with the directory in which you are running the program, e.g. cd /nfs/acc/user/jsg344/working/
To submit a single job to the compute grid use the command qsub g2.sh
To check on the status of the submission use the command
qstat
And to kill a job that you have submitted
qdel 'job number' (which you get with the qstat command)
To get through all the 1.4 million particles in the 'muon_file' it is convenient to take advantage of the CLASSE compute grid.
There are a couple of hundred nodes in the grid. There is a quota of ~60 nodes per user at any one time.
We might submit 19 jobs with nmuons and muon_first so designated
nmuons = 75000
muon_first = 0
nmuons = 75000
muon_first = 75000
nmuons = 75000
muon_first = 150000
........................
nmuons = 75000
muon_first = 1350000
The process of submission of multiple jobs, each initialized with the next value in the sequence of muon_first specifications, is automated with the script submit_20.sh. The script (see the sketch after this list):
Sets a counter to count the number of jobs that have been submitted
Rewrites the input reference file ref_input.dat with the appropriate value of muon_first to a new version of input.dat
Note: g2_tracking reads from the file input.dat. The script submit_20.sh creates a new
input.dat file for each submission of g2_tracking. It creates this file by copying ref_input.dat to input.dat with the replacement of the line that specifies muon_first. When you make modifications to the input file,
be sure to modify ref_input.dat rather than input.dat. For a test run (running g2_tracking from the terminal) with a modified ref_input.dat, first copy ref_input.dat to input.dat
Submits a job using the command qsub (with some qualifiers) g2.sh
Increments the job counter
Waits for 30 seconds to be sure the last submission has read input.dat
Returns to step 1
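submit_20.sh itself is a shell script; the following Python sketch shows the equivalent loop (the qsub qualifiers are omitted, and the exact format of the muon_first line in ref_input.dat is an assumption):

import subprocess, time

NMUONS, NJOBS = 75000, 19
for job in range(NJOBS):
    muon_first = job * NMUONS
    # Copy ref_input.dat to input.dat, replacing the muon_first line
    with open('ref_input.dat') as f:
        lines = f.readlines()
    with open('input.dat', 'w') as f:
        for line in lines:
            if line.strip().startswith('muon_first'):
                line = f' muon_first = {muon_first}\n'
            f.write(line)
    subprocess.run(['qsub', 'g2.sh'], check=True)  # qualifiers omitted
    time.sleep(30)  # let the last submission read input.dat before it is overwritten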
Edit the script to set muon_first for each of the jobs. (Edit ref_input.dat to set nmuons so that it is consistent with the number of jobs.)
Take note of the line in the script that submits jobs
In your copy of submit_20.sh replace /nfs/gm2/data2/dlr10/g-2/mytest/ydelta_correlation in both of the places that it appears with a directory of your own. The -e and -o qualifiers direct
error message files (g2.sh.e####### and g2.sh.o#######) to the indicated directory. These files are generally very small and useless. You should remember to delete them when the jobs have run to completion.
In order to execute the batch submission script type
./submit_20.sh g2.sh
Monitoring batch jobs
To check on the status of submitted jobs use the command: qstat
Each job creates a new directory named by its starting date and time. Use the command
ls -d 2024* to get a list of dated subdirectories
To check that muon_first is correctly set in the input.dat file in each of the subdirectories use the grep command
grep muon_first 2024*/input.dat
Note that each batch submission also creates a subdirectory named by the job ID number. The terminal output is all directed to the log.dat file in the subdirectory
On completion, the last line of log.dat (in the job number subdirectory) is the job CPU run time. Try the command grep 'CPU time' 7*/log.dat to get the CPU times
If for some reason you want to restart the submission process
Use the qstat command to identify all of the job numbers
Use qdel 'job number' to kill the jobs, one at a time
Use the commands rm -r 2024* and rm -r 7* to delete the subdirectories
Then you are able to try again with a clean slate.
Compiling the output with energy_spin_vs_time
In the input file (energy_spin_vs_time_input.dat), specify the range of directories with output to be collated
For example:
If dir_min = '20241010_061836' and dir_max = '20241010_061932', the data in those two directories as well as all directories with intervening times will be included.
If dir_min = '00000000_000000' and dir_max = '99999999_999999', the data in all of the directories will be included.
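Because the directory names are formatted as YYYYMMDD_HHMMSS, this selection is just a lexicographic comparison, e.g. (illustrative Python, not the program's code):

import glob

dir_min, dir_max = '20241010_061836', '20241010_061932'
dirs = sorted(d for d in glob.glob('[0-9]*_*') if dir_min <= d <= dir_max)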
Specify the number of phase space files to include in the analysis and a list of the files
nargs = 20
phase_space_file() - Note that phase space files labeled with times (i.e. phase_space_at_01098_ns.dat) must appear in temporal order in the list
Run the program
/home/dlr/development9_linux/production/bin/energy_spin_vs_time (The program can take many minutes to run depending on the size of the data sets and the number of directories)
The output phase space files generated by energy_spin_vs_time have the prefix all_, indicating that those files include
data from the entire set of subdirectories specified in the file energy_spin_vs_time_input.dat
The suffix <dat> indicates that the file is simply a concatenation of the corresponding files in the specified subdirectories
The suffix <dat_trimmed> indicates that the phase has been adjusted as necessary by adding or subtracting 2π so that it is within π of the mean phase (see the sketch below)
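The trimming is equivalent to the following (a minimal sketch; the actual adjustment is done by energy_spin_vs_time):

import numpy as np

def trim_phase(phi):
    # Add or subtract multiples of 2*pi so that every phase
    # lies within pi of the mean phase
    phi = np.asarray(phi, dtype=float)
    return phi - 2*np.pi*np.round((phi - phi.mean()) / (2*np.pi))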
The files spin_vs_momentum_2.6013E-05.dat and the like list g-2 phase in momentum bins at the named time.
The quantity dφ/d(Δp/p) is computed for the muons in each time bin by the main tracking program <g2_tracking>.
The data is written to the file <domega_vs_time.dat> The program <compile_domega_vs_time.f90> combines the
data from all of the subdirectories. The only input to the program is the range of subdirectories to include. The input is on the command line. For example:
/home/dlr/development9_linux/production/bin/compile_domega_vs_time 20241010_061836 reads the data from the single directory
/home/dlr/development9_linux/production/bin/compile_domega_vs_time 20241010_061836 20241010_061932 reads the data from both directories as well as from all directories with intermediate dates
/home/dlr/development9_linux/production/bin/compile_domega_vs_time with no input on the command line reads data from all of the subdirectories
The files from each run named <domega_ddelta_vs_time.dat> are combined with the program <compile_domega_vs_time>
into <combined_domega_ddelta_vs_time.dat>.
The three columns of the file <combined_domega_ddelta_vs_time.dat> are
time, dφ/d(Δp/p) and the uncertainty δ(dφ/d(Δp/p))
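The combined file can be inspected with a few lines of matplotlib (a sketch, assuming the three whitespace-separated columns described above):

import numpy as np
import matplotlib.pyplot as plt

# time, dphi/d(dp/p), and its uncertainty
t, slope, err = np.loadtxt('combined_domega_ddelta_vs_time.dat', unpack=True)
plt.errorbar(t, slope, yerr=err, fmt='.')
plt.xlabel('time')
plt.ylabel('dphi/d(dp/p)')
plt.savefig('domega_ddelta_vs_time.pdf')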
Fit momentum vs vertical amplitude (or vertical offset) and determine fit coefficients versus time
The program reads all of the files <phase_space_at_time_ns.dat_trimmed> and fits
momentum versus vertical amplitude with the quadratic function f(y_amp) = a+b(y_amp)+c(y_amp)^2.
The vertical amplitude is defined as y_amp = y^2/betay + yp^2*betay
Output is written to unit 600, with fit parameters and errors versus time.
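In Python terms the fit reduces to the following (a sketch; the array names y, yp, and delta are assumptions):

import numpy as np

def fit_p_vs_yamp(y, yp, delta, betay):
    y_amp = y**2 / betay + yp**2 * betay   # vertical amplitude
    c, b, a = np.polyfit(y_amp, delta, 2)  # polyfit returns the highest power first
    return a, b, c                         # f(y_amp) = a + b*y_amp + c*y_amp**2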
The phase space files <all_phase_space_at_time_ns.dat_trimmed> are compared with each other, and all muons are removed that do not survive to the time of
the reference file (usually chosen as the file with the next-to-latest time; the latest time is often incomplete).
The suffix <dat_trimmed_sorted> indicates a trimmed file that includes only those muons that appear in the reference file. Each of these files
contains precisely the same muons, albeit at different times. This is true also for the phase space at the inflector exit <all_MARK_INFLECTOR_DS_phase_space.dat_trimmed_sorted>.
This file therefore includes only those muons that survive to the reference file time.
Plotting
Python
Setting up the virtual python environment.
See Python - linux: Look for 'Using a Python Virtual Environment, with Python 3.6.8'
After creating the virtual environment, install astropy and matplotlib
pip3 install astropy
pip3 install matplotlib
dp/p vs y
Use the python script
/home/dlr/development9_linux/g-2/python/plot_averagey_vs_x.py to plot phase space data ( dp/p vs y) including:
dp/p vs y - scatter plot
N vs dp/p, y - 2D histogram
< dp/p > vs y - fitted with quadratic
Examples
python /home/dlr/development9_linux/g-2/python/plot_averagey_vs_x.py "all_phase_space_at_#####_ns.dat"
As written, the script generates a pdf of < dp/p > vs y - fitted with quadratic as momentum_vs_y.pdf
dp/p vs y_amp = y^2/β + y'^2 β
Alternatively, use the script /home/dlr/development9_linux/g-2/python/plot_p_vs_yamp.py to plot phase space data (dp/p vs y_amp, where
y_amp = y^2/β + y'^2 β and β = 20) including:
Examples
python /home/dlr/development9_linux/g-2/python/plot_p_vs_yamp.py "all_phase_space_at_#####_ns.dat"
Input is a filename: either domega_ddelta_vs_time.dat or, if directories have been combined by running energy_spin_vs_time.f90, combined_domega_ddelta_vs_time.dat
As long as the jobs run to completion it is safe to remove the job-number-labelled subdirectories.
A good indicator that a job has run to completion is that the last line in the log.dat file contains the string CPU time. Check all at once using the command grep 'CPU time' 7*/log.dat
The program compute_correlations uses the Cholesky factor (the lower triangular matrix) of the computed covariance matrix to transform random gaussian distributions so that they include the computed covariance (see the sketch below).
To run the program type $pb/compute_correlations <file name of distribution to read> <number_events> and then press enter
At present the parameter number_events specifies both the number of events read from the file, from which the covariance
matrix is computed, and the number of events generated based on the transformation.
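The transformation is the standard Cholesky construction; a minimal sketch, assuming the events are the rows of a (number_events x n_coordinates) array:

import numpy as np

def correlated_gaussians(sample, number_events, seed=None):
    # Generate number_events gaussian vectors whose covariance matrix
    # matches the one computed from `sample` (rows = events)
    rng = np.random.default_rng(seed)
    mean = sample.mean(axis=0)
    cov = np.cov(sample, rowvar=False)   # covariance computed from the sample
    L = np.linalg.cholesky(cov)          # lower triangular, cov = L @ L.T
    z = rng.standard_normal((number_events, sample.shape[1]))
    return mean + z @ L.T                # each row now carries the computed covariance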
Tips
If you add the following line to /home/jsg344/.bashrc