sbppage
May 2-3 xbus errors Here
Remote Tuning Project list Here
chopper vxincn error rate Here
Earlier had modified the chopper error testing program: diagnose/programs/annoy2.f90
$CESR_CONFIG/diagnose/annoy2.choices lists the modes of testing, by operation, mnemonic, and num
annoy2 appends results and conditions (sleep times, step size) to the local file annoy.out.
diagnose/programs/annot_plot.f90 then displays error rate vs step size on a log-log plot
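The log-log fit that annot_plot displays amounts to a power-law slope over (step size, error rate) pairs. A minimal Python sketch, assuming a plain list of pairs (the real annoy.out layout is not shown here, and power_law_slope is a hypothetical name):

```python
# Hedged sketch of the annot_plot idea: fit a power law to
# (step size, error rate) pairs on log-log axes.
import math

def power_law_slope(pairs):
    """Least-squares slope of log(rate) vs log(step): rate ~ step**slope."""
    xs = [math.log(s) for s, r in pairs]
    ys = [math.log(r) for s, r in pairs]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic check: rate proportional to step**2 gives slope 2
data = [(1.0, 0.01), (2.0, 0.04), (4.0, 0.16), (8.0, 0.64)]
print(round(power_law_slope(data), 3))  # 2.0
```

On real annoy.out data the fitted slope summarizes how fast the error rate grows with step size.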
Datalogger transition Here
SRF Processing Here
Sampling Scope Here
Chess bumping Here
Lintune Here
beam Here
burp Here
synt Here
loss Here
wxf Here
tau Here
orbit Here
tune Here
2013 syn ac Here
h25 viola Here
sec8hv Hysteresis
damp vs tune = Here
b1 5 damp = Here
tune1 xfr Temp
tune2 xfr Temp
temp xfr Temp
temp1 xfr Temp1
temp2 xfr Temp2
damp xfr b1
damp xfr b3
damp xfr b5
dead z break sept 23 2020
dead z break jan 2020
Dead zone feb 24 2023
Zero Jump feb 24 2023
e+ kly ph 5-8 @4 May here
SRF later 4 May 2021 here
l3rad 4 May 2021 here
gen_gui sampsc Here
gen_gui chs_bmp Here
10 June 2018: H tune noise L3 : H tune signal from L3.
Data taken with program auto_char/get_hnoise.f90
scope setup is $CESR_SCRIPTS/setup_noise.
scope set centered 284, span 20 kHz narrowband zoom fft.
As found here
Spectrum dominated by ~145.4 kHz and harmonics, but many side peaks as above
Terminate "V" coax in L3 spur here
Spect analyzer with open input here
L3 input and output here
L3 processing chassis here
Detail of first processing circuit here
24 Oct 2017
$ more dirgrow.out
compare sortdir.2017aug22 with 24 Oct data
/nfs/cesr/online/lib/classlib 113 112 1 Mbytes
/nfs/cesr/online/www/html 1505 1503 2 Mbytes
/nfs/cesr/online/ms/sched 81 78 3 Mbytes
/nfs/cesr/online/acc_control/program_info 602 588 14 Mbytes
/nfs/cesr/online/lib/tools 566 548 18 Mbytes
/nfs/cesr/online/www/elog 4272 4245 27 Mbytes
/nfs/cesr/online/machine_data/savesets 23139 23107 32 Mbytes
/nfs/cesr/online/machine_data/lattice 228 194 34 Mbytes
/nfs/cesr/online/instr/CBPM 18555 18498 57 Mbytes
/nfs/cesr/online/machine_data/meas 13744 13680 64 Mbytes
/nfs/cesr/online/instr/log 21949 21778 171 Mbytes
/nfs/cesr/online/instr/xbsm_daq 1064 549 515 Mbytes
/nfs/cesr/online/acc_control/bin 28879 26544 2335 Mbytes
/nfs/cesr/online/machine_data/logging 440709 431763 8946 Mbytes
/nfs/cesr/online/lib/Linux_x86_64_intel 787931 752227 35704 Mbytes
/nfs/cesr/online/instr/data 2744985 2547179 197806 Mbytes
20 sept 2017
Created diagnose/programs/dirgrow.f90 to compare disk usage at two times.
It uses outputs from diagnose/programs/dirsor.f90, which in turn uses the output of:
du -b --max-depth=2 /nfs/cesr/online > online_dir
Note dirsor only shows Megabytes (avoiding the 2**32 problem), so only
growth > 1 Mbyte gets displayed by dirgrow. Also a reminder that dirsor only
outputs subdirectories 2 levels below the specified dir (= online in this case).
$ more dirgrow.out
compare sortdir.2017aug22 with sortdir.2017sep19
/nfs/cesr/online/machine_data/savesets 23108 23107 1 Mbytes
/nfs/cesr/online/lib/classlib 113 112 1 Mbytes
/nfs/cesr/online/machine_data/lattice 196 194 2 Mbytes
/nfs/cesr/online/instr/log 21781 21778 3 Mbytes
/nfs/cesr/online/acc_control/program_info 595 588 7 Mbytes
/nfs/cesr/online/lib/tools 559 548 11 Mbytes
/nfs/cesr/online/instr/xbsm_daq 1064 549 515 Mbytes
/nfs/cesr/online/machine_data/logging 435670 431763 3907 Mbytes
/nfs/cesr/online/lib/Linux_x86_64_intel 770007 752227 17780 Mbytes
/nfs/cesr/online/acc_control/services 435495 20219 415276 Mbytes
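The dirgrow comparison could look roughly like the Python sketch below. The "&lt;Mbytes&gt; &lt;directory&gt;" input format is an assumption, and load_sizes/growth are hypothetical names, not the real dirgrow.f90 code:

```python
# Minimal sketch of the dirgrow idea: diff two dirsor-style listings
# and report directories that grew by at least min_mb Mbytes,
# smallest growth first (matching the dirgrow.out ordering above).
def load_sizes(lines):
    sizes = {}
    for line in lines:
        mb, path = line.split()
        sizes[path] = int(mb)
    return sizes

def growth(old_lines, new_lines, min_mb=1):
    old = load_sizes(old_lines)
    new = load_sizes(new_lines)
    rows = [(path, new[path], old.get(path, 0)) for path in new
            if new[path] - old.get(path, 0) >= min_mb]
    rows.sort(key=lambda r: r[1] - r[2])
    return [f"{p} {n} {o} {n - o} Mbytes" for p, n, o in rows]

old = ["548 /nfs/cesr/online/lib/tools",
       "588 /nfs/cesr/online/acc_control/program_info"]
new = ["559 /nfs/cesr/online/lib/tools",
       "595 /nfs/cesr/online/acc_control/program_info"]
for row in growth(old, new):
    print(row)
```

With the sample data above this prints program_info (grew 7 Mbytes) before tools (grew 11 Mbytes).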
Timing vxgetn VACUUMSTATUS non-opt (irp 3)
Timing vxgetn VACUUMSTATUS optimised (12)
diagnose/programs/error_dis.f90
Ring distrib Using error_dis
Tuesday 16 May after orbmon fix :
Timing vmgxbr CSR TC MON MPM40-60-69-102
Tuesday 16 May downday test : 69 is 6x slower
Timing vmgxbr CSR TC MON MPM40-60-69-102
As of 22 aug 2017 these are the biggest (2 levels down) directories in cesr_online
( du -b --max-depth=2 /nfs/cesr/online > file )
then ran ~sbp/one/dirsor to arrange dirs with 5 "/" by size and add the total at the end.
Only directories 2 levels down from online, clipping at a "mere" Gigabyte
Megabytes
1152 /nfs/cesr/online/mslog/impedance
1503 /nfs/cesr/online/www/html
2111 /nfs/cesr/online/acc_control/mpmdb
2556 /nfs/cesr/online/mslog/cesrta
3902 /nfs/cesr/online/ms/mslog
4173 /nfs/cesr/online/group/vacuum
4245 /nfs/cesr/online/www/elog
9361 /nfs/cesr/online/group/soft
13680 /nfs/cesr/online/machine_data/meas
14410 /nfs/cesr/online/instr/xbsm_tools
18228 /nfs/cesr/online/acc_control/embedded
18498 /nfs/cesr/online/instr/CBPM
20219 /nfs/cesr/online/acc_control/services
21778 /nfs/cesr/online/instr/log
23107 /nfs/cesr/online/machine_data/savesets
26266 /nfs/cesr/online/instr/anal
26544 /nfs/cesr/online/acc_control/bin
431763 /nfs/cesr/online/machine_data/logging
752227 /nfs/cesr/online/lib/Linux_x86_64_intel
2547179 /nfs/cesr/online/instr/data
3991867 /nfs/cesr/online/
Ie ~4 Terabytes total
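The dirsor step (select directories with 5 "/", convert bytes to Megabytes, sort by size, append a total) might be sketched as follows. This is an illustrative Python stand-in for ~sbp/one/dirsor, and the summed total is a simplification of the du grand total in the listing above:

```python
# Sketch of the dirsor step, assuming raw "du -b --max-depth=2"
# output lines of the form "<bytes>\t<directory>", where 5 slashes
# selects directories two levels below /nfs/cesr/online.
def dirsor(du_lines, depth_slashes=5):
    rows = []
    for line in du_lines:
        size, path = line.split()
        if path.rstrip("/").count("/") == depth_slashes:
            rows.append((int(size) // 2 ** 20, path))  # bytes -> Mbytes
    rows.sort()  # ascending by size, like the listing above
    total = sum(mb for mb, _ in rows)
    return [f"{mb} {p}" for mb, p in rows] + [f"{total} total"]

sample = ["2097152 /nfs/cesr/online/a/b",
          "1048576 /nfs/cesr/online/a/c"]
for row in dirsor(sample):
    print(row)
```

Lines for shallower directories (fewer than 5 slashes, such as /nfs/cesr/online itself) are skipped, which is why the clipped listing needs its own grand-total line.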
After April 2017 xbgetn cleanup and moretime improvements:
Compare mpm only /non-Xbus ops on un/loaded RTEMS (40 is unloaded)
Timing vmgxbr CSR TC MON MPM40-60-69-102
  (VS 8 copies of tchop using MPM5500-102)
Timing vxgetn CSR QUAD CUR MPM40/60
Below is older stuff
Operation |      VMS      |   RTEMS-40   |  RTEMS-102   |    RTEMS-69
Elements  |    1  |   98  |   1  |   98  |   1  |   98  |     1  |    98
vxgetn    | 2900  | 24000 | 1880 | 8800  | 1630 | 9520  | 17400  | 43150
vmgxbr    |   67  |   783 |   90 |  260  |  170 | 1190  |  8900  | 11300
Nov 18 moretime vxgetn/vmgxbr
CSR QUAD CUR CESR2A
CSR QUAD CUR LINUX CESR202
After removing mpm5500-40 slow code
CSR QUAD CUR LINUX CESR104
CSR TC MON CESR2A
Increased time/ele 240 -> 445us: mostly from longer wait for distrib card
Put pre-emptive sleep on mpm40 before doing busy wait: make vxgetn quick with eff 0 to 8 on unloaded Here
Timing vxgetn/vmgxbr CSR TC MON Linux
Timing vxgetn/vmgxbr CSR TC MON VMS
eff unchanged on unloaded Here
Effect of kludge for 22-47 ele range Here
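The pre-emptive-sleep idea (sleep through the known minimum latency so the busy loop only spins for the short uncertain tail) can be illustrated with a toy sketch; wait_ready is a hypothetical helper, not the actual MPM code:

```python
# Illustrative sketch of "sleep before busy wait": yield the CPU for
# the guaranteed-minimum delay, then spin-poll only over the jitter.
import time

def wait_ready(is_ready, min_latency, timeout):
    """Sleep min_latency seconds, then busy-wait on is_ready() up to timeout."""
    time.sleep(min_latency)          # known-minimum wait: no point spinning here
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_ready():               # short spin for the remaining uncertainty
            return True
    return False
```

The trade-off is the same as in the vxgetn case: the sleep frees the processor for other tasks during the predictable part of the wait, while the final spin keeps response latency low once the hardware is actually ready.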
... APS
... CA/GOES
... RAD
... SSEC
... Beamloss
... Online
... MSlog
eye to eye . . .
. night day
. chores
. dishes
2 sep lat2 vs 5q var . qh=10.0x add vars at h sep
2 sep 2/5/11 var init . qh=10.0x var from 2q south of sep to l3
4 hsep, no vert, oct 2008 TA layout: with Q1,Q49==0
but Q0,Q2 Q48 Q49A active: adjust q5 strength below .5 and re-opt
confirm k (abs vals) in .04 to .5 range
As of 5 may 2008
qh,qv 10.21 9.50 max beta v,h 38 meters Ibun=6.0 mA emitt 1.10e-7
Q0 Q1 Q2 Q48 Q49A Q49 k= -.29 0.0 .26 .225 -.250 0.000
CF old (not same ele) .............. +.517 -.398 0.317
will need some bus changes in north?
HSEP 8w 44w 44e 8e = +5.53, +.47, -.38, -3.35 e-4 rad
CF old (apr 28 2008) = +5, -6, +6, -4
very much weaker 44w; may try to zero it
LAT file is ~sbp/lat/4sep/chess_20080505a.
beta/eta is here
may 12 2008 .. ~sbp/lat/2sep/chess_h8ew_20080512.
TA layout, hsep 8e/w only: optimz results . . . beta/eta pix
qv/h 9.601, 10.212 . . max bv/bh 36/37 m . . e+/e- Hemitt 1.12/1.09 e-7
G line source +10.2 mm, -.33 mrad
may 12 2008 .. ~sbp/lat/se_not/chess_no8e_20080512.
TA layout, hsep 44ew,8w: optimz results . . . beta/eta pix
qv/h 9.603, 10.19 . . max bv/bh 36/39 m . . e+/e- Hemitt 1.13/1.16 e-7
G line source +11.7 mm, -.7 mrad . . . (not in spec)
27 Nov 2008 .. ~sbp/lat/id_dev/oneb_h1280_v1020_57nm.
TA layout, hsep all off, H tune=12.8 optimz results pix
qv/h 10.2, 12.8 . . max bv/bh 37 m . . Hemitt .57 e-7
28 Nov 2008 .. ~sbp/lat/id_dev/oneb_h1371_v1020_53nm.
TA layout, hsep all off, H tune=13.7 optimz results pix
qv/h 10.2, 13.7 . . max bv/bh 37 m . . Hemitt .53 e-7
Mar - Aug 2007 tunnel temps here
Feb - June 2008 tunnel temps here
Apr 2009 See TA Z-distrib from input xy
cerl8.0, no errors pix
April 14 2010 - report4 working: using CERL8.1 look at 3 wigglers
(5/25 m in SA, and last 5m in NA to highlight what damage done)
compare full (2000 gaussian particles) pix
to (50 gaussian particles) pix
Suggest a lot of work can be done with smaller data sets.
Also see bigger/fewer plots pix
(2 wigglers in SA, 3 plots/loc)
2010-08-04 Position sensitivity, vert Quad offsets in CERL8.1
Sensitivity of Vertical centroid position to Vertical offsets in all
quads: no sign of other than linear change in (1 sigma in center motion)
for range of 1 to 500 nm vert offsets, using 200 cases, all quads,
clipped at 3 sigma:
Multiplier (beam motion/quad motion) is in the range of
3*s to 6*s (s = original size in microns), ie 18x at the SA.CELLB01 wiggler to
40x at NA.CELLB03,4, where starting beamsizes are both 7 microns.
Sensitivity grows only 2x from the 2nd to the last wiggler: note that much of the
motion arises in TA, where vert beta is far larger than Horz, and
before the first source; also the effect of quads in LA is fortunately diluted by
acceleration. Tested the effect of moving 1 quad at the start of SA by 10 microns
(beta 100, Kl=.13) and saw 12x motion at a later 100 m beta point.
Emittance from the beam is added to the table; it is NOT linear with error amplitude.
Full sensitivity tables follow.
14 wigglers, vert offset σ =10-500 nm: cm motion
Histo of cm change for 14 locs here
Histo of cm change for 4 locs here
Orbit for 1 sample case: σ =20 nm; orbit plot is : here
TAO Command was: change -silent var quadyo[*] .000000020*ran_gauss()
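The linearity check above (Gaussian offsets at several rms values, rms of the resulting centroid motion over many cases) can be mimicked with a toy linear model. The response coefficients below are invented stand-ins for the real lattice response, and the 3-sigma clip is omitted for brevity:

```python
# Toy Monte-Carlo sketch: draw Gaussian quad offsets of rms sigma,
# take the centroid motion as a fixed linear combination of the
# offsets, and confirm the rms motion scales linearly with sigma.
import random

def rms_motion(sigma, coeffs, n_cases=200, seed=1):
    rng = random.Random(seed)        # fixed seed: same unit draws per sigma
    motions = []
    for _ in range(n_cases):
        offsets = [rng.gauss(0.0, sigma) for _ in coeffs]
        motions.append(sum(c * o for c, o in zip(coeffs, offsets)))
    return (sum(x * x for x in motions) / n_cases) ** 0.5

coeffs = [3.0, -1.5, 2.2, 0.7]       # hypothetical response coefficients
r1 = rms_motion(10e-9, coeffs)       # 10 nm rms offsets
r2 = rms_motion(500e-9, coeffs)      # 500 nm rms offsets
print(round(r2 / r1, 2))             # 50.0: purely linear model
```

For a strictly linear response the 50x larger offsets give exactly 50x larger rms motion; any departure from that ratio in the real lattice data would flag nonlinearity, which is what the 1-500 nm scan above was checking.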
2010-08-26 Position sensitivity, Horiz Quad offsets in CERL8.1
Sensitivity of Horizontal centroid position to Horizontal offsets in all
quads: Smaller effect than Vertical case above.
14 wigglers, Horiz offset σ =10-500 nm: cm motion
Histo of cm change for 14 locs here
Histo of cm change for 4 locs here
2010-09-01 Position sensitivity, X_Pitch errors in LCAVITIES in CERL8.1
Sensitivity of centroid position to X_Pitch errors in all LCAVITIES
14 wigglers, X_Pitch σ =1-100 uradians: cm motion
2010-09-01 Position sensitivity, Y_Pitch errors in CERL8.1
Sensitivity of centroid position to Y_Pitch errors in all LCAVITIES
14 wigglers, Y_Pitch σ =1-100 uradians: cm motion
2010-09-01 Quad K effect CERL8.1
Sensitivity ??? quad k change
14 wigglers, Quad k1 change σ = 1 - 30 pp thousand : table
2010-09-01 Bend error effect CERL8.1
each bend independent
14 wigglers, Bend change σ = 1 - 30 pp thousand : table
Linac improvement notes 7 june 2010
1. Catastrophic failures vs "Excessive" tuning
1.1 Exemplar: Synch Transductor -> arc -> synch mag vac leak
120 hr loss, possibly all from a loose connection? - otherwise linac
lost time is a small percentage of total runtime.
2. Tuning could be largely automated if flutter and dropouts were understood -
which would simultaneously improve the average beam.
THIS has been the case for literally 40 years; it is not new.
Talk about monitoring rf - why does nothing ever happen?
2.1 A very longstanding feeling that better recording of conditions
would speed diagnosis of where to tune, rather than doing a general
tuneup every time. Needs a modest expenditure of time to keep the
LINBPM system tuned up.
3. Much more frequent changes in energy and bunch configuration
increase the tuning load - addressable via better understanding and
then implementing software to adjust (eg loading comp); but this is not
a decline in reliability.
4. Again, a longstanding wish to have enough resolution in control and
readback (esp the latter) to see correlations with beam current.