Summary notes of the thirty-third meeting of the LHC Commissioning Working Group

 

Tuesday October 23rd, 14:00

CCC conference room 874/1-011

Persons present

 

Minutes of the Previous Meeting and Matters Arising

There were no comments on the minutes of the 32nd LHCCWG meeting. Roger explained the rationale of the 33rd meeting, which addressed the pertinent LTC open actions in response to a suggestion by Oliver. Roger also discussed the schedule of future meetings, which include an LSA meeting on 5 November, an aperture model meeting on 20 November, and a discussion of the experimental magnets on 4 December.

 

LTC Open Actions (Roger) 

First Roger reviewed the list of LTC open actions. On his slides, the color code for different actions indicates the age of the task. Five actions were to be addressed in this LHCCWG meeting, which featured short presentations on the definition and handling of bad BPMs by Ralph S., on the combined squeeze and ramp by Mike, on the maximum acceptable momentum offset for the beam dump system by Jan, on procedures for detecting vacuum leaks by Frank, and finally on ion operation with varying beta* by John.

 

Other open LTC actions of concern to the LHCCWG were already addressed either in previous LHCCWG meetings or by other committees or working groups. One of these is the operation scenario for the BLM system, which was already covered in several talks by Laurette that could be bundled into an LTC presentation. Oliver commented that Steve specifically wanted to see a presentation on how the BLM system will be operated. Another action was the specification of the LHC magnet cycle prior to each new fill. This topic was exhaustively treated by R. Wolf at the 30th LHCCWG meeting, and the details are being documented by him electronically. A third topic is the training of new EICs with respect to the coordination of machine protection. Roger recalled that most EICs are participating in the LHCCWG, in the MPSC, and/or the MPWG, as well as in meetings on “safety during hardware commissioning”. No additional training is considered necessary. A fourth item, the required collimator adjustments during the squeeze, was covered in two LHCCWG and LTC presentations by Ralph and Massimo. It also figures as part of the procedures for phase A.11.

 

Jean-Pierre recommended that the working group compute the expected downtime due to the BLM system and compare the calculation with the downtime of a real system, thereby triggering a discussion about the level of reliability that can be expected in practice. Jan replied that the expected rate of false triggers for the beam dump had already been calculated in great detail. The expected number of 42+/-6 false dumps per year was published in LHC Project Report 812. Rhodri remarked that this number referred to failures occurring with beam present in the machine, whereas Jean-Pierre’s question might have been slightly different, namely how often we can actually inject the beam.

 

Classification and Detection of LHC BPM errors and faults (Ralph S.)

Ralph S.’s talk consisted of three parts: formalizing the definition of a "bad BPM", examples, and test procedures. He introduced the distinction between a BPM 'error' and a 'fault' or 'failure'. In the case of an 'error', the beam position is acquired but with an inconsistency between the measured and true beam position. A detected BPM 'error' may either lead to a correction, e.g. through a calibration or other adjustment of the system, or may cause a transition to the 'fault' or 'failure' state. A 'failure' refers either to a BPM error exceeding specified tolerances (see 'LHC-BPM-ES-0004 rev.2' for details) or to the unavailability of the measurement. He noted that these terms should be distinguished from 'accuracy' (the maximum measurement error) and 'resolution' (the minimum measurable position change), which frequently come up in this context and are sometimes confused with each other.

 

As one possible choice, the BPM error can be further decomposed into an error of the measurement 'offset' and of the 'calibration factor' (scale). This decomposition is motivated by the fact that some measurements are by design insensitive either to steady-state offsets (e.g. beam-based alignment procedures of the LHC collimators) or to errors of the calibration factor (e.g. automatic orbit stabilisation, where only the convergence speed but not the absolute convergence fix-point depends on the BPM and/or COD calibration factors). [N.B. The maximum orbit steering tolerance limit is estimated to be about 100% beta-beat for the injection and 70% beta-beat for the collision optics (see the last Chamonix workshop for details). The errors are shared between beta-beat and calibration factors.]

 

The discussion and analysis of BPM offset errors strongly depends on the choice of reference system. Depending on the reference system, the deviation of the measured BPM offset varies between 100 and about 250 um. Since the aperture is the most critical parameter in the LHC, Ralph S. suggested that the beam screen be taken as reference. Using this reference, the misalignment between the geometric (=mechanical) centre of the BPM and the beam screen is specified by the design to be less than 200 um. The electrical bias introduces another uncertainty between the electric and mechanical BPM centre of 100 um, along with another 100 um from the electronic calibration uncertainty. These add quadratically to the previous alignment uncertainties. Since the mechanical BPM-beam-screen welding has not been measured systematically, tests with beam will be required to assess the accuracy of these numbers.
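The quadratic addition of these contributions can be sketched as follows (the individual values are the ones quoted above; the breakdown is illustrative):

```python
import math

# Contributions to the BPM offset uncertainty quoted above, in micrometres.
mechanical_alignment = 200.0  # geometric BPM centre vs. beam screen (design spec)
electrical_bias      = 100.0  # electric vs. mechanical BPM centre
calibration          = 100.0  # electronic calibration uncertainty

# The contributions are independent, so they add quadratically.
total = math.sqrt(mechanical_alignment**2 + electrical_bias**2 + calibration**2)
print(f"combined offset uncertainty ~ {total:.0f} um")  # ~ 245 um
```

The result, about 245 um, is consistent with the upper end of the 100-250 um range quoted for the different reference systems.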

 

Systematic effects like the dependence of the BPM reading on bunch intensity, bunch length, filling pattern and temperature variation have been characterized. A dependence of the measured position on the temperature at the location of the DAB has been observed. Present results indicate that the temperature periodically varies by two or three degrees with a cycle time of about 6 minutes. The measured swing in the position reading is about 100 micron for a 3 degree temperature variation of the DAB. Frank commented that this might be a potential problem for meeting nominal tolerances on beam stabilisation at the collimators. Ralph S. replied that this systematic effect on the BPM readings could be corrected through a calibration using the measured DAB card temperatures and/or stabilized through a temperature control of the BPM crates in the first place. Rhodri commented that this cyclic temperature variation has just been seen recently, and that the temperature stability is likely to be improved.

 

=> ACTION: Follow-up on temperature variation of BPM readings (Rhodri, Ralph S., BI)

 

Ralph S. presented a list of potential failure sources and how they can be diagnosed. The intensity switch at about half the nominal bunch intensity must be synchronized with the orbit feedback. It has to be ensured that no BPM crate is in 'CALIBRATION' or 'INTENSITY' mode when beam position measurements are requested.

 

Three main lines of defense against BPM errors and faults exist:

1. Pre-checks without beam, which are used to check for and, if necessary, remove drifts of electronic components. Ralph S. emphasized that we must be sure to always switch back from calibration/intensity mode to measurement mode. The proper switching will/should be monitored by the LHC sequencer, BPM turn-by-turn data concentrator and LHC Orbit Feedback System.

 

2. Pre-checks with pilots and intermediate beams. These consist of two stages, one scan with pilot beam and retracted collimators and a possible second scan with collimators placed in nominal position.

For the first scan, free betatron oscillations are launched with varying betatron phase and increasing amplitude, up to a pre-defined safe limit. The amplitude limit is defined by the assumed available global aperture and a defined acceptable beam loss level. This test fulfils two functions. First, BPM readings that do not change will reveal faulty BPMs; this may also be used to verify calibration factors and/or the optics at the same time. Secondly, the absence of beam loss, or beam losses within pre-defined limits, gives an indication of whether the BPM offsets are consistent with those of the previous fill, whether large local orbit bumps are absent, and whether the established orbit corresponds to the 'golden orbit' for which the LHC machine protection and collimation systems have been commissioned and optimised. The second scan is performed with the collimators in their nominal positions and with amplitudes that are compatible with the primary collimator location. The latter scan is required to test the functionality of the collimation system. A failure of either scan indicates that the machine may be in a potentially unsafe state, which requires re-tuning of collimation and/or protection elements.

 

Roger asked how often these aperture scans should be performed. Ralph S. replied that they should be planned at the start of every LHC fill. An aperture scan is estimated to take about 25 seconds per plane for 7 sigma amplitudes (sigma being the r.m.s. beam width). The proposed scans should be executed for at least the horizontal and vertical planes, possibly also for the planes tilted by 45° and 125°. The estimated total time requirement is only 1-2 minutes per ring, at both beam intensities. Frank inquired whether the collimators are retracted at the start of each fill, which Ralph S. answered in the affirmative. Brennan asked why the aperture must be checked on every fill. Ralph S.’s answer was that this is necessary to operate the LHC as safely as possible. He explained in greater detail that first the global aperture model is tested with the collimators retracted, and that then the collimation functions are checked with the collimators closed until a pre-defined loss pattern is confirmed. In all cases dead BPMs are removed.

 

Brennan commented that reaching a centering precision of 130 micron at each individual BPM would take much more time than 2 minutes. Ralph S. agreed, saying that the routine scans he proposes are only a global check. Responding to a question by Jan, who wanted to clarify the reason for re-checking the aperture at each fill, Ralph S. confirmed that the underlying assumption is that there are drifts and changes from fill to fill, caused for example by moving quadrupoles or power-converter effects of the CODs.

 

Brennan and Verena agreed with the procedure proposed by Ralph, provided it can be done in a few minutes. Ralph S. recalled that the MPWG had addressed the question of how to treat orbit-corrector bumps, and had determined that it was not possible to derive the absence of bumps from the magnet settings alone (see MPWG meeting minutes #53 for details; minutes: http://lhc-mpwg.web.cern.ch/lhc-mpwg/Meetings/Meetings-2005/No53-16Dec05/minutes-05-53.pdf; slides: http://lhc-mpwg.web.cern.ch/lhc-mpwg/Meetings/Meetings-2005/No53-16Dec05/RSteinhagen-05-53.pdf ). Therefore, we must always make sure that all protection elements are set up properly. Mike added that we would always first inject with retracted collimators anyhow. Helmut cautioned that the aperture might not be clearly defined; we might rather encounter bad beam lifetime, tails, etc., i.e. a dynamic aperture that is not a sharp number. Frank commented that if there were indeed big changes from fill to fill, we might also expect large drifts during a fill. Ralph S. replied that such drifts would also be monitored during the fill, and would therefore not be a problem. He pointed out that the time between successive fills could be as long as 5 days or more. Thorsten Wengler asked about the effect of the aperture scans on the experiments, in particular in terms of integrated dose, stressing that they might represent a new procedure which had not been foreseen originally. Alick added some supporting comments. Ralph S. answered that the collimators are retracted only for the pilot beam, which should not be a danger for the detectors. Brennan stressed the need to identify any problem early, which would be facilitated by the proposed procedure. Ralph S. remarked that if we lose beam at unexpectedly small amplitudes, we need to reiterate the collimation setting procedure.

 

Verena noted that the primary collimators are located at only one betatron phase, and that therefore a check of one phase should be enough. Ralph S. responded that tests at different phases may be required, e.g. since the interplay of primary and secondary collimators could be relevant. Jean-Pierre suggested that we decouple the BPM test from the aperture tests. Oliver remarked that later on, with crossing-angle bumps, we would need to watch out for non-closed bumps leaking into the arcs. Responding to another question, Ralph S. stated that orbit changes of only 100-200 micron are needed for tests of the BPM calibration factors alone. In both cases, the scan duration is limited not only by the ramp rate of the CODs but also by the time required to set up the measurement procedure.

 

3. The last line of defence against BPM errors and faults is continuous BPM data-quality checks. LHC BPM prototypes were tested in the SPS. The most common types of failure are (1) no orbit information, (2) spikes (a few micron to several mm), and (3) steps. Sampling the orbit at phase distances of Pi/4 provides an intrinsic redundancy. Ralph S. illustrated the trade-off between precision and robustness of the orbit feedback. The responses to spikes and steps are different: in the case of a spike, it is assumed that the orbit remains at the safe reference position; in the case of a step, the feedback is stopped temporarily, average orbits are taken before and after the detected step, and the feedback is continued with the new averaged orbit.
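The spike-versus-step distinction described above can be sketched with a toy classifier; this is illustrative logic only, not the actual LHC orbit-feedback algorithm, and the threshold value is an arbitrary assumption:

```python
def classify_jump(readings, threshold=0.5):
    """Classify the first large jump in a series of BPM readings (mm) as a
    'spike' (single-sample outlier) or a 'step' (persistent level change).
    Toy logic for illustration, not the real feedback implementation."""
    for i in range(1, len(readings) - 1):
        d = readings[i] - readings[i - 1]
        if abs(d) > threshold:
            # A spike reverts to the pre-jump level on the next sample;
            # a step persists at the new level.
            if abs(readings[i + 1] - readings[i - 1]) < threshold:
                return "spike"
            return "step"
    return "none"

print(classify_jump([0.0, 0.01, 2.5, 0.02, 0.0]))   # spike
print(classify_jump([0.0, 0.01, 2.5, 2.48, 2.50]))  # step
```

In the "spike" case the feedback would keep the reference orbit; in the "step" case it would pause, re-average, and resume, as described above.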

 

Walter asked about the effect of a gain error. Ralph S. answered that the switch from low to high sensitivity would correspond to a step change at the BPMs.

 

Ralph S. now presented a summary of BPM errors, failures, and states, where the type was distinguished by a colour code. For certain states, one could decide to stop the feedback.

 

The orbit feedback is estimated to cope with up to 20% missing BPMs, assuming that these are randomly distributed and not those located at critical regions such as the primary collimators or injection/extraction elements. Ralph S. emphasized the necessity to verify and to re-check deselected BPMs. More details on BPM errors, failures and feedback can be found in the report CERN-AB-2007-049.

 

Roger concluded from the discussion that we may need to refine the procedures, and perhaps will need to decide to do some steps only once per year, in line with the earlier comments by Jean-Pierre. Oliver confirmed that the presentation and discussion at this LHCCWG meeting were sufficient to close the action. Roger may present a short summary at the LTC.

 

=> ACTION: Follow-up on aperture scans at each fill? (Stefano, Ralph S.)

 

Jean-Pierre asked why the orbit feedback was not working if more than 20% of the BPMs were erroneous. Ralph S. responded that the probability for two or more BPMs to be missing in a critical region then becomes unacceptably high, according to a statistical study. Jean-Pierre observed that other machines have much fewer BPMs than LHC to start with. Ralph S. elaborated that if more than 20% of the BPMs are missing, the beam-orbit tolerances at the collimators are very likely to be exceeded. He also recalled that the LHC BPM readout is structured such that a single-crate failure does not lead to the loss of any consecutive BPMs. Oliver recalled that Benoit Salvant, when he was a technical student long ago, had studied BPM and scaling errors for the LHC with a similar finding that errors larger than 20% could not be recovered.
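The statistical argument can be illustrated with a toy Monte Carlo: with a fraction f of BPMs failing at random, the probability that two adjacent monitors in a critical region are both missing grows quickly with f. All numbers here (region size, trial count) are illustrative assumptions, not figures from the study Ralph S. cited:

```python
import random

def p_adjacent_missing(n_bpms=20, fraction=0.2, trials=20_000, seed=1):
    """Probability that at least one pair of adjacent BPMs is missing in a
    critical region of n_bpms monitors, when a fraction of them fails at
    random. Toy model for illustration only."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        missing = [rng.random() < fraction for _ in range(n_bpms)]
        if any(a and b for a, b in zip(missing, missing[1:])):
            hits += 1
    return hits / trials

print(f"f =  5%: P(adjacent pair missing) ~ {p_adjacent_missing(fraction=0.05):.2f}")
print(f"f = 20%: P(adjacent pair missing) ~ {p_adjacent_missing(fraction=0.20):.2f}")
```

In this toy setting the probability rises from a few percent at f = 5% to roughly one in two at f = 20%, which qualitatively matches the argument that 20% missing BPMs becomes unacceptable near the collimators.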

 

Massimiliano now came back to the aperture scan without collimators. He asked what would be done in case a problem was found. Ralph S. suggested that one possibility would be to close the collimators more tightly, and/or one could try to identify the region where the aperture is limited via the measured local losses. Roger commented that if the aperture were much smaller than expected, we would have to stop normal operation and would need to understand the reason. Jan remarked that in the worst case the aperture scan may not be conclusive, but may e.g. exhibit a monotonic degradation of the beam lifetime without any clear threshold. Roger remarked that, by definition, a quick test cannot take much time.

 

Replying to a question by Gianluigi on the minimum intensity that would trigger the BPMs, Ralph S. stated that a bunch intensity of 2e9 protons usually triggers the LHC BPMs, while an intensity of 1e9 does so only sometimes. Gianluigi then asked whether satellite bunches might always trigger the BPMs. Rhodri commented that the satellite bunches should be on the same orbit as the main bunches. Frank remarked that the resolution could be worse than expected if we unintentionally read out the position of satellite bunches instead of that of the nominal bunches.

 

 => ACTION: Effect of satellite bunches on BPM readings and related tolerances (Rhodri, Gianluigi?)

 

Pros and Cons for Combined Ramp and Squeeze (Mike)

The arguments for and against combining ramp and squeeze can be found on the web page http://lhc-commissioning.web.cern.ch/lhc-commissioning/issues/combined-ramp-and-squeeze.html . Mike began with some general observations: the final beta value must be pre-established; single-quadrant power-converter limitations constrain the speed at which the squeeze can be performed; and the LHC will be starting at 7 TeV with the unsqueezed optics, which is a natural starting point for making progress through the squeeze.

 

A squeeze on the ramp would have the advantage that the currents of all insertion quadrupoles would decrease monotonically, and that two steps could be combined, offering a time saving of 20 minutes.

 

Counterarguments are that it is planned to squeeze the various IPs one by one during commissioning, that we want to squeeze beta* from 11 m to 2 m in steps, that the combination of steps increases the complexity, which violates the “number one rule of commissioning”, that the PLL handling could be more complicated, and that the collimators must be closed during the squeeze, which would appear foolhardy. Brennan added that the last statement is very true also for the TCDQ. Other complications may arise for the orbit feedback, machine protection, energy tracking, BLM thresholds, and crossing or separation bumps (the latter come down on the ramp).

 

Mike issued a recommendation that the software functionality for combined procedures should be in place, including standard settings generation. But in commissioning phases A and B the ramp and squeeze should be kept separate. For phase C we could anticipate machine development aimed at combining the phases.

 

Walter commented that avoiding hysteresis problems would be an additional argument for combining the ramp and the squeeze. Oliver remarked that down to beta* values of 1 or 2 m the power-converter functions are smooth, and that the last part of the squeeze, where these functions are highly nonlinear, can be done only at top energy anyhow, for aperture reasons. Therefore, the working group did not see a strong argument for combining the two.

 

Ralph S. asked how many squeeze optics we will have. Mike replied that there are 10 to 12 per interaction point, and that we will need to think carefully where to go. Ralph S. inquired whether these optics solutions are pre-calculated in LSA. Oliver responded that they exist in a modular database and are not calculated on the fly. A suite of settings is available for each IR. Mike showed the present naming convention describing the various squeezes in different IPs. Oliver added that the phase advances across the IRs are held constant during the squeeze, so that any combination of beta* values in different IRs would be possible. He also observed that if John’s scheme for beta* leveling were successful, the arbitrary combination of different IPs could turn out to be a great asset.

 

Momentum Aperture for Beam Dump System (Jan)

Oliver recalled Steve’s question related to the beam dump momentum aperture, from the 75th LTC meeting. Steve had asked whether the momentum aperture was a hard limit and, if not, how much it could be relaxed. Jan reviewed different aspects and contributions, such as the kick waveform, references, magnet strengths, and the integrated kicks in correctors. He then stated that for a safe beam the interlock on the RF frequency could be masked. However, with an unsafe beam the momentum aperture cannot be increased, since the dumping system must also work if one MKD kicker out of 15 is missing. Replying to a question by Frank, Brennan and Jan confirmed that the beam would be centered on the dump for a number of kicker magnets roughly in the middle between 14 (one missing) and the nominal 15. There is only a small asymmetry, which does not permit relaxing the tolerance for momentum offsets of either sign. The total momentum aperture of the beam dumping system is +/-0.45%, as quoted at the 25th LTC. The acceptance budget of the dump channel has many contributions (optics, orbit, mechanical, kicker overshoot, beam momentum, ...): 0.9% is allocated to beam energy variation (+/-0.45%), which consists of 0.1% for the DCCT resolution, +/-0.2% for the integrated horizontal dipole field from the orbit correctors, and +/-0.2% for the RF frequency. The discussion at the 22nd LHCCWG concerned dispersion measurement with radial steering by varying the RF frequency; this gave the limit of +/-0.2% quoted by Jan for this particular manipulation.
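The quoted allocation can be checked as a linear sum, under the assumption (not stated explicitly in the minutes) that the 0.1% DCCT figure is a full width, i.e. +/-0.05%:

```python
# Linear sum of the beam-energy contributions to the dump-channel budget
# quoted above (all in %). Assumption: the 0.1% DCCT resolution is a full
# width (+/-0.05%), so that the contributions add up to the quoted +/-0.45%.
dcct       = 0.1 / 2   # DCCT resolution (half width, assumed)
correctors = 0.2       # integrated horizontal dipole field from the CODs
rf         = 0.2       # RF frequency (radial steering)

total = dcct + correctors + rf
print(f"beam-energy allocation ~ +/-{total:.2f} %")  # +/-0.45 %
```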

 

Vacuum (Frank)

Frank reviewed the LTC open action on procedures for detecting vacuum leaks in the LHC, which was raised at the 36th LTC on 9.2.2005, when Steve presented a follow-up of Chamonix XIV. The issue is that a localized He leak can cause quenches, while at the same time having no effect on the beam emittance or the beam lifetime. The He front propagates very slowly (a few cm/h). The standard beam loss monitoring system, with monitors every 53 m, may be unable to detect the leak. Alternative solutions that had previously been proposed are a mobile “BLM snake” with 2 m distance between BLMs (Bernard Jeanneret) and measuring the ionization current for a debunched beam, as was done in the ISR and AA (Alain Poncet). The second technique would require biasing the BPMs or mobile electrode stations mountable on the pumping ports.

 

Frank next summarized the corresponding presentation and paper by Vincent Baglin from Chamonix XIV. The He density over 1 m must be less than 1.7e17 He/m^3 to avoid quenches with nominal beam, which corresponds to a pressure of 33 nTorr at 1.9 K or 420 nTorr at room temperature. The heating during a quench will liberate the helium atoms, which will then condense inside the adjacent cold magnets. The following diagnostics are available:

1. vacuum gauges, placed every 3 or 4 cells (320-428 m), i.e. 6 to 8 gauges per arc;

2. BLMs, every 53 m at each arc quadrupole and inner triplet;

3. the heat load of the cold masses: ~1 W/m over one cell can be measured (local quench level ~9 W/m at 7 TeV); this needs ~6 h;

4. mobile vacuum gauges or residual gas analysers, which can be installed locally in the SSS, every 53 m, for leak detection (the vacuum group wishes to spread the quench to the nearest SSS);

5. mobile radiation monitors: 32 can be installed per cell; they are sensitive down to “50 nTorr” (at which temperature?); the time required is ~1/4 h [Thijs];

6. warm-up of the cold bore to 4.2 K with He pump-out, for leaks < 4e-6 Torr.l/s.

Vincent had also pointed out that the existence, level and position of a He leak shall be clearly identified prior to any intervention on the capillaries or exchange of a magnet. Frank noted that from Vincent’s paper it seems to be more difficult to detect leaks with a high leak rate than those with a lower rate.
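The two quoted pressure equivalents can be reproduced with the ideal-gas law, plus (an assumption on our part) thermal transpiration in the molecular-flow regime for the room-temperature gauge reading:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant [J/K]
PA_PER_TORR = 133.322

n_quench = 1.7e17    # quoted He density limit [molecules/m^3]

# Ideal-gas pressure at the cold-bore temperature of 1.9 K, in nTorr:
p_cold = n_quench * K_B * 1.9 / PA_PER_TORR * 1e9

# Reading of a room-temperature gauge connected to the cold gas, assuming
# thermal transpiration in molecular flow: P_warm/P_cold = sqrt(T_warm/T_cold).
p_warm = p_cold * math.sqrt(293.0 / 1.9)

print(f"cold pressure ~ {p_cold:.0f} nTorr")   # ~ 33 nTorr
print(f"warm reading  ~ {p_warm:.0f} nTorr")   # ~ 415 nTorr
```

The cold value reproduces the quoted 33 nTorr, and the transpiration-corrected value (~415 nTorr) is close to the quoted 420 nTorr at room temperature.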

 

Frank next commented that the radiation monitors (mobile, installation time ~1 day) are the most sensitive instruments, followed by the cryogenic heat load, the BLMs, and the vacuum gauges, in this order. Typically the sensitivity decreases by a factor 5-10 between successive diagnostic tools in this list. Vincent discussed neither the “snake” BLMs [Jeanneret] nor the measurement of the ionization electron current on biased BPMs or pumping-port electrodes [A. Poncet, CERN MT/95-01 (ESH), LHC Note 316]. Concerning the BLMs, there are three fixed BLMs per beam at each arc quadrupole. For additional mobile BLMs, 2 spare channels per card are available at each arc quadrupole, quoting an earlier presentation on the BLM system at LHCCWG#24 by Laurette.

 

Frank mentioned another idea, recently proposed by Fritz Caspers, which is to use the reflectometer for finding local He bumps, and which would, however, require further verification.

 

Frank closed with several outstanding questions: Can we indeed bias the BPMs? Do we have the mobile radiation monitors? Are the additional arc gauges and arc RGAs available? Is it easy to access the information on the cryo heat load per arc cell? Do we have snake BLMs? And can we assess the reflectometer sensitivity?

 

Jean-Pierre commented that the “BLM snake” was part of the LHC baseline and available. Laurette did not think this was the case, however.

 

Roger concluded that we do need to get a report from the vacuum group. Oliver suggested treating the vacuum commissioning as an independent commissioning phase, described at the same level of detail as the other commissioning phases. He remarked that it would in particular be nice to have a structured catalogue of procedures for the vacuum running-in.

 

=> ACTION: report on vacuum commissioning plan from AT-VAC

 

Replying to a question by Mike, Frank confirmed that all quench tolerances quoted referred to the nominal beam intensity. He added that the quenches were not the only limit, but elevated vacuum pressure could also lead to a background problem, which would lead to tighter tolerances that furthermore could scale differently with total beam current.

 

Beta* Tuning for Ions (John)

John first described the motivation for beta* tuning in ion operation. A plot of the luminosity vs. single-bunch current shows the operation lines of the nominal and early ion schemes. The single-bunch intensity has a lower limit determined by the sensitivity of the beam instrumentation, predominantly the BPMs and the FBCT. Limits on the total luminosity arise from quenches due to bound-free pair production and possibly from collimation. John presented the expected luminosity evolution during a fill for different numbers of experiments, taking into account burn-off, IBS, radiation damping, RF noise, beam-gas multiple scattering, etc. Even with three experiments, one can still meaningfully store the beam for ~10 h with beta* = 1 m. At beta* = 0.5 m, however, this is no longer the case. The average luminosity and optimum fill length also depend on the turnaround time.
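The trade-off between fill length and turnaround time can be illustrated with a toy burn-off model; all parameter values below are illustrative assumptions, not the numbers from John's study:

```python
# Toy model: luminosity decays by burn-off as L(t) = L0 / (1 + t/tau)^2,
# with tau the luminosity lifetime. The average luminosity over a cycle
# of fill length T plus turnaround time T_ta is maximised at an optimum T
# (analytically sqrt(tau * T_ta) for this decay law).
def avg_luminosity(T, tau=10.0, T_ta=3.0, L0=1.0):
    integrated = L0 * tau * T / (tau + T)  # integral of L(t) over the fill
    return integrated / (T + T_ta)         # average over fill + turnaround

# Scan fill lengths (hours, 0.1 h grid) for the optimum:
best_T = max((T / 10 for T in range(1, 300)), key=avg_luminosity)
print(f"optimum fill length ~ {best_T:.1f} h")  # ~ 5.5 h for tau=10 h, T_ta=3 h
```

With these assumed numbers the grid scan returns 5.5 h, matching the analytic optimum sqrt(10 * 3) ~ 5.48 h; a shorter turnaround shifts the optimum toward shorter fills.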

 

Some time ago, A. Morsch proposed adjusting beta* as the ion intensity decays. A gain is possible with a higher initial intensity: e.g. at 1e8 ions per bunch the average luminosity increases by some 30%, as demonstrated by A. Nicholson in Beam Physics Note 79 (2004). John concluded that beta* tuning will be important for heavy-ion collisions, since the total number of experiments is large, and that it would also be a useful proving ground for the LHC upgrade. It is not yet clear how difficult beta* leveling would be experimentally. Some exploratory attempts at changing beta* by 10 or 20% in MDs at RHIC (knobs by Wittmer et al.) were successful on the first attempt. John added that it is probably not worth investing effort in the details now, but that we should be ready by around 2010.

 

Roger supported this recommendation. Jean-Pierre cautioned that the Tevatron had thought about a similar scheme early on, tried it, and found it nearly impossible to implement in practice. Oliver reported a similar impression from the discussions at CARE-HHH BEAM07. Somewhat in disagreement, Frank pointed to the talk on beta* leveling by Valeri Lebedev at BEAM’07, in which Valeri explicitly reported that the Tevatron had never tried leveling and that there was no reason to try it in the future either, as both the CDF and D0 experiments can handle the maximum pile-up rates. From the experience of Tevatron operation, Valeri nevertheless drew an optimistic conclusion regarding the possibility of single- or few-step beta* leveling at the LHC. Jean-Pierre pointed out that in private conversations with FNAL accelerator physicists he had inferred a different, rather negative perception. Mike commented more optimistically that at LEP it had been possible to do mini-ramps in collisions.

 

 => ACTION: survey attempts of, and experience with, beta* tuning at other colliders (Jean-Pierre) 

 

Next Meeting

Tuesday November 6th, 14:00

CCC conference room 874/1-011

 

Provisional agenda

 

Minutes of previous meeting

Matters arising

LSA Meeting:

- Settings management (Mike and company)

- Operation software tests (Mike and company)

AOB

 

 

 Reported by Frank