Summary notes of the twenty-fifth meeting of the LHC Commissioning Working Group

 

Tuesday May 8th, 14:00

CCC conference room 874/1-011

Persons present

 

Minutes of the Previous Meeting and Matters Arising

There were no comments on the minutes of the 24th LHCCWG meeting. Roger reviewed the plan for the 25th and the following LHCCWG meetings. On 22nd of May Helmut will give a dry run for his talk at the LHC MAC. Rudiger suggested that Jan might also give an LHC MAC dry run at the LHCCWG, time permitting.

 

Roger informed the working group that the day before a meeting on the triplet repair was held with Ranko, where Katy Foraz presented a new LHC schedule plus a proposal for the LHC cooldown and hardware commissioning. The 450-GeV run is dropped for the moment. The reduced objective is a sector test in Sector 8-7. The time and dates allocated for this sector test look tight and ambitious. According to the new schedule, the full LHC beam commissioning would start in late April or early May 2008. Massimiliano asked whether it has been clarified that the sector test will not go further than point 7. Roger replied that, to his understanding, this was the case. Rudiger commented that it would be up to the LHCCWG and LTC to say whether there is a large interest in extending the sector test to point 6 or not.

 

Next, some comments were raised on the future LTC meetings and the procedures. Oliver recommended that a discussion of the sector test be scheduled at the LTC, e.g. around end of June or in July. Roger described the present plan which foresees a sector test summary at the LHCCWG in the beginning of June, and a presentation at the LHC MAC in the middle of June. He proposed that the sector test could afterwards be treated at the LTC.

 

Concerning the procedures, the approach is to keep up the momentum and to circulate EDMS documents for the various phases. Roger presented an example EDMS document for the first phase A.1. These documents are “prepared by” the EICs and the LTC presenter, and “checked by” all LHCCWG members. The approval leaders could be Roger, Steve for the LTC (or the LTC itself?), and some others. Oliver remarked that the approval should be done by a person, rather than by an anonymous body. Rudiger noted that comments should come in the “checked by” phase. Brennan cautioned that comments may also be submitted through the approval list. Rudiger and Brennan then discussed the two types of EDMS approvals: the approval leaders and the rest (a long list at the bottom). Jean-Pierre asked whether an approval by the hardware owners and by Philippe Lebrun would be needed as well. Oliver referred to the EDMS documents specifying beam instrumentation, and suggested checking the consistency of the procedures with the hardware specifications. Roger will check with Steve on the membership of the various lists in the EDMS documents. Oliver commented that machine protection may be crucial too, but, as Roger pointed out, not for this particular example phase, A.1.

 

 

MAD-X Online Model (Frank S.) 

Frank S. outlined the structure of his presentation. He first described what the online model is not, and then what it rather is. Next he discussed the team and the tasks, the SDDS MAD-X version, the present status, the scheme of the online model, knob applications and tests, and milestones.

 

The online model will provide no real-time control (its response time is several minutes). It will not interfere with operation, but is meant to assist it. Lengthy offline analysis is not part of it either. Nor is it a re-invention of established tools.

 

Its main function is to check critical adjustments before sending them to the machine. The SDDS MAD-X version is the principal engine of the LHC online model. The measured magnet errors from the FIDEL and WISE teams are included. They need to be complemented by the results of beam-based measurements. Frank S. reiterated that the main idea is sending knob values to MAD-X before sending them to the real hardware. He stressed that experimenters and operators should profit from a fully functional tool. The online model will also help to speed up the offline analysis.

 

Frank S. has the overall responsibility for this project. He also provides MAD-X support and takes care of the SDDS interface; Werner Herr created the SDDS MAD-X version; Ilya Agapov takes care of the interface to the control system, of beam-based model adjustments, application development, etc.; Thys Risselada handles error routines etc.; Rogelio Tomas covers beta beating; Yiton Yan from SLAC will come later to implement a model-independent analysis; Jorg Wenninger and Verena Kain represent SPS operation and the 1000-turn BPM system; Mike Lamont and the LSA team are responsible for the general control system and SDDS issues; and Alastair Bland provides TECHNET support. Many more contributors were mentioned.

 

A special SDDS MAD-X version was provided by Werner Herr. It includes converters to and from TFS tables. Only 1-dimensional SDDS tables are considered. As for the status of MAD-X, all relevant features of MAD-8 are implemented. MAD-X is complemented by PTC so as to include features of modern design codes. MAD-X has been successfully used for all pre-accelerators and transfer lines at CERN. A full database with MAD-X beamline and ring descriptions exists from the PS booster to the LHC.

 

The maintenance of MAD-X is organized in a special way. One custodian (Frank S.) is aided by a large number of module keepers. Frank S. emphasized that Etienne Forest provided half of the code, i.e. PTC. Further MAD-X development is ongoing for CLIC, from which the LHC also profits. New features include the non-linear matching to any order. Frank S. highlighted the reliability of the code, the results of which are continually benchmarked against independent SixTrack and PTC simulations. Some speed issues in MAD-X still need to be addressed.

 

The interface of the online model with several databases and with the LHC was discussed next. Inputs to the model are provided by the CERN Accelerator MAD-X Input Files (camif), by beam data, by the Fidel/WISE magnet models, and from other databases (survey, aperture). Information from the beam is important, e.g. the proper closed orbit. The corrected model is based on beam data, which are fed back to the WISE and Fidel models.

 

As a generic application, Frank mentioned a trim which is first checked in MAD-X, and only afterwards sent to the hardware. Alternatively, the knobs can even be calculated in MAD-X. Frank S. displayed a list of possible knobs, such as closed orbit (feed down to linear optics), orbit bumps, tunes, coupling, chromaticity, crossing angles, beta matching, and non-linear correctors. A number of examples illustrated the potential: (1) linear coupling measurement at the SPS (R. Tomas); (2) compensation of the skew sextupole resonance in the PS booster (P. Urschuetz, M. Benedikt); (3) effect of beta beating on resonance driving terms, e.g., SPS discrepancies were resolved by including the closed orbit (M. Hayes et al.); and (4) sextupole driving terms measured along the SPS, where a comparison between model and measurement revealed that one sextupole was accidentally disconnected.

 

Files of the MAD-X online model will be archived for later off-line analysis. Beam-based corrections to the model are fed back to the Fidel database.

 

=> ACTION: Feedback of beam-based corrections to Fidel/WISE (Luca Bottura, Frank S.)?

 

Milestones include consolidating the SDDS tools in 2006; defining SDDS structures for the SPS, LHC, and transfer lines; discussing the mechanism to exchange knob settings to and from the control system; development of applications needed for on-line modeling; a non-linear model of the LHC via Fidel/WISE; testing at SPS (knobs and lattice checks); and to be ready for the LHC start up.

 

Oliver commented that many examples were given, but that the orbit and linear optics would be the most important parameters at the beginning. 1000-turn data simulations would be particularly useful. He asked whether one can pass on the simulated turn-by-turn data to the normal application programmes (or alternatively read the 1000-turn data from the machine into the MAD-X model). It was proposed that aperture data, collimator positions and masks be included in the model so as to be able to check machine protection and tolerances. Ralph suggested that the model be connected to the trim database, pointing out that the collimator settings are stored in this database. Frank S. replied that we obviously need to have these settings included in the full model.

Oliver reiterated that one should use the same software for displaying the measured and the simulated information. Frank S. replied that a GUI developed at DIAMOND can display both measurements from the machine and simulated data. The GUI and its functionality are somewhat different from Verena’s GUI. Oliver stressed once again that one should use the same interface. A first successful milestone for the MAD-X online model was its test at CNGS commissioning. Mike noted that at least we have agreed on using the same data format for input/output to MAD and the control system, i.e. SDDS [see Mike’s presentation in the LHCCWG meeting of 22.03.2006].

Jean-Pierre emphasized that a simple interface is the key, quoting LEP as an example. He asked about the interface between the real machine and the LHC lattice. Ralph stressed the need to be careful when running simulations on the consoles in the control room. Stephane asked about the switch from the model simulation of a knob to the real hardware knob, highlighting that this point is very critical. Roger agreed that the danger may exist that one confuses the online model with the real machine. Frank S. replied reassuringly that there should be no confusion.

 

Responding to a question by Stefano whether specific tests are planned for the SPS, Frank S. replied that, yes, he was collaborating with Jorg on such checks.

 

=> ACTION: Standardized interface for measurements and on-line model simulations (Frank S.)?

 

Two Beam Operation (Ralph)

Ralph discussed the two-beam operation, with input from a number of people. Further comments are highly welcome. More precisely this presentation covered the Commissioning Phase A.6: 450 GeV – Two Beam Operation. No crossing angle, no collisions, and a maximum 156 bunches are considered. Ralph pointed out that Phase A.5 must be gone through before the beam intensity can be increased.  He assumed that the aperture in this phase is understood at the 0.5-1 mm level, orbit and optics are adjusted, both stored beams are characterized, and the automatic machine protection is operational. This phase represents the first time that we will have two independent beams stored in the LHC.

 

Major issues include the separation bumps which are set to have roughly the correct separation; the first common orbit correction; the commissioning of common beam diagnostics; the K-modulation in triplets which is done for each beam separately; the equalizing of radial offsets in case of different integrated BdL in the two rings; equalizing of the beam characteristics e.g. current and emittance; adjustment of the injection timing; verification and adjustment of separation bumps for maximum triplet aperture; and the set up of two-beam collimators in IR2 and IR8.

 

Oliver asked whether no special rf synchronization is needed. Andy replied that the rf frequency is the same for both systems and they are automatically synchronized.

 

Ralph next discussed the separation bumps in some detail. Without separation bumps the two beams would collide at the IPs (or close to them). The separation bumps must be established before injecting the two beams. This can be done beforehand for the individual single beams. Bump knobs for the different IPs exist, and they can be varied deterministically once the beam is centered in the triplet and once BPM offsets are known. At injection and without crossing angle, sufficient aperture should be available in the IRs. Rough aperture checks can be performed with crossing bumps turned on. Ralph proposed that the separation be constant in normalized coordinates and that the fields not be ramped during the energy ramp.

 

Stephane pointed out that if the corrector strength is held constant on the ramp, the orbit will change with energy, which could affect the orbit feedback. He stressed that it makes a big difference whether the orbit is changed at one particular point in time and energy, or whether it is changing continually. A continuous change would imply that we abandon the idea of a golden orbit which is stabilized by feedback. Frank remarked that no field change would imply that the separation in normalized coordinates shrinks during the ramp. To keep the normalized separation constant, the field strength would need to be raised with the square root of energy. Stephane agreed.
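
Frank's scaling argument can be checked with a short numerical sketch. It assumes only that the normalized emittance is constant during the ramp (so the beam size shrinks as 1/sqrt(E)) and that the orbit bump amplitude scales as corrector field divided by energy; the numbers are illustrative, not machine settings:

```python
import math

E_inj, E_top = 450.0, 7000.0  # GeV

# Orbit bump amplitude from a corrector scales as (field integral)/E,
# while the beam size sigma shrinks as 1/sqrt(E) (adiabatic damping,
# constant normalized emittance). Hence, relative to injection:
#   separation/sigma  ∝  field * (E_inj/E) * sqrt(E/E_inj)
def sep_in_sigma(field, E):
    """Separation in units of sigma, normalized to 1 at injection
    for unit field."""
    return field * (E_inj / E) * math.sqrt(E / E_inj)

# Constant corrector field: the normalized separation shrinks on the ramp
print(sep_in_sigma(1.0, E_top))                       # ~0.254

# Field raised with sqrt(E): the normalized separation stays constant
print(sep_in_sigma(math.sqrt(E_top / E_inj), E_top))  # 1.0
```

This reproduces Frank's statement: with constant fields the normalized separation shrinks (here by a factor sqrt(450/7000) ≈ 0.25 from 450 GeV to 7 TeV), while a field increasing with the square root of energy keeps it constant.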

 

Ralph next showed the orbit changes and additional dispersion introduced by the crossing bumps.

 

The next item addressed was the orbit correctors. The settings of common correctors should be zero or minimized initially. An orbit correction with common correctors would be done with both beams present in the machine. Such correction might be especially important in case of significant triplet misalignments. An increase of the beam intensity is possible after this step. In two-beam operation, each beam should have the same position readings as for the single beam. This can be checked by dumping one beam, which requires the possibility to dump a single beam. Rudiger later confirmed that this possibility exists.

 

Ralph now turned to the BLMs. Some cross talk from beam losses to BLMs located near the other beam is predicted. This effect should be measured and compared with expectation.

 

As a next step, Ralph proposed a K modulation in the common triplets for the two beams, which will provide the BPM readings corresponding to the situation where both beams pass through the magnetic center of the low-beta quadrupoles. Stephane commented that a simple K modulation will center the beam on the magnetic center, whereas it had been decided to align the beams at the center of the mechanical aperture. The known difference can however be taken into account when steering. Ralph commented that under these circumstances one will need to use common correctors for compensation.

 

Rudiger asked whether tests of K modulation are planned during hardware commissioning. Frank mentioned that Rogelio Tomas had followed up the general question of LHC K modulation with Freddy Bordry, Bernd Dehning and Stephane Fartoukh, after a similar suggestion by Steve at the LTC of 11.05.2007.

In some detail, the result of Rogelio’s inquiry was the following. Freddy Bordry was not aware of the status of the K modulation, and AB-PO is presently not involved in this subject. Bernd Dehning referred to his discussion of the LEP and LHC K modulation at the LCC of 13.02.2002, recalling that we must distinguish between singly powered magnets and circuits with magnets in series. He further elaborated that a harmonic excitation function is implemented in the power converters of the magnets, which allows for a frequency and amplitude adjustment (verified with Quentin King; the accuracy should be sufficient - to be confirmed). The magnets which are connected in series, either cold or warm, have no possibility of one-by-one modulation. The option of installing a separate power supply was studied but not regarded as useful (LHC Project Note 347 by Andre Verdier: K modulation in the LHC arcs?). Rogelio’s question concerning possible hysteresis effects during K modulation had been addressed for the MBs and MQs with measurements executed using magnets in the LHC string (see LHC Project Note 319 by V. Granata et al.: Tracking and K-Modulation Measurements in String II). The hysteresis was here found to be a small effect for the cold magnets.

Stephane commented that Andre had answered a very specific question on detecting the relative misalignment of the MQs and BPMs. More recently, at the LTC, Steve had asked another question, namely whether we have a back-up solution, as an alternative to turn-by-turn data, for linear optics measurement, in particular for beta-function measurements. Stephane suggested that the possibility of an AC excitation of the individual arc magnets via the voltage tap could be reinvestigated (it had been proposed early on at the PLC). Bernd thought that this was indeed not excluded, adding that the discussion in the past went in the direction of adding a small power converter (1 to 10 A) to every magnet. This case had been studied regarding the electrical ground insulation and the use of the available wires for connecting the coil to the outside.

 

Back at the LHCCWG meeting, Mike commented that K modulation can be performed by an appropriate modulation of the current delivered to the magnets using the real time channel of the power converter control system. A high level application will be required, but it should be straightforward to implement. Any current modulation will, of course, have to respect the time constants of the circuits involved. 
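
For reference, the quantity usually extracted from a K modulation is the average beta function at the modulated quadrupole, via the standard first-order tune-shift relation dQ = beta * d(KL) / (4*pi). A minimal sketch, with illustrative numbers rather than measured LHC values:

```python
import math

def beta_from_k_modulation(dQ, dKL):
    """Average beta function [m] at a modulated quadrupole, from the
    standard thin-lens tune-shift relation  dQ = beta * d(KL) / (4*pi).

    dQ  : measured tune-modulation amplitude (dimensionless)
    dKL : amplitude of the integrated-gradient modulation [1/m]
    """
    return 4.0 * math.pi * dQ / dKL

# Illustrative numbers only: a tune modulation of 1e-3 driven by an
# integrated-gradient modulation of 1e-5 / m implies beta ≈ 1257 m.
print(beta_from_k_modulation(1e-3, 1e-5))
```

Any real current modulation would, as Mike noted, have to stay slow compared with the time constants of the circuits involved.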

 

=> ACTION: Follow up of K modulation (Mike)?

 

The next items which Ralph addressed were the equalization of the radial offsets in the two rings, which had been addressed earlier by Gianluigi and Andy, and the adjustment of the injection timing. Rough phasing could be done using the beam induced signals in a common BPM, or by the single-beam wall current monitors in IR4 (known cable lengths). The monitoring of injection buckets (from fast BCT) should be operational. Rhodri pointed out that the reference signal must come from the RF, as the fast BCT itself has no reference to check for the correct bucket.

 

Beam characteristics, like current, emittance and lifetime, should be equalized. Brennan asked for the tolerances, and Ralph replied that these are relaxed in this phase of the commissioning. Helmut commented that when bringing the beams into collision, the tolerances are at the 20-30% level at full intensity. They should be much more relaxed at lower intensity. He emphasized that we should be prepared for very large tolerances, and that, in particular, we should allow for smaller emittances initially, but that, obviously, we should operate in a safe mode at all times. According to Helmut, differences by a factor of two might be OK in the beginning. The question was raised whether, if we cannot equalize currents, we should dump the weak bunches. Ralph remarked that at bunch intensities approaching 1.1e11 we will get visible single bunch beam-beam effects. In any case we should measure the beam properties and think about solutions as needed. The qualification of the beam parameters will facilitate the diagnostics of beam behavior during ramp, squeeze and collision. It will also prove important if unequal luminosity is delivered to the experiments.
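
Ralph's remark about visible single-bunch effects near 1.1e11 protons can be illustrated with the standard head-on beam-beam parameter for round beams, xi = N * r_p / (4*pi*eps_n). The normalized emittance used below is the nominal design value, assumed here purely for illustration:

```python
import math

r_p   = 1.535e-18   # classical proton radius [m]
N     = 1.1e11      # protons per bunch (value quoted by Ralph)
eps_n = 3.75e-6     # normalized emittance [m rad] (nominal, assumed)

# Head-on beam-beam parameter per interaction point, round beams:
xi = N * r_p / (4.0 * math.pi * eps_n)
print(xi)  # ~3.6e-3
```

A beam-beam parameter of a few 1e-3 per collision is indeed the level at which single-bunch beam-beam effects become clearly visible.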

 

Ralph now considered a fine tuning of the crossing bumps, which will be followed by an adjustment of the two-beam collimators TCLIA (injection protection) and TCTVB (triplet protection). Brennan commented that the TDI is even more critical, since here the beam must be moved and the horizontal collimator location is fixed.

 

Concerning machine protection issues, Rudiger clarified that emergency dumps dispose of both beams, whereas non-emergency dumps can remove one of the two beams. An interesting discussion point was the sequence of injection for the two beams. Oliver recalled a presentation by Philippe Baudrenghien (at the LTC?) on interleaved injection, according to which this was possible, but where Philippe preferred sequential injection. Andy remarked that there are no fundamental problems with alternating filling, except for the need of two separate SPS injection cycles if the circumferences are different. 

 

Verena pointed out another possible problem with extraction from the SPS, however, namely that we cannot have just ONE generic cycle for both rings of the LHC (which might anyway not work due to energy differences between the two rings), because this requires the concept of "dynamic destination", which is possible in principle within the injector complex, but not yet for the SPS magnets managed by the ROCS. As possible solutions Verena proposed to either consider two LHC cycles per SPS supercycle, one for LHC ring 1 and the other for LHC ring 2 with two different users, or to program a single LHC cycle per supercycle and two separate supercycles (one supercycle for LHC ring 1, the other for LHC ring 2). The disadvantage of the first solution is that the supercycle becomes very long, and if one wanted to work with only one ring, or if there were problems in the other ring, the LHC filling would become inefficient. The difficulty with the second solution is that one cannot easily change between LHC1 and LHC2, since a pulse-stop and pulse-start of the SPS mains would be required.

Ralph expressed the opinion that we will certainly need interleaved injection. Gianluigi and Stefano proposed another solution to the SPS problem, namely always to have the extraction bumpers and magnetic septa pulsing both in LSS4 and LSS6 and to use the kicker extraction trigger in order to select one of the two extractions. Brennan stated that this was indeed possible in principle, but that it will not be done for machine protection reasons.

 

Stephane remarked that the tune and chromaticity in the 2nd ring must be monitored. If we only inject into one ring first, the pilot bunch in the other ring may have a low lifetime and suffer from weak-strong beam-beam effects. Rudiger suggested re-injecting pilot bunches in the second ring. Oliver could not quite follow the reasoning, as the same lifetime argument would apply to a higher-intensity bunch. The working group agreed to make this question the subject of a future meeting.

 

=> ACTION: Sequential filling or alternating filling of the two rings (V. Kain, S. Fartoukh)

 

Ralph concluded that this phase is not the most complicated, but one of the most exciting. It must be passed quickly but properly. The phase will provide the first detailed look at the experimental IRs. Soon the two-beam operation will become more challenging with 156 bunches, high intensities and lower beta*. Ralph’s talk contained a few proposals: (1) early K modulation in the triplets, (2) timing signals at injection (from fast BCT, calibrated with RF), and (3) selective beam dump.

 

=> ACTION: Follow up timing signals at injection (Philippe B.)?

 

Massimiliano asked what would happen without the separation bumps, indicating that the experiments would be happy to see collisions at this phase. Ralph replied that some collisions would occur, at a longitudinal position which may be offset depending on the quality of the bucket selection. Andy commented on the initial longitudinal uncertainty, of the order of one rf bucket. Ralph proposed an alternative approach to providing collisions at the experiments, namely to set up the separation bumps first and then to switch them off. He argued that we will need to establish the separation bumps anyhow.

 

Oliver asked when the D1/D2 adjustments will be done. John commented on the common BPMs, and the possibility of experiencing an interference between the two beams at one of these BPMs when changing the relative timing of the two beams. Rhodri replied that there are only a few bunches in the machine at this stage, and that this was an unlikely event. Ralph remarked that we will observe any interference at the BPMs if we dump one of the two beams. Rhodri added some more comments.

 

Massimiliano asked for the definition of bucket 1 and its synchronization. He described a definition recently presented by P. Baudrenghien at the last LEADE WG: bucket 1 was defined as the first bucket (of 35640) coming just after the abort gap, while the orbit signal is synchronized to the crossing of buckets 1 at a given IP.

 

=> ACTION: Definition of bunch 1 (Philippe B.)?

 

Commissioning of Accelerator System - Magnets (Walter)

Walter gave this presentation on behalf of many people. It consisted of four parts: a brief overview of the magnet system, hardware commissioning, questions for beam commissioning, and conclusions.

 

As reference for the magnet system, he pointed to Chapters 7, 8, and 9 of the LHC Design Report, Vol. 1. The 28 powering subsectors are described in EDMS 361532. The SSS circuits are discussed in EDMS doc 104157. Walter also showed lists of circuits for the matching sections and dispersion suppressors, as well as for the separation regions and final focus.

 

The hardware commissioning (HWC) has been long in preparation. The hardware commissioning working group had its first meeting in 2003. This working group was succeeded by the hardware commissioning coordination. The strategy for the hardware commissioning is explained in EDMS doc 382004. The commissioning is performed in two steps: first individual system tests including electrical quality assurance, and then the hardware commissioning activities proper, which culminate in the powering of all circuits (PAC). A training day for the hardware commissioning of the LHC powering system was held at CERN on 29 March. All presentations can be found in INDICO. Walter pointed out that the magnet system depends on many auxiliary systems, including cooling, ventilation, and power controls.

 

The status of the system when we start beam commissioning is as follows. The HWC will deliver a list of nonconformities. We may have to accept some of the resulting limitations. Hopefully, Sector 7-8 is not fully representative. In any case, repairs will be possible in this sector prior to beam commissioning. At Chamonix 2006, Massimo established a list of magnets which are not needed for the first stage of beam commissioning: Electrical Circuits Required for the Minimum Workable LHC During Commissioning and First Two Years of Operation. The expected number of quenches during HWC was discussed by P. Pugnat, also at Chamonix 2006: Expected Quench Levels of the Machine Without Beam: Starting at 7 TeV?. The hardware commissioning will be completed by a dry run in which all magnets are powered through the whole LHC cycle.

 

Walter flashed the magnet performance panel (MPP) recommendations for 600 A circuits. There are a lot of limitations, mainly due to problems with the joints. He added a few words on the MPP, in particular its mandate, role, and functions. Primarily, the MPP actively contributes to the HWC effort. 

 

Nonconformities may affect machine performance, but they can also be tolerated for the commissioning.  An escalation diagram illustrated how nonconformities are dealt with during HWC when correcting actions may still be possible. MPP plays a central role in the escalation. Walter asked whether the LHCCWG should also be involved in the chain. After the meeting Massimo commented on the relationship between MPP and LHCCWG, emphasizing the need for the LHCCWG to be informed of relevant limitations.

 

Concerning the status of the magnet system during beam commissioning, no clear answer can be given at this time. The LHCCWG needs up to date forecasts on the expected performance.

 

Signals which will be available in the control room include fixed displays for the power converter status, and GPMA applications for voltages, currents, logics states, etc. Circuits which do not quench will not produce any QPS post-mortem data.

 

Stefano asked whether everybody in the control room should be able to perform this type of analysis. Andrzej Siemko replied that the target is a fully automatic treatment. Rudiger suggested that HWC and beam commissioning be distinguished. For the beam commissioning, quenches might be a rare event. Gianluigi commented that we may after all need to find the proper response to a quench in the middle of the night. Antonio remarked that post-mortem data also exist in many cases other than quenches.

 

As for what needs to be measured with beam, a long list of points was compiled at previous LHCCWG meetings. Noteworthy are quench levels for the BLM thresholds, and time lags for correctors with parallel resistors used for automatic feedback.

 

The commissioning plan with beam, and the beam time required for commissioning the magnet system, are evidently closely linked with the overall beam commissioning. Settings and trims of magnetic fields are required for all phases of the beam commissioning. Pre-cycling of all magnets, knowledge of the transfer functions, stability and reproducibility, the hysteresis model, and dynamic changes are also all important ingredients. The first settings will come from FIDEL.

 

There will be a need to update the model with feedback from beam-based measurements. Walter raised the question how this is done. A related issue is the real time channel for automatic feedback corrections, which should be taken into account for a correct analysis. Trims cannot be directly incorporated into FIDEL, but need to be analyzed and decomposed into FIDEL components.

 

Walter lastly turned to the power performance. He differentiated between three types of quenches: training quenches, false triggers, and beam induced quenches. The latter may force us to reconsider optics, collimator settings and BLM thresholds. The “quench and learn” approach for setting BLM thresholds at 450 GeV offers several side benefits.

 

Quenches during beam operation trigger a post-mortem analysis (manual or GPMA), followed by a circuit release and a prescribed re-cycling. Walter compared the effects of cycling and degaussing for a corrector magnet. Pre-cycling prescriptions are being prepared by Rob Wolf.

 

Time lags for correctors with parallel resistors are an issue for PID feedbacks, and for compensation on the ramp. The inductance L can change by up to 30% with current (MQTL), resulting in a corresponding change of the magnet time constant.

 

Walter concluded that the status of the magnet system at the end of the hardware commissioning will directly impact the reach of the beam commissioning. Software for automatic PM analysis will help us distinguish between the three types of quenches. Pre-cycling prescriptions are being prepared. A tracking test in SM18 is also under preparation. It still needs to be clarified how corrections based on beam measurements are best integrated into the magnetic model.

 

Roger asked whether after a quench the control room crew will need to get a green light before carrying on. Andrzej answered that an interface is under development. This interface is needed anyway for the parallel HWC; the same interface can then be used in the CCC. Stefano inquired how nonconformities are taken into account by the control system. Mike replied that the maximum currents are stored in the LSA database, emphasizing the need for an “as-built database”. Antonio remarked that limits are also set on the power converters.

 

Rudiger commented that only few magnets are expected to quench. He asked about the optimum strategy for recycling. Andrzej answered that FIDEL should do the job.

 

AOB – Interlocking of the AC Dipole (Jan)

Jan had some comments on the AC dipole, following up on the LHCCWG meeting from 4 weeks ago. The question of interlocking was addressed in a dedicated sub-meeting last week. Jan presented Rogelio’s simulations for an AC dipole with maximum strength on resonance, where beam loss occurs between turns 150 and 200. The beam is fully lost within 45 turns, which is comparable to the loss behavior for other failure modes. From this point of view, one cannot conclude that an AC dipole is too dangerous to be used.

 

Jan summarized the interlocking foreseen for the different types of excitation kickers. For the MKQ there is no special interlocking, and both the MKA and the AC dipole can be used with safe beam. The AC dipole should not be able to apply a kick stronger than the design value. Constraints on AC dipole ramp and pulse rate are set by heating of contacts and transformer.

 

The keys will be located in the CCR, which is located at the back of the CCC. A key can only be removed in MKQ position. The beam interlock looks at the status of the HV relay, not at the key position.

 

Replying to a question by Oliver about the key owners, Jan explained that no ABP physicist will be allowed to take the key, but possibly a machine protection person. The details are yet to be decided.

 

Jan’s last slide discussed how to dump the excited beam. The beam position interlock at point 6 (which triggers beam dump for orbit changes in excess of 3 mm) will need to be maskable. In this case the safe beam may be outside the aperture of the dump channel at the moment when it is dumped, which constitutes a potential problem.

 

Jan drew the conclusion that the ac dipole requires intelligent use: One should be sure to use the AC dipole only with safe beam, and not dump the beam repeatedly with beams around the safe beam limit.

 

Jean-Pierre commented that the AC dipole is pretty safe in itself. Frank remarked that oscillation amplitudes of 3 mm at 7 TeV would probably correspond to about 10 sigma, and that this was much larger than what is needed for beta function measurements and also way outside the collimator aperture. He did not think it was likely that we would want to excite the 7 TeV beam to such amplitudes. Jan replied that at 450 GeV this amplitude could be reached easily. Frank commented that the 450-GeV pilot bunch was much less dangerous. Stephane confirmed Frank’s guess, saying that at 7 TeV the maximum AC-dipole excitation is 4 sigma, which is 1.5 mm. Therefore, indeed there should be no conflict with a 3 mm aperture, and no contradiction between the use of AC dipole and restrictions at point 6. Stephane suggested that at injection we may use only the pilot bunch with the AC dipole, and solely for this purpose mask the point 6 interlock. Jan cautioned about cases of bad start orbit or wrong excitation frequency. Verena remarked that the possibility of masking is not a function of beam energy, and cannot easily be made one. Jan reiterated that intelligence is needed for handling the AC dipole.
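
Stephane's and Frank's numbers are mutually consistent, as a quick check shows (assuming only adiabatic damping of the beam size with energy, i.e. sigma scaling as 1/sqrt(E)):

```python
import math

# Stephane: at 7 TeV the maximum AC-dipole excitation of 4 sigma is 1.5 mm
sigma_7TeV = 1.5e-3 / 4.0                             # ~0.375 mm
# Beam size at 450 GeV, from adiabatic damping:
sigma_450  = sigma_7TeV * math.sqrt(7000.0 / 450.0)   # ~1.48 mm

print(3.0e-3 / sigma_7TeV)  # 3 mm is 8 sigma at 7 TeV
print(3.0e-3 / sigma_450)   # 3 mm is only ~2 sigma at 450 GeV
```

So 3 mm corresponds to roughly 8 sigma at 7 TeV, in line with Frank's rough estimate of about 10 sigma, while at 450 GeV the same 3 mm is only about 2 sigma, confirming Jan's point that this amplitude is easily reached at injection.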

 

Next Meeting

Tuesday May 22nd, 14:00

CCC conference room 874/1-011

 

Provisional agenda

 

Minutes of previous meeting

Matters arising

Report from MPSC subgroup (Jan)

Commissioning of accelerator system - cryogenics (Gianluigi)

Experimental conditions and background in commissioning and operation (Helmut)

AOB

 

 

 Reported by Frank