Summary notes of the thirty-fifth meeting of the LHC Commissioning Working Group

 

Tuesday November 20th, 14:00

CCC conference room 874/1-011

Persons present

 

Minutes of the Previous Meeting and Matters Arising

There was only one comment on the minutes of the 34th LHCCWG meeting. Namely, Django reported that, despite the high reliability of the database we learnt about in the last meeting, there had been a 4-hour downtime due to a database failure in the following week. Mike confirmed that the applications were presently running with the logging database. Roger replied that, as we had been told, the new server would become available only in March 2008, and that the database reliability should be greatly increased after this date. Mike added that LEP had run with an online database for 10 years and that the technology should have improved since then.

 

Status of Design and As-Built Ring Aperture (Stefano Redaelli) 

Stefano explained the structure of the aperture-model presentations. He would first discuss the requirements for LHC operation, and then describe the status of the LHC aperture model. Brennan would next present the transfer line aperture model, and Mike the possible implementation in LSA, before Frank S. would conclude with a talk on apertures in the MAD-X online model.

 

The requirements are manifold: easy access from the CCC to all available information, an on-line link to known non-conformities, an on-line aperture model for operation, a link to results of beam-based aperture measurements, and history tracing.  Stefano commented on the various meanings of the word “available”, which should mean at least “organized in a data structure (aperture model) for higher-level usage”, but ideally be “whatever I need to do, a tool exists which can provide without effort from my side the information that I need in the format suitable for the tools that I use”, which may be difficult to fulfill given the multitude of requirements.

 

Stefano’s presentation concentrated on the “mechanical aperture”, which is to be distinguished from the “beam aperture model”. The latter contains models for mechanical tolerances, optics parameters, etc. (see J.B. Jeanneret and R. Ostojic, LHC-Project-Note 111). The famous “n1” number is therefore one example of what was not discussed in his talk. However, the “beam aperture model”, including n1, does indeed rely on the correctness of the “mechanical aperture”.

 

Stefano then described the design ring aperture model, the as-built model including magnet alignment errors, and the sources of the aperture model. The design ring aperture model was developed in a collaboration of AB-ABP, the Collimation Team, and TS-IC.  Stefano presented examples of apertures in various parts of the machine, and he illustrated the “Rectellipse” shape of the arc beam screen. A number of associated tools were developed for estimating the performance of the collimation system, in particular to determine the impact points of halo particles in beam loss studies. The investigations demonstrated the necessity of a continuous aperture model and of a detailed description including beam screens and apertures at the BLM locations. A Mathematica tool written by John Jowett can also extract and display a 3D model of the aperture.
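
For illustration only, the “Rectellipse” shape can be described as the intersection of a rectangle and an ellipse, with four parameters (rectangle half-width and half-height, ellipse semi-axes), following the MAD-X RECTELLIPSE convention. A minimal Python sketch of a point-inside-rectellipse test is given below; the numbers in the usage example are purely illustrative and are not the actual beam-screen dimensions.

    def inside_rectellipse(x, y, rect_h, rect_v, ell_a, ell_b):
        """Return True if (x, y) lies inside the intersection of a rectangle
        (half-width rect_h, half-height rect_v) and an ellipse with semi-axes
        ell_a, ell_b -- the MAD-X RECTELLIPSE convention."""
        inside_rectangle = abs(x) <= rect_h and abs(y) <= rect_v
        inside_ellipse = (x / ell_a) ** 2 + (y / ell_b) ** 2 <= 1.0
        return inside_rectangle and inside_ellipse

    # Purely illustrative numbers (metres), not the actual beam-screen dimensions:
    print(inside_rectellipse(0.015, 0.010, rect_h=0.022, rect_v=0.018,
                             ell_a=0.022, ell_b=0.022))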

 

An “as-built” aperture model was developed within the scope of the MEB activity, in parallel with the magnet-by-magnet analysis and slot assignment. At the moment the generation of this model involves manual editing by Thys Risselada. Five specific classes of geometry tolerances are taken into account; e.g. there are tolerances for “golden dipoles”.

 

Following the LHCCWG of 28 February 2007 (WISE talk by E. Todesco), the generation of a more detailed “as-built magnetic model” was pushed forward by P. Hagen, E. Wildner et al., representing the detailed measurements of the horizontal and vertical offsets of the cold bore along each magnet.

 

Replying to questions by Frank and John, Massimo explained that in the earlier “as-built” model the maximum deviation was used for the class assignment, which implied manual processing by Thys, while the latest model takes into account the detailed longitudinal profile, so that e.g. “n1” wiggles along a magnet are now seen. Stefano presented an example “n1” calculation based on the measured profile.

 

Mike asked about the implementation of the beam screen in this model. Massimo answered that some generic tolerances for the alignment of beam screen and cold bore are assumed. Ralph commented that the effects of magnet transport are not included, but only surface measurements, which implies a lot of assumptions. He inquired about the change in mechanical aperture expected due to transport.  Massimo responded that measurements are only available for a few bad cases, for which the overall shape remained the same as during the earlier surface measurement, with a maximum transverse displacement of 0.5 mm after transport.

 

Work in progress includes beam loss studies with the measured alignment errors. Taking into account the alignment errors along a single quadrupole (Q9 downstream of IR7) can change the local loss rate by about a factor of four.

 

Now turning to the sources of the aperture model, Stefano explained that MADX generates the optics model based on the layout database and the magnet strengths. He stressed that ideally MADX should also generate the aperture model from the layout database information. Though one could also generate the model directly from the layout database, MADX is a more natural environment. External manual input from the collimation team has been needed to complete the (design) aperture model. In particular, the layouts of the warm regions and detector regions were not included in the layout database until a year ago.

 

The sources for the as-built model are more numerous. They include the geometry database for the magnets (MEB activity, AT-MCS), and the TS-SU survey database which also contains non-magnetic elements. Samy and Ronny will implement the transfer of data to the TS-IC layout database.

 

Responding to a question by Brennan, Stefano and Massimo explained that elements without alignment targets are not included in the survey. Samy remarked that non-conformities are traced in MTF, and that their inclusion in the layout database is not straightforward. Ralph mentioned a recent experience of a 10 mm offset over a 15 cm distance between the TAN and a collimator. Samy added that Ralph’s experience was not a singular case, and that the apertures of other elements, e.g. at the TAS, also differ from their original design values. Jan noted that control of the apertures falls under the responsibility of the vacuum group, and that a lot of information exists, but that nobody checks for consistency. Ralph asked whether there is an official approval process for this type of non-conformity, e.g. via MTF. Brennan replied that a few checks are made as to whether any non-conformities have an impact, but only in the form of a “piecemeal treatment”.  Ralph reiterated that somebody must check the apertures and alignment around the machine, mentioning possible candidates. He insisted on the need for an approval scheme. Samy commented that sometimes changes can end up in the form of an ECR. As another example, Brennan reported the wrong tilt of all elements in point 6, where the transverse position could be corrected, but all elements are still tilted. The question was raised where such information can be found. Massimo suggested that this information should be part of the survey database. Samy next mentioned the example of D1, which is 8 mm shorter than nominal. As a consequence, the magnetic center of D1 is displaced by 4 mm longitudinally. Survey indeed has this information. Ralph inquired about the transverse offsets of elements, adding the specific question whether the TAN would also have a transverse offset in the database. Samy responded that sometimes the survey applies a certain smoothing, but that hopefully they would keep this type of information. He also mentioned that a link exists between the layout database and the geometry database.

 

Frank S. asked who is overall responsible. Ralph replied that it might be Bernard Jeanneret. Massimo corrected him, saying that Bernard cannot be in charge of all changes and inconsistencies, and that only sometimes would he receive an ECR for approval. Jan suggested that somebody should redo all aperture checks with the actual misalignments. Samy commented that the most difficult part is the vacuum data. Ralph remarked that many people who visit the tunnel are surprised about the actual directions of the beam pipes. Stefano posed the question of who would have the responsibility to ensure that the model is correct. Paul commented that this issue may become a persistent problem which might continue throughout the life of the LHC.

 

Stefano finished his presentation with a number of concluding remarks, some of which were the following: The ring aperture model can be used as input, but it does not yet provide the full functionality. MADX should be employed as a general platform. The automatic extraction of the complete aperture model from the layout database is not yet operational. Extraction tools were available only for the cold sections and BPMs, and so far only for LHC version 6.500. The good news is that all the required aperture information is now available in the layout database. An automatic extraction of the complete model is promised for the end of 2007.

 

Replying to a question by Brennan, Massimo explained that MAD generates output such as TWISS tables which can be used by any other system. John commented that the original aperture model was based only on markers, but that since 2003 apertures à la carte have been available based on MADX, with continuous apertures. Ralph remarked that too many markers and too many details would be unusable in MAD-X. Mike asked for a description of the continuous functions. John and Stefano replied that this term refers to interpolation tools providing any desired information, such as the one implemented by John in Mathematica, which uses the latest data from Stefano’s MADX file.
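
As a rough illustration of the interpolation idea behind such a “continuous” aperture description (John’s actual tool is a Mathematica notebook working on the MADX output), a minimal Python sketch could look as follows; the table values are invented and linear interpolation is used purely as an example.

    import numpy as np

    # Assumed: aperture values at discrete s positions, as one would read them
    # from a MADX TWISS/aperture table.  All numbers are invented examples.
    s_table = np.array([0.0, 10.0, 25.0, 40.0])          # s positions (m)
    aper_table = np.array([0.022, 0.022, 0.017, 0.022])  # aperture parameter (m)

    def aperture_at(s):
        """Return a linearly interpolated aperture value at position s (m)."""
        return np.interp(s, s_table, aper_table)

    print(aperture_at(18.0))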

 

Stephane commented that this continuous function presently describes an aperture model derived from the nominal aperture only, but in reality there are aperture errors, which must be included as well. John, Stefano and Stephane all agreed that this more complicated continuous model can be constructed, at least in principle. Roger posed a question about who was going to do it. Massimo added that the database is available, and that we should keep the current approach, namely maintain the database in MAD-compatible format, in order not to block other studies, e.g. collimation simulations.

 

Brennan commented that, assuming all information was fed into the layout database, the same complete information should also be included in the aperture model. However, the correctness of information is always to be checked at the source. Jan elaborated that the members of the vacuum group enter into the database what they think they have installed. However, unless they are asked specifically, they do not normally check for offsets of beam pipes.  Massimo commented that if we cannot rely on the database we are basically lost. Stefano remarked that nobody is in charge of this work now. Ralph mentioned again that he thought Bernard was responsible. However, Massimo corrected him, clarifying that Bernard devotes at most 10% of his time to this activity. He recommended that somebody else should do it, but not necessarily a member of the collimation team. Brennan and Jan promised that they themselves would check everything in LSS6 and also in the injection regions.

 

Ralph stressed that this discussion had underlined the numerous problems we are facing, and the clear need to look at all non-conformities.

 

=> ACTION: Appoint aperture czar?

 

Transfer Line Aperture Model (Brennan) 

Brennan reviewed the status of the aperture model for the transfer lines, addressing scope, status, sources and maintenance, use in the CCC, and outstanding items. The transfer lines comprise the regions TT40, TI8, TT60, and TI2, in total about 6 km of description, ranging from the SPS QDA419/619 exit to the LHC MSI septa exit.

 

Full transfer-line models are available as MADX seqedit files. All discrete line elements are included, and so are the apertures for every element in the sequence, indicated by markers, with “arbitrary” resolution achieved by slicing and interpolation. The models were checked against drawings and the sequence. Since no aperture information was available in the layout database, private aperture data and custom scripts were used to construct this model. Some further work, e.g. transverse misalignments for a few elements, is still required.
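
To illustrate the slicing idea only, a short Python sketch that generates marker positions at a chosen resolution along one element, each carrying the element’s aperture, is given below; the element name, length, and aperture in the usage example are invented.

    def slice_element(name, s_start, length, aperture, resolution):
        """Generate (marker_name, s, aperture) tuples along one element at the
        requested resolution (all lengths in metres)."""
        markers = []
        n_slices = max(1, int(round(length / resolution)))
        for i in range(n_slices + 1):
            s = s_start + i * length / n_slices
            markers.append(("%s.SLICE%d" % (name, i), s, aperture))
        return markers

    # Invented example: a 2 m chamber with 30 mm half-aperture, sliced every 0.5 m.
    for marker in slice_element("CHAMBER.EXAMPLE", s_start=100.0, length=2.0,
                                aperture=0.030, resolution=0.5):
        print(marker)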

 

Frank S. asked whether the additional information would ultimately go into the layout database. This question was addressed at the end of Brennan’s talk.

 

The transfer-line aperture model was deployed in the control room. It was successfully used for TI8 and TI2 commissioning, in various online and offline applications (for designing bump knobs, as a reference during threading, and for analysis of aperture problems). A GUI was implemented in Matlab for the visualization of Twiss functions; no interface yet exists with either YASP or LSA. The future development route should be the same as for the ring aperture.

 

As an example, Brennan showed a picture of the TI2 physical aperture up to the first LHC element. The aperture in units of sigma is displayed, including errors, on top of all BPM locations. The measurements show about +/- 10 sigma of aperture, while the specification was +/- 6 sigma. The difference is explained by the orbit quality, which is much better than what was assumed in the specification (4.5 mm peak).
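
For reference, the conversion of a mechanical aperture into units of sigma follows from sigma = sqrt(beta x geometric emittance). The Python sketch below illustrates the arithmetic with invented numbers (they are not the actual TI2 parameters) and neglects dispersion and mechanical tolerances for simplicity.

    import math

    def aperture_in_sigma(half_aperture_m, beta_m, eps_norm, gamma_rel, orbit_m=0.0):
        """Convert a mechanical half-aperture into units of beam sigma.
        eps_norm is the normalised emittance; dispersion and mechanical
        tolerances are neglected in this simplified illustration."""
        eps_geom = eps_norm / gamma_rel          # geometric emittance
        sigma = math.sqrt(beta_m * eps_geom)     # rms beam size
        return (half_aperture_m - abs(orbit_m)) / sigma

    # Invented example: 20 mm half-aperture, beta = 100 m, 3.5 um normalised
    # emittance, gamma ~ 480 (450 GeV protons), 1 mm residual orbit.
    print(aperture_in_sigma(0.020, 100.0, 3.5e-6, 480.0, orbit_m=0.001))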

 

Responding to a question by Paul, Brennan explained that this calculation assumed generic survey tolerances.

 

Now turning to data sources and maintenance, Brennan described that the preprocessor had generated the MADX aperture from the vacuum layout. In the future the wish is to generate the vacuum layout from the layout database, rather than the other way around. Other wishes include online availability in the control room (implementation in LSA/MADX/YASP?), some modifications to MADX (aperture misalignments independent of magnet misalignments – which implies aperture elements coexisting with magnetic elements – and markers at identical s locations occurring in the correct order), and vacuum sequences derived directly from the database (scripts needed, maintenance, inclusion of non-conformities).

 

Brennan was now ready to answer the earlier question from Frank S. The answer is yes: it is planned to feed the vacuum layout description back into the layout database. It was also clarified that Thys combines the MADX aperture with other MADX descriptions when generating the complete model for the transfer lines.

 

Stefano asked whether the model used was the design model with nominal tolerances. Brennan replied yes, but said that one should apply the same procedure as for the ring if the planned solution is sensible, pointing out that Stefano’s present solution does not yet contain all the elements. Stefano remarked that the transfer-line model was not much better than the model for the ring. Brennan replied that this model provided a complete description in the s direction, but he admitted that it was not yet “as built”. He also knows that some chambers are horizontally misaligned, which must still be included.

 

Stephane inquired whether one single number was used for the tolerance. Brennan answered in the affirmative: for the moment the tolerances are indeed generic. Stephane also concluded that about the same level of model accuracy had been established for the ring and the line. Samy remarked that for the transfer line too he would like to implement a direct link from the database, as for the ring. Brennan agreed that all information should come from a single source.

 

Possible Implementation in LSA (Mike) 

The three LHC on-line models are the “On-line Model” proper, FiDeL, and the aperture model. Prototypes are either in place already, or prototyping is in progress. A PERL script looping over MAD puts data into the LSA database, which allows pulling the optics strengths and elements into LSA. FiDeL coefficients, including decay and snapback, are taken into account in the settings generation. The plan is to follow a similar implementation for the aperture model, namely to use MAD as input for the apertures. Mike showed an example MAD output containing aperture values.

 

Brennan or Jan commented that there were a lot of zeros in the aperture values. Stefano replied that these zeros would be filtered out, and that 0 was the default value.
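
As a sketch only, the filtering of such zero-default aperture values during the import could look like the snippet below; the (NAME, S, APER_1) row layout and the dictionary stand in for the real import chain, which is driven by a PERL script into the LSA database.

    # Assumed row format from a MAD aperture/TWISS output: (NAME, S, APER_1).
    # A value of 0 is the default for elements without an assigned aperture
    # and is filtered out, as discussed above.  All numbers are invented.
    raw_rows = [
        ("MQ.EXAMPLE", 1234.5, 0.022),
        ("DRIFT.EXAMPLE", 1237.0, 0.0),     # 0 = default, no aperture assigned
        ("MB.EXAMPLE", 1240.0, 0.0221),
    ]

    aperture_records = {name: (s, aper)
                        for name, s, aper in raw_rows
                        if aper > 0.0}

    print(aperture_records)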

 

The LSA aperture model should provide a full description for both rings, including the design aperture plus any alignment errors, as well as access to known non-conformities, functions to derive the aperture at any given element or location, the ‘continuous model’ as a Mathematica function (from John), the beam-based aperture model, access to the measured closed orbit and bumps, and movable elements. It should also extend to the injection regions and transfer lines. 

 

Applications of the aperture model in LSA will include aperture scans, background optimization, collimator scans, injection-region scans, and dynamic checks of bump amplitudes.  

 

Mike’s proposal is to extract an extended element list, including vacuum chambers, from the layout database, to import the MAD aperture model into LSA, and to provide various client APIs in Java, e.g. an API to return the aperture, an API to return the closed orbit, and an API to return collimator positions. A possible interface with Mathematica would provide the continuous function if this is deemed necessary.
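
Purely to illustrate the shape of such client APIs (the real implementation would be in Java, and the method names below are hypothetical), one could imagine interfaces along these lines:

    from typing import Protocol, Tuple

    class ApertureService(Protocol):
        """Hypothetical sketch of the proposed LSA client APIs; the real APIs
        would be written in Java and the names here are illustrative only."""

        def get_aperture(self, element: str) -> Tuple[float, float, float, float]:
            """Return the four aperture parameters of a given element."""
            ...

        def get_closed_orbit(self, element: str, plane: str) -> float:
            """Return the measured closed orbit at the element, in metres."""
            ...

        def get_collimator_position(self, collimator: str) -> Tuple[float, float]:
            """Return the jaw positions of a collimator, in metres."""
            ...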

 

Jorg asked about the treatment of overlapping elements, e.g. the beam screens inside a magnet. Mike suggested that the position was part of the parameter key. Stephane commented that transverse offsets of 2 mm over a 10 cm distance do occur, and that this implies the need for a full model.  He elaborated that in MADX there are two ways of doing this: markers (a prohibitive number) or tables representing the position in x, y vs. s, very similar to the multipole description of the magnetic field. John commented that the present MAD implementation was written and used for n1 plotting, but that for other applications it is not a convenient representation. Along the same lines, Ralph recalled specific questions for collimation, such as tertiary and secondary beam halo, nuclear interactions in the collimator jaws, etc. He shared John’s view that for some of these applications the existing model cannot be directly used.

 

Massimo commented that deriving the LSA model from the MAD aperture model puts more emphasis on MAD.

 

The Aperture in the LHC Online Model (Frank S.) 

Frank S. reviewed the LHC on-line model and the complete LHC model, before showing some examples from the SPS.

 

The LHC on-line model is meant to simulate the LHC commissioning tasks. It collects all available magnet measurements in order to construct a complete model. This model is complemented by beam-based measurements and statistical analysis. It will supply applications for LHC commissioning. Clear interfaces to the control system are required to fulfill the objectives.

 

The complete LHC model includes the proper optics taken from the CAMIF CVS repository of CERN’s accelerators, the field errors from FiDeL/WISE, the misalignments, the aperture data (also taken from CAMIF), and a refined model based on PTC. PTC allows taking into account 3D locations, building girders, etc.

 

Frank S. concluded that the complete model of the LHC includes a model of the aperture, and that this aperture model is an integral part of the on-line applications. During commissioning the model aperture can be related to beam size, bumps, trajectories, etc.

 

Jorg asked whether we may obtain the aperture for bent trajectories in a dipole. John commented that this information could be obtained from an Excel programme. Brennan remarked that we need to know where the beam is. Jorg recommended that instead of pursuing parallel efforts, we should have one proper joint approach. Massimo agreed and pointed out that the data source is always the same.  Jan concurred, saying that the problem in fact lies in the correctness of the data in the database. Samy pointed out a few issues, e.g. that the rectellipse parameters are not appropriate for racetrack apertures, and that the tools available for checking the consistency between adjacent vacuum chambers cannot always be used. 

 

Verena commented that we will want to check whether the present n1 is consistent with the specification. Stephane remarked that the general question was to which level of accuracy we need to go. Ralph replied that extreme accuracy was not mandatory for collimation, which requires an absolute knowledge of 0.2-0.3 mm (Delta n1 = 0.1 or 0.2). Stephane inferred that this means we require a description inside each magnet. Ralph suggested that not all information was needed online. Stephane commented that the sagitta error of the main dipoles can be as large as 2.5 mm for the worst magnets. The cold bore of such dipoles was also kinked in industry in order to bring the magnet flanges back within tolerances for interconnection purposes. As a result, the transverse excursion of the cold bore (which is followed by the beam screen) can change by 1-1.5 mm over a short distance (less than 1 m), leading to a longitudinal variation of the beam clearance which is much faster than what is expected from the closed orbit. Stephane concluded that this kind of information shall be made available in the online aperture model of the machine, at a minimum by approximating the shape of the dipoles by a parabola or, if this turns out not to be sufficient for pathological magnets, by polynomials of higher order (e.g. a polynomial of order 9, as had been proposed by AT-MCS during production, which, from experience, provides a very accurate description of the dipole geometry in all cases).
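
To illustrate Stephane’s point about parabolic versus higher-order descriptions, the Python sketch below fits a synthetic (purely invented) cold-bore excursion, consisting of a smooth sagitta plus a localised kink, with a parabola and with a 9th-order polynomial; the higher-order fit follows the kinked profile more closely than the parabola.

    import numpy as np

    # Synthetic, purely illustrative cold-bore transverse excursion along one
    # dipole (s in metres over ~14.3 m, offsets in mm): a parabolic sagitta of
    # 2.5 mm peak plus a localised kink of about 1 mm over less than 1 m.
    s = np.linspace(0.0, 14.3, 200)
    sagitta = 2.5 * (1.0 - ((s - 7.15) / 7.15) ** 2)
    kink = 1.2 / (1.0 + np.exp(-(s - 10.0) / 0.2))
    excursion = sagitta + kink

    # Fit with a parabola (order 2) and with a 9th-order polynomial.
    for order in (2, 9):
        coeffs = np.polyfit(s, excursion, order)
        residual = np.max(np.abs(np.polyval(coeffs, s) - excursion))
        print("order %d: max residual = %.3f mm" % (order, residual))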

 

Gianluigi inquired which main problems were found in the injection region.  Jan responded that a few elements were missing, and/or some numbers had been typed wrongly into the database.  Some elements were found to be incorrect or of too small a dimension. It appeared that the gaps between design elements are filled following some ill-defined procedure. Gianluigi asked whether there existed no drawings of the vacuum chamber between elements. Jan summarized that the overall work was excellent, but that some problems had nevertheless been encountered. He also pointed out that the injection region is one of the most difficult places in the LHC. Ralph recalled another problem where the collimator vacuum chambers in point 1 worked well, but the same chambers did not fit in point 5, since some chamber sizes had been changed as a result of a TOTEM request.

 

Paul suggested bringing up the aperture question at the LTC. Roger will follow this up.

 

=> ACTION: Report at LTC

 

Next Meeting

Tuesday December 4th, 14:00

CCC conference room 874/1-011

 

Provisional agenda

 

Minutes of previous meeting

Matters arising

Phase 9.5 Bringing on the experimental magnets (Helmut)

Most robust SPS filling scheme (Magali)

Beam commissioning of transverse damper (Wolfgang Hofle)

AOB

 

 

 Reported by Frank