Summary notes of the fifteenth meeting of the LHC Commissioning Working Group

 

Wednesday October 18th, 14:30

CCC conference room 874/1-011

Persons present

Minutes of the Previous Meeting and Matters Arising

There were no comments on the minutes of the last meeting.

This meeting was devoted to preliminary discussions on database issues for the LHC. This follows up on an open action from Chamonix06 that will be discussed at the next LTC meeting on November 8th.

Many members of the LHCCWG were excused (a workshop on LHC upgrade issues is ongoing in Valencia).

Stefano replaced Frank as scientific secretary.

 

Databases (Mike Lamont)

Mike gave an overview of the following databases: Layout, FESA, MAD, LSA controls, Measurement and Logging. As Roger pointed out, this list does not contain all the databases used for the LHC; Mike intended to focus mainly on the ones most closely related to the LHC controls.

The layout database is the main source of information, on which most of the other databases rely. This is the LHC reference database that "is designed to store all data pertaining to the collider, its components, their layout, their manufacturing as a large unified tool". Mike showed a list of all the elements covered by the layout database and showed example tables to illustrate what the data structure looks like.

The FESA database is the source of all the FESA devices and of their properties. Once completed, it will contain the properties of most systems addressable by the control system. Triggered by a question from Roger, Mike recalled that, for example, the power converters will not be included in the FESA database because they will not be controlled via FESA. Jan asked if the FESA database is linked to the layout database. Ronny Billen replied that there is no direct 1-to-1 link between these two databases; the FESA one is derived from the layout database (see next item).

The layout database is the origin of the MAD sequence files. MAD output files are used to feed the LSA database with element configurations. The layout database is also used to populate the LSA database with power converter information, circuit configuration, etc.

The Measurement database is basically a holding pen for the Logging database: up to 7 days of measurement data will be stored before being filtered, reduced and passed to the Logging database for permanent storage. Details of the architecture of the logging system, as set up by Ronny Billen's team, can be found in Mike's slides. Ronny commented that the proposed architecture for the data logging has proven very successful; more clients than originally foreseen have expressed interest in using this system.
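The holding-pen idea described above (retain raw data for a limited window, then filter and reduce before permanent logging) can be illustrated with a minimal sketch. The names, the 7-day retention and the value-change filter criterion are illustrative assumptions, not the actual Oracle-based implementation of the logging team:

```python
from datetime import datetime, timedelta

# Sketch of the measurement -> logging data flow: raw measurements are
# kept for up to 7 days in the Measurement database, then filtered and
# reduced before being moved to permanent storage in the Logging database.

RETENTION = timedelta(days=7)  # illustrative holding-pen window

def reduce_for_logging(samples, now, min_delta=0.01):
    """Pass on only samples older than the retention window whose value
    changed by more than min_delta with respect to the previous kept one."""
    kept, last = [], None
    for t, value in samples:
        if now - t < RETENTION:
            continue  # still inside the measurement (holding-pen) window
        if last is None or abs(value - last) > min_delta:
            kept.append((t, value))
            last = value
    return kept

now = datetime(2006, 10, 18)
samples = [
    (datetime(2006, 10, 1), 1.00),
    (datetime(2006, 10, 1, 1), 1.001),  # change below min_delta: reduced away
    (datetime(2006, 10, 2), 1.05),
    (datetime(2006, 10, 17), 2.00),     # less than 7 days old: stays in the pen
]
print(reduce_for_logging(samples, now))  # two samples survive the reduction
```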

 

Mike also briefly showed the layout of the LHC Software Application (LSA) architecture and recalled a few important concepts that the controls will rely on, such as context configuration, optics models, configurations of devices and their properties, parameter configuration, settings and trims, and various rules (make, link and incorporation rules). This illustrated how the various databases discussed before will need to be linked to each other. For each device, the settings of the relevant parameters are calculated within a given machine context. For pulsed operation, as at the SPS, the contexts can be cycles, super-cycles or beam processes. At the LHC, contexts will be injection, ramp, squeeze, etc.

The settings of each context are defined for a given beam optics, which is taken from Twiss tables calculated with MAD. These are used, for example, to calculate the strength of each magnet as a time-function that is loaded into the hardware and trimmed by dedicated timing events. Discrete trims will also be possible, but this feature is not yet fully implemented and tested. Incorporation and "make" rules are used to convert hardware parameters (e.g. the current of a power supply) into physics parameters such as tune or beam energy. Pre-defined "link" rules are used to connect subsequent time-functions defined for a device (for example, the "beam out" process between two consecutive cycles). Illustrative examples are given in Mike's slides.
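The conversion between hardware and physics parameters within a machine context can be sketched as follows. This is only an illustration of the concept: the function name, the linear transfer function and the numerical values are hypothetical assumptions, not the real database-driven LSA rules. It uses the standard magnetic-rigidity relation Brho [T.m] ~ 3.3356 * p [GeV/c]:

```python
# Sketch of a rule converting a hardware parameter (power converter
# current) into a physics parameter (normalized quadrupole strength K1)
# for a given machine context (here represented by the beam energy).
# The linear transfer function and all values are illustrative only.

def make_k1(current_A, transfer_T_per_m_per_A, beam_energy_GeV):
    """K1 [1/m^2] = gradient / (B*rho), with B*rho ~ 3.3356 * p [T*m]."""
    gradient = transfer_T_per_m_per_A * current_A  # field gradient dB/dx in T/m
    brho = 3.3356 * beam_energy_GeV                # magnetic rigidity in T*m
    return gradient / brho

# The same current gives a different normalized strength at injection
# (450 GeV) and at top energy (7000 GeV) -- hence context-dependent settings.
k1_inj = make_k1(current_A=1000.0, transfer_T_per_m_per_A=0.2, beam_energy_GeV=450.0)
k1_top = make_k1(current_A=1000.0, transfer_T_per_m_per_A=0.2, beam_energy_GeV=7000.0)
```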

Stephane asked if it will be possible to include hardware limits in the settings definitions within a machine context. Mike replied that this will be the case for the LHC, but for the moment this feature is not yet ready. Notably, critical settings (e.g. BLM thresholds and collimator positions) will also be dealt with.

 

In conclusion, Mike stated that the LSA on-line database is a well-developed model that should become the sole repository for the on-line control of the LHC devices. Measurement and logging databases are basically in place. Even though the LSA experience from this year's SPS runs is positive, Mike commented that various LHC requirements are not yet met by the existing systems and will have to be addressed (discrete trims and the management of critical settings were mentioned as examples).

 

 

The presentation by Mike triggered several questions. Stephane asked if the aperture model is included in the layout database. Samy Chemli replied that this is the case. Stefano pointed out that the aperture model used in MADX is not directly derived from the database (as is the case, for example, for the optics sequences): various manipulations and a significant amount of work were needed to set up the model presently used for LHC aperture studies. Samy confirmed that this is indeed the case.

 

Roberto asked if the layout database is kept up to date by taking into account the various types of non-conformities found during hardware commissioning. These could include serious layout modifications that will affect the LHC (e.g. sextupoles that could be by-passed due to a short circuit). Samy Chemli replied that, in parallel to the released database version, there is always a study version that is kept up to date with most of the non-conformities reported to the database team. This will become the "as-built" database, which includes, for example, elements whose positions have been shifted with respect to their nominal values. The electrical non-conformities are for the moment stored in the MTF database, and work is ongoing to find the best way to include this information in the layout database (Samy Chemli, Markus Zerlauth).

 

Samy's reply on the difference between "release" and "study" versions of the database triggered an animated discussion. Samy asked whether we really need the nominal layout database or whether the "as-built" layout could be sufficient. Mike replied that he sees no need for the nominal layout on-line, because the as-built information is what is actually needed to operate the machines. Ronny Billen agreed and recalled that the operation of all CERN machines presently relies only on as-built databases, which are updated every year. Paul stressed the importance of maintaining, year after year, the information on previous machine layouts in order to keep track of the changes; this was very important for LEP.

Stephane, instead, thinks that the nominal layout should also be kept up to date: it will be useful as a reference and should hence be maintained. Samy has actually received additional requests to keep the nominal machine layout alive, for example from J.P. Quesnel.

 

Roger suggested that we could freeze the nominal layout as a reference and keep alive only the as-built database. Samy Chemli replied that this will not easily be possible because the released version of the nominal layout is not complete: some vacuum layouts and some integration issues are not yet solved, and in addition some elements are not yet included (e.g. the beam loss monitors). This means that, if one wants to keep the nominal machine layout updated, effectively two database versions have to be available in parallel. This is what is being done for the moment.

 

In this case, why can't we freeze the database of the nominal layout even if there is some missing information (Roger)? Samy believes that this would be an additional source of problems: people would work on the nominal version and realize too late that it has become obsolete, and we would then face the problem of implementing in the actual layout changes that were carried out on an obsolete version (nominal instead of as-built). This happens regularly and is often a source of mistakes! (Samy, Ronny).

 

Stephane asked if we could have a link between the two versions such that the study version would only be used to fill the holes of the nominal (incomplete) version. Samy replied that putting this in place would actually be more complicated than just keeping the two versions separate, as is done now.

 

On the same line, Roberto asked if it could be possible to include in a database version the information on the non-conformities that have been found. Samy replied that he would not know how this could be done for all possible types of non-conformities. Ronny confirmed that in some specific cases this can be done: for example, at the SPS we keep track of the cumulative alignment errors of the magnets that are found year after year. However, for other kinds of non-conformities the implementation in a database is certainly not obvious.

 

Stephane stated that one should include in the as-built database information on the maximum current of the magnets as found during the quench tests. Samy agreed but replied that he does not have this information. Similarly, he is not informed about other types of non-conformities, such as the alignment errors of the D1 magnets and the error on the main dipole transfer function. Stephane and Massimo commented that the transfer function errors should not be included in the layout database because they are stored in the magnetic measurement database and will be treated as multipole errors within MADX. The implementation into the databases of the other non-conformities indeed remains an issue.

 

 

Ideas on LHC as-built database (Paul Collier)

Paul had a first look at a high-level interface for making LHC data (which is or will be stored in a wide range of LHC databases) easily available to users who do not necessarily have a detailed knowledge of the actual data format. Paul stressed that we should not build yet another database but should rather focus on collecting, in a smart way, the information already available in existing databases. We will need to be able to access the available data with meaningful queries and also to make sure that the relevant information will actually be available.

More specifically, the idea presented by Paul would be to build an interface layer over all databases, consisting of an intelligent search engine and data-mining systems, through which Google-type queries could be made. Some ideas of features that might be needed were presented.

Some examples of the kind of queries that could be made were also presented.

Paul commented that the role of the LHC Commissioning Working Group would be to specify a list of relevant questions and "use cases" that will help define the specifications of the high-level interface layer. Our job will not be to actually build the interface itself. Paul reported that the TS colleagues have expressed interest in being in charge of setting up the interface layer. Thomas Pettersson will present his views to the LTC.

A lively discussion followed. While there was clear interest in the proposed facility, a number of concerns were also voiced. Foremost among these were the amount of data that would likely be returned by a query and the variation in format of these data. This would clearly be reduced as the intelligence of the queries increased, but it was also felt that it would be difficult to specify in detail at this stage the kinds of queries that could be made; such a specification would be easier after some operational experience. However, one could imagine defining a limited set of likely queries to use as a prototype. This could be done in a matter of a few weeks.

It was also suggested that, instead of (or possibly as well as) trying to specify according to this kind of top-down approach, the database people should be looking to build a more coherent platform from the bottom up, for example by making links between different databases. This should enable data to be easily retrieved from, for example, the current layout, MTF and magnetic measurement databases, so that any non-conformity in equipment would be visible.

Samy Chemli commented that it should be fairly simple to implement links between various databases, for example between the layout, MTF and magnetic measurement databases. One could provide a common search engine to retrieve information from all databases at the same time. Ronny Billen stated that, as a starting point, it will be very important for the people in charge of the LHC commissioning to get familiar with the navigation tools across the various databases. He therefore believes that, until experience is gained with the real machine, one should invest the required time to build easy-to-use navigation tools for the existing databases. The implementation of "intelligent" high-level search engines will be very challenging and could come later.
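The bottom-up linking discussed above, retrieving everything known about one piece of equipment from several databases at once, can be sketched as follows. The slot name, the sample records and the dictionaries standing in for the layout, MTF and magnetic measurement databases are all hypothetical, not the real CERN schemas:

```python
# Illustrative sketch of a common search across several databases keyed
# by a shared equipment/slot identifier. Data and schemas are invented.

layout = {"MQ.12R1": {"position_m": 512.3, "type": "MQ"}}
mtf = {"MQ.12R1": {"non_conformities": ["alignment shifted 0.4 mm"]}}
magnetic_meas = {"MQ.12R1": {"transfer_function_error": 1.2e-3}}

def lookup(slot, *databases):
    """Merge the records for one slot from all given databases."""
    result = {}
    for db in databases:
        result.update(db.get(slot, {}))
    return result

record = lookup("MQ.12R1", layout, mtf, magnetic_meas)
print(record["non_conformities"])  # non-conformity visible alongside layout data
```

In a real implementation the shared key would have to be a common identifier maintained consistently across the databases, which is precisely the linking work discussed above.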

Jan commented that in some cases keyword-based searches could be jeopardized by the fact that the proper keywords are often not attached to the documents; this is for example the case with some MTF documents. On the other hand, he agreed that we should start with less ambitious goals than what was proposed by Paul: the bottom-up approach proposed by Samy and Ronny is certainly useful and we should start with this. Samy warned that we should make sure that the system does not become too complicated and thus unusable; this is the risk if we start in parallel from too many different databases.

Various people expressed interest in having cross links between different databases. For example, from the layout database a link should be available to the MTF documents related to a specific piece of equipment. Samy repeated that this can be done, but he warned that for the moment some of the non-conformities and the 3D integration issues cannot be linked to. Since this seems to be the main motivation for having links between different databases, this issue should be solved.

 

Next Meeting

Wednesday October 18th, 14:30

CCC conference room 874/1-011

 

Provisional agenda

 

Minutes of previous meeting

Matters arising

Commissioning of the inject&dump operation mode (Brennan)

AOB

 

 

 Reported by Stefano