Summary notes of the thirty-fourth meeting of the LHC Commissioning Working Group

 

Tuesday November 6th, 14:00

CCC conference room 874/1-011

Persons present

 

Minutes of the Previous Meeting and Matters Arising

There were no comments on the minutes of the 33rd LHCCWG meeting. Roger described the LHCCWG programme through the end of the year. An aperture meeting on 20 November is being organized by Stefano, with presentations by Brennan, Massimo, Marek or Mike, and Frank S. In the meeting of 4 December, Helmut will address the experimental magnets, Magali the most robust filling scheme, and Wolfgang Hofle the beam commissioning of the transverse damper. Finally, on 18 December the recent beam commissioning experience in TT60 and TI2 will be presented by Jan and Brennan.

 

LSA Overview (Grzegorz Kruk) 

Grzegorz introduced the four LSA presentations for this meeting, which consisted of an overview by himself, a review of the LSA core by Wojciech Sliwinski, a discussion of the LSA database by Chris Roderick, and a tour of LSA deployment in the LHC by Mike.

 

Grzegorz’ overview talk itself was also structured into four parts, namely LSA scope, key concepts, architecture, and recent developments. Concerning the scope, LSA covers optics, settings generation, settings management and trims, hardware exploitation, and equipment and beam measurements, while it does not cover logging, fixed displays, alarms, software interlocks, or OASIS.

 

The first key concept is LSA parameters, referring to settable or measurable entities. The parameters are organized in hierarchies, starting from the physics parameters at the top and ending with hardware parameters at the bottom level. Grzegorz illustrated the concept with an example parameter space for reference currents on power converters. A second LSA concept is context. The context can be a certain super cycle, a cycle, or a beam process for a given accelerator or transfer line (i.e. a specific process like injection, ramp, or extraction within a cycle or supercycle). The equivalent representation of a cycle in LSA is a “timing user”. Grzegorz showed an example sketch of supercycles in SPS, TT10, and TT40, including the injection and extraction kickers.
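
As an illustration of these two concepts, the following minimal sketch (in Java, the language LSA is written in) models a parameter hierarchy from a physics parameter down to a hardware parameter, together with a context identifying a beam process; all class and parameter names are hypothetical and are not the actual LSA API.

    import java.util.ArrayList;
    import java.util.List;

    public class ParameterHierarchyDemo {

        // A settable or measurable entity; parameters form a tree from physics down to hardware.
        static class Parameter {
            final String name;
            final List<Parameter> children = new ArrayList<>();
            Parameter(String name) { this.name = name; }
            Parameter addChild(Parameter child) { children.add(child); return child; }
        }

        // A context: a specific beam process (e.g. injection or ramp) within a cycle or supercycle.
        static class Context {
            final String accelerator;
            final String beamProcess;
            Context(String accelerator, String beamProcess) {
                this.accelerator = accelerator;
                this.beamProcess = beamProcess;
            }
            @Override public String toString() { return accelerator + "/" + beamProcess; }
        }

        public static void main(String[] args) {
            // Physics parameter at the top, power-converter reference current at the bottom.
            Parameter tune = new Parameter("QH");                      // physics level
            Parameter strength = tune.addChild(new Parameter("KQF"));  // strength level
            strength.addChild(new Parameter("I_REF"));                 // hardware level

            Context ramp = new Context("LHC", "RAMP");
            System.out.println("parameter " + tune.name + " in context " + ramp);
        }
    }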

 

Ralph asked whether the beam process is the same as a machine mode. Mike replied that the beam process is something different, namely a software concept.

 

Another LSA concept is settings: the value of a parameter for a given context. Settings are therefore defined per parameter and per beam context. A setting contains two parts: a target value and a correction.
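
The following toy sketch illustrates this target/correction split (hypothetical classes, not the LSA implementation): the effective value of a setting is the optics-derived target plus the correction.

    public class SettingDemo {

        static class Setting {
            final String parameter;
            final String context;
            double target;      // computed from the optics
            double correction;  // entered by the operator or by a tool such as a trim
            Setting(String parameter, String context, double target) {
                this.parameter = parameter;
                this.context = context;
                this.target = target;
            }
            double value() { return target + correction; }
        }

        public static void main(String[] args) {
            Setting qh = new Setting("QH", "RAMP", 64.28);
            qh.correction = 0.003;   // small correction on top of the optics target
            System.out.println(qh.parameter + " @ " + qh.context + " = " + qh.value());
        }
    }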

 

Next, Grzegorz described the LSA architecture. It follows a 3-tier approach with three physical layers: GUI applications in the client layer, the business logic in the server tier, and finally the hardware and database layer. This architecture ensures central access to the database and hardware, comfortable testing and debugging, and easy web development. The chosen implementation is the Spring framework, an open-source Java application framework. Its advantages are that it provides all needed services as well as seamless deployment in 2- and 3-tier mode; it also facilitates integration with many third-party products and requires little maintenance effort. A schematic of the architecture was shown, highlighting that the JAPC API is used by applications with direct access, whereas the LSA client API is used for accessing all LSA core services. Database access is provided via JDBC or also via JAPC. The architecture is modular, layered, and distributed; services access the LSA core through the LSA client API. The system is data driven and based on a single, rationalized model. This makes generic applications possible and, in particular, allows an easy reuse of applications for other machines.
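
As a rough illustration of the 3-tier split, the sketch below wires a client-facing service interface, a business-logic bean, and a data-access layer together with the Spring framework; the interfaces and names are invented for this example and do not correspond to the real LSA classes.

    import org.springframework.context.annotation.AnnotationConfigApplicationContext;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    public class ThreeTierDemo {

        // Client-facing API, i.e. what a GUI application in the client tier would call.
        interface TrimService {
            void trim(String parameter, String context, double delta);
        }

        // Data-access layer; in the real system this would sit in front of the database and hardware.
        interface SettingsDao {
            void saveCorrection(String parameter, String context, double delta);
        }

        // Business logic in the middle (server) tier.
        static class TrimServiceImpl implements TrimService {
            private final SettingsDao dao;
            TrimServiceImpl(SettingsDao dao) { this.dao = dao; }
            public void trim(String parameter, String context, double delta) {
                // ...validation and propagation rules would go here...
                dao.saveCorrection(parameter, context, delta);
            }
        }

        @Configuration
        static class Config {
            @Bean SettingsDao settingsDao() {
                return (p, c, d) -> System.out.println("persisting " + p + " @ " + c + " += " + d);
            }
            @Bean TrimService trimService(SettingsDao dao) { return new TrimServiceImpl(dao); }
        }

        public static void main(String[] args) {
            AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(Config.class);
            ctx.getBean(TrimService.class).trim("QH", "RAMP", 0.001);
            ctx.close();
        }
    }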

 

Recent developments include data concentrators, e.g. BLM concentrators – implemented by Marek. The data from about 4000 BLMs will be published by 20-24 crates. The idea of the data concentration is to create one monitoring process for all crates, and to publish concentrated data.
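
The idea can be sketched as follows (illustrative code only, not Marek’s implementation): one process gathers the latest values from the individual crates and republishes them as a single concentrated acquisition, tagged with the cycle number; the class names are hypothetical.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ConcentratorDemo {

        // One concentrated publication: all monitor values, tagged with the cycle number.
        static class ConcentratedData {
            final long cycleNumber;
            final Map<String, Double> lossesPerMonitor;
            ConcentratedData(long cycleNumber, Map<String, Double> lossesPerMonitor) {
                this.cycleNumber = cycleNumber;
                this.lossesPerMonitor = lossesPerMonitor;
            }
        }

        private final Map<String, Double> latest = new ConcurrentHashMap<>();

        // Called whenever a crate publishes its monitors (in the real system via subscriptions).
        void onCrateData(String crate, Map<String, Double> monitorValues) {
            latest.putAll(monitorValues);
        }

        // Publish everything; clients such as the collimation application filter what they need.
        ConcentratedData publish(long cycleNumber) {
            return new ConcentratedData(cycleNumber, Map.copyOf(latest));
        }

        public static void main(String[] args) {
            ConcentratorDemo concentrator = new ConcentratorDemo();
            concentrator.onCrateData("crate1", Map.of("BLM.A", 0.12, "BLM.B", 0.05));
            concentrator.onCrateData("crate2", Map.of("BLM.C", 0.31));
            System.out.println(concentrator.publish(42L).lossesPerMonitor);
        }
    }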

 

Oliver suggested that each BLM or BPM may have its own synchronized time stamp. He asked how data can be concentrated if they refer to different times and how the times of origin can be identified and separated. Grzegorz replied that the data are distinguished via the cycle number. Oliver emphasized that often the need arises to understand which event triggered others, and that a much finer time resolution than the cycle number would be needed. Bernd Dehning commented that the data actually contain two time stamps, and that Oliver’s question concerns the logging, not the LSA.

 

Stefano asked whether data from one (or a few) specific monitor(s) can be acquired, emphasizing that this question would be highly relevant for collimation. Grzegorz and Mike replied that the concentrator publishes everything, and the applications must do the filtering themselves. Mike hinted that dedicated concentrators could be a possibility for the future.

 

The LSA implementation of LHC timing was presented next. Then, turning to security, Grzegorz commented on the role-based access programmed by the FNAL collaborators (LHC @ FNAL – “LAFS”); another important security feature is the management of machine critical settings (MCS). Grzegorz then reported that the MAD-X integration has progressed well. This item allows the simulation and validation of settings before applying them to the hardware, as well as the creation of knobs in MAD.

 

Oliver inquired whether the same interface is used for MAD as for the real data; and whether the data display would look the same for the user, which was answered in the affirmative. Rudiger remarked that this could be important for the interlock system, since it is an easy mistake to send settings to the real machine instead of to MAD-X. The reassuring reply was that the simulations and the real machine are handled by separate applications. Still concerned, Ralph remarked that there might still be a problem if the interfaces look very similar. Grzegorz agreed that the applications could be distinguished by different colors for example.

 

Various types of testing are ongoing. Currently, automated black-box testing is employed for the business logic and data access objects, while GUI applications are tested manually. The next goal is to set up a hardware testing environment which could be used for automatic testing before each new release, including lab FGCs, MUGEF devices, and BI instruments.
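
The kind of automated black-box test referred to here might look as follows (a toy JUnit example; the tested class is a stand-in invented for this sketch, not the real LSA business logic): only the public behaviour is exercised, without any access to internals.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class SettingsServiceBlackBoxTest {

        // Toy stand-in for a business-logic service: apply and roll back a correction.
        static class ToySettingsService {
            private double correction;
            private double previous;
            void trim(double delta) { previous = correction; correction += delta; }
            void rollback()         { correction = previous; }
            double correction()     { return correction; }
        }

        @Test
        public void rollbackRestoresThePreviousCorrection() {
            ToySettingsService service = new ToySettingsService();
            service.trim(0.002);
            service.trim(-0.001);
            service.rollback();
            assertEquals(0.002, service.correction(), 1e-12);
        }
    }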

 

Ralph asked for the conclusion of the BLM discussion, in particular whether data from special BLMs can be retrieved at 1 Hz at present. Stefano commented on his experience with a first version of the concentrator, reporting that he had requested a dedicated readout of collimator BPMs from Marek. The requested filter was implemented, but for the moment the collimator application still gets an acquisition with all the concentrator data. He and Ralph emphasized that this feature needs to be tested as soon as possible. Marek answered that the concentrator is a general system which is driven by a configuration that can easily be adapted to specific wishes. The concentrator tool has been deployed and is working, in the sense that the configured data are concentrated, in the form of one or a few values. Everybody can subscribe to this concentrator. Only two crates are included at the moment. Specifically, Marek clarified that for filtering purposes it is indeed possible to extract data of one device at 1 Hz. Ralph asked when the full BLM system will be connected to the concentrator. Mike replied this would be in the coming month, and Bernd qualified that not a single arc is finished at the moment, and that the BLM connection would follow the installation of instrumentation. Jean-Jacques remarked that he would also be interested in these tests.

 

Roger asked whether, for LHC control, LSA covers everything that can interact with the beam. Jan recalled that some items which were not part of LSA had been mentioned at the start of Grzegorz’ presentation, but these items did not affect the beam. Ralph remarked that the Roman pots should be controlled under collimation, whereas the VELO detector might not be under LSA control (but this detector is also not supposed to harm the beam). Roger inquired whether the kicker settings could be changed in any other way. Oliver asked about possible direct interaction from the Faraday cage. It was remarked that RBAC is available in principle. Rudiger commented that the various beam feedback systems also interact with the beam and are not integrated into LSA, but only exchange some information with LSA. Paul remarked that it was a great accomplishment that LSA can be deployed on all CERN machines.

 

LSA Core (Wojciech Sliwinski)

Wojciech presented an overview of the LSA core. His presentation had two parts: 1) the LSA operations view, namely how the LSA core helps the operators, and 2) LHC-driven extensions.

 

The operations data flow starts from the database and proceeds via optics import, parameter definition, generation of initial settings, and modification of settings via trims, to exploitation in the form of drive settings sent to the hardware devices.

 

Jean-Jacques asked whether it was planned to introduce an arrow from the devices to the initial settings. Wojciech replied that, yes, indeed this feature exists and is called an “expert setting”, used in particular for BI. He next stressed that the same data model is common to all accelerators (LHC, SPS, LEIR), and that the LSA core stores information on every device (FESA and non-FESA). An optics display example was shown. Parameters can be defined in three different ways: manually from SQL scripts, by import from MAD (knobs, physics parameters such as momentum, tune, and chromaticity), and via GUI applications for FESA properties, e.g. for BI settings or collimators.

 

Wojciech recalled the definition of setting. The LSA core contains operational settings for all parameters (from physics to hardware). Only “external parameters” can be sent to the hardware. Settings can be retrieved and trimmed. Settings encompass a target and a correction value. Target values are calculated from the optics. Correction values are entered by the operator or calculated by a tool, for example by the SPS “AutoTrim”. The setting viewer was shown as an example.

 

Various operations on settings are possible: modify, reload, rollback, or copy. A history of all changes is conserved. There is a general access point to all devices/properties (FESA/MUGEF/FGC/GM). Context-independent (not multiplexed) settings, e.g. thresholds, have unique values.

 

Four setting categories may be distinguished: functions (magnet strength), discrete settings (constant per context), actual settings (function snapshot), and critical settings. Concerning the latter, the notion of machine critical settings (MCSs) is introduced for the most critical and potentially dangerous devices/settings. These critical settings are complementary to RBAC, and they will be based on digital signature schemes. They are verified on the front-end level (FESA).
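
At the level of detail given in the talk, the signing of a machine critical setting could be sketched with the standard Java security API as below; this is only an illustration of the digital-signature idea, with a made-up payload, and not the actual MCS scheme.

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    public class CriticalSettingDemo {

        public static void main(String[] args) throws Exception {
            KeyPair keys = KeyPairGenerator.getInstance("RSA").generateKeyPair();
            byte[] payload = "BLM.THRESHOLD=1.5e-3".getBytes(StandardCharsets.UTF_8);  // hypothetical setting

            // An authorised application signs the critical setting before sending it.
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(keys.getPrivate());
            signer.update(payload);
            byte[] signature = signer.sign();

            // The front end (FESA level) verifies the signature before accepting the value.
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(keys.getPublic());
            verifier.update(payload);
            System.out.println("critical setting accepted: " + verifier.verify(signature));
        }
    }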

 

Settings generation requires the management and scheduling of context types, like supercycles, cycles, and beam processes. The initial settings are based on the optics and are derived from the top-level physics parameters, by propagating the latter to the lower-level parameters in the hierarchy. Setting generation is supported for functions, discrete settings, and actual settings. As an example, the generation of actual settings for a new supercycle was shown.
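
A minimal sketch of such a downward propagation is given below (hypothetical names and toy transfer functions, not the LSA implementation): a value entered for a top-level physics parameter is propagated recursively to all lower-level parameters in the hierarchy.

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.DoubleUnaryOperator;

    public class SettingsGenerationDemo {

        static class Node {
            final String name;
            final DoubleUnaryOperator fromParent;   // how this value is derived from the parent value
            final List<Node> children = new ArrayList<>();
            Node(String name, DoubleUnaryOperator fromParent) {
                this.name = name;
                this.fromParent = fromParent;
            }
        }

        static void propagate(Node node, double value, Map<String, Double> settings) {
            settings.put(node.name, value);
            for (Node child : node.children) {
                propagate(child, child.fromParent.applyAsDouble(value), settings);
            }
        }

        public static void main(String[] args) {
            Node tune = new Node("QH", x -> x);               // physics level
            Node strength = new Node("KQF", q -> 0.1 * q);    // toy rule: strength from tune
            Node current = new Node("I_REF", k -> 5000 * k);  // toy transfer function: current from strength
            tune.children.add(strength);
            strength.children.add(current);

            Map<String, Double> settings = new LinkedHashMap<>();
            propagate(tune, 64.28, settings);
            System.out.println(settings);   // generated values for all three levels
        }
    }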

 

A “trim” is a coherent modification of a setting value. Supported value types are functions and scalars. A trim refers to a specific time, context, and parameter, leading via a trim entry to a new value for the setting. The trim also saves the changed settings and sends them to the hardware. All trims are archived in a trim history and can be reverted and rolled back. A typical trim history for the tune was presented for illustration. Trim also involves a settings copy, that is a coherent copy from one context to another. It is possible to copy complete parameter systems, like the tune, e.g. one can select a trim from an arbitrary beam process and copy it to a new destination.
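
The trim mechanism can be illustrated with the toy sketch below (hypothetical classes, not the LSA trim implementation): each trim records the old and new correction in a history, and the history allows the change to be rolled back.

    import java.util.ArrayList;
    import java.util.List;

    public class TrimHistoryDemo {

        static class TrimEntry {
            final long timestamp;
            final String parameter, context;
            final double oldCorrection, newCorrection;
            TrimEntry(long timestamp, String parameter, String context, double oldCorrection, double newCorrection) {
                this.timestamp = timestamp;
                this.parameter = parameter;
                this.context = context;
                this.oldCorrection = oldCorrection;
                this.newCorrection = newCorrection;
            }
        }

        private final List<TrimEntry> history = new ArrayList<>();
        private double correction = 0.0;

        void trim(String parameter, String context, double delta) {
            double newCorrection = correction + delta;
            history.add(new TrimEntry(System.currentTimeMillis(), parameter, context, correction, newCorrection));
            correction = newCorrection;
            // ...in the real system the changed settings are also sent to the hardware here...
        }

        void rollback() {
            if (!history.isEmpty()) {
                TrimEntry last = history.remove(history.size() - 1);
                correction = last.oldCorrection;
            }
        }

        public static void main(String[] args) {
            TrimHistoryDemo tune = new TrimHistoryDemo();
            tune.trim("QH", "RAMP", +0.002);
            tune.trim("QH", "RAMP", -0.001);
            tune.rollback();   // back to the correction after the first trim
            System.out.println("correction = " + tune.correction);
        }
    }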

 

The translation from physics to hardware involves three types of rules: make rules, incorporation rules, and link rules. Make rules are used to compute parameter values from a source with automatic propagation to the lower-level parameters. Incorporation rules merge a change, in order to preserve continuity in the supercycle, by propagating a change to the neighboring beam processes. The incorporation rule is defined for each beam process type.
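
The first two rule types might be sketched as follows (hypothetical interfaces and toy numbers, not the LSA rule classes): a make rule derives a lower-level value from its source, and an incorporation rule adjusts the neighbouring beam process so that the function stays continuous after a change.

    public class RuleTypesDemo {

        interface MakeRule { double derive(double sourceValue); }

        interface IncorporationRule {
            // Return the neighbouring beam-process function adjusted to the new boundary value.
            double[] incorporate(double[] nextProcess, double newBoundaryValue);
        }

        public static void main(String[] args) {
            MakeRule strengthFromTune = q -> 0.1 * q;   // toy relation for illustration only

            IncorporationRule shiftStart = (next, boundary) -> {
                double[] adjusted = next.clone();
                adjusted[0] = boundary;   // a real rule would also decide how the change is blended out
                return adjusted;
            };

            double[] followingProcess = {1.0, 2.0, 3.0};
            double[] adjusted = shiftStart.incorporate(followingProcess, 1.2);
            System.out.println("derived strength = " + strengthFromTune.derive(64.28)
                    + ", adjusted start of next process = " + adjusted[0]);
        }
    }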

 

Roger asked whether the incorporation rules are hidden, or whether, e.g. in the example of a single point change shown by Wojciech, an operator could change the rule for a certain trim.  Wojciech replied that the operators can take part in the rule implementation. Mike added that for the LHC at some point in the future there will be an application to change the rules. One could define one’s own rule if one wanted to. Ralph commented that the rules must contain a certain set of parameters. Roger recalled that in LEP a number of these rules existed. John commented that more red dots should perhaps be plotted in the example.

 

The last rule is the link rule, which is used to compute missing parts, and which thereby links the settings between two beam-in parts of the supercycle.

 

Commenting on exploitation, Wojciech reported that various tools are available, for example the equipment control applications; read/write of any property; and generic measurement tools.

 

The second part of Wojciech’s presentation addressed the LHC-driven extensions. Since the LHC is non-cycling, the length of some beam processes is unknown in advance. As a consequence, the new concept of a hypercycle was introduced to organize LHC operations. The hypercycle involves an ordered sequence of supercycles, or a mixture of normal cycles and actual supercycles. Only one supercycle can be active at any given time. Breakpoints are introduced at which setting values can be trimmed; creation points are the start, the end, and “between”. Backward and forward propagation of trims to the preceding and following supercycles is incorporated. Discrete trims are enabled.
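
A toy sketch of the hypercycle concept is given below (invented classes, not the LSA implementation): an ordered sequence of supercycles of which only one is active at a time, with breakpoints marking where setting values may be trimmed.

    import java.util.List;

    public class HypercycleDemo {

        enum Breakpoint { START, BETWEEN, END }

        static class Hypercycle {
            private final List<String> supercycles;   // ordered sequence of supercycle names
            private int activeIndex = 0;               // only one supercycle is active at any time
            Hypercycle(List<String> supercycles) { this.supercycles = supercycles; }
            String active() { return supercycles.get(activeIndex); }
            void advance() { if (activeIndex < supercycles.size() - 1) activeIndex++; }
            boolean trimAllowedAt(Breakpoint breakpoint) { return true; }  // toy model: trims allowed at every breakpoint
        }

        public static void main(String[] args) {
            Hypercycle hc = new Hypercycle(List.of("SC_INJECT", "SC_RAMP_SQUEEZE", "SC_PHYSICS"));
            System.out.println("active supercycle: " + hc.active()
                    + ", trim at END allowed: " + hc.trimAllowedAt(Breakpoint.END));
            hc.advance();   // a trim at the END breakpoint would propagate forward to the new supercycle
            System.out.println("active supercycle: " + hc.active());
        }
    }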

 

Alick asked which protection had been put in place against changing critical settings beyond the acceptable range. Wojciech replied that threshold values are defined in the settings, and checks are performed in the database before sending changes to the hardware. Alick asked whether there was a read-back of the actual change from the hardware. This did not seem to be generally the case at present.

 

Roger asked how one would know that a collimator has reached its target value. Stefano replied that the survey of the collimator position is independent of the TRIM functionality and is implemented in the collimator control application. The same applies to all the hardware controlled by the TRIM: the TRIM is only used to change settings, and the monitoring of the corresponding properties relies on separate tools (equipment state, for example). Ralph further elaborated that there are indeed two independent processes in the collimator control: one is the motion of the motor, the other is the position read-out; the latter can trigger first a warning and then a beam dump.

 

Ralph remarked that for guaranteeing the safety of the machine a global reference for the whole LHC would be desirable, allowing for a quick check of what has changed since the machine safety had been established by beam-based measurements. He asked whether such functionality could be provided by LSA.  Wojciech replied that an “acquire” application exists which could provide this information, and that differences can be displayed.  Jan commented that the actual orbit may be more important than the orbit corrector settings, and that a similar statement would apply for the tune.

 

Ralph next inquired what would happen if a trim does not arrive at the specified target value, quoting LEP as an example where such behavior occurred and sometimes several re-trims proved necessary, and asked how such a situation would be treated at the LHC. Mike answered that a lot of power is available within LSA, and that the functions provided are easily modifiable and configurable to any specific need.

 

Paul came back to the question of propagation, which at the moment is always one-directional, from the physics parameters to the hardware. He remarked that sometimes one may want to go in the opposite direction. Grzegorz replied that this other direction had already been asked for by the rf group, and he agreed that it would be nice to have this possibility in the future. Paul strongly encouraged implementing this inverse process, which would also be required to ensure coherency of the different levels. Grzegorz stated that there is a plan to look at this in spring 2008.

 

LSA Database (Chris Roderick)

Chris presented a light-hearted overview of the database. He stressed that LSA stores objects in an organized and efficient manner. The database is of fundamental importance, as the applications may change but the data live forever. The LSA database design represents the accelerator domain. It is a complicated data model with 331 indexes, 935 constraints, 45 program units, 4502 lines of code, 3 million settings, and about 8000 parameters for 6 accelerators. The database must also provide the sequencer configuration and the electrical circuit definitions.

 

The database does not exist in isolation but interfaces with LHC layouts, controls configuration, quality assurance, asset management, and operational data; indeed, it is part of the latter. Information from the LHC layout database is automatically imported, and so are circuits from the MTF database, as well as FESA devices and measurement definitions from the controls configuration database. The LSA database is based on ORACLE.

 

It is planned to further improve the integration with FESA and to improve security, e.g. row-level security integrated with RBAC and restriction of database server access to certain IP addresses. Scalability is also important. At present the database shares a server with the measurement and logging databases. In the future it will reside alone on a dedicated high-availability redundant server, which will be installed in March 2008. In summary, the LSA database is functioning, and it is now moving to the next level.

 

Roger asked who will manage the new server. Chris replied that this will be IT, which is also responsible for the present server. An online daily backup is already in place. Incremental backups will come in the new version. Paul inquired whether a hot spare exists. Chris reassured him that the LSA database was not the same type of hardware as EDH, and that the calculated risk of failure was extremely small. He mentioned a plan to nevertheless put in place an ORACLE implementation of a standby database, but he stressed that there was no experience with such a system at CERN.

 

Roger commented that 10 years ago at LEP the backup was noticed by operations. Chris reassured him that the LHC will now have an online backup, and that it will be quite different from LEP.

 

Ralph asked for more details on the constraints for data integrity, which had briefly been alluded to. Chris mentioned several examples, e.g. there can be only one parameter with a given name, only one setting, and only one device per parameter; also the function consistency must be ensured throughout the model. The data are organized for fastest possible access. Ralph then asked what would happen if constraints were violated. Chris answered that an exception will occur. Ralph suggested that the database might for example get corrupted. Chris did not consider this to be a realistic possibility. Ralph reiterated his question about the response in case of an exception. Chris responded that the only problem encountered so far had been due to power cuts, and in that event the database had been restored from the backup.

 

Oliver asked why the MAD-X model was still under development; he had thought that the model was already available. Chris and Mike clarified that what is still being worked on is the import into the LSA database in a proper format, eliminating the need to rerun MAD.

 

Replying to a question by Stefano, Chris confirmed that the critical parameters were treated in exactly the same way as any other parameters in the hardware. He pointed out, however, that critical settings can be changed only with appropriate authentication. Stefano commented that with this present implementation there is no special hardware protection of critical settings: the probability for corruption would be the same for critical and non-critical settings.

 

Thorsten Wengler asked about the challenge of concurrent read access. Chris replied that the latter was fully supported by ORACLE and no problem is expected. Answering a question by Bernd, Chris explained that the database table can be protected with a concept similar to the role-based access. Responding to another question, Chris explained that data can be kept as long as necessary, over several years if needed.

 

LSA Deployment for LHC (Mike)

LSA has already been deployed across a variety of CERN accelerators (e.g. LEIR and the SPS transfer lines). A large number of requirements had to be met.

 

Mike discussed the LSA core in action and its complex parameter space. Items shown in blue on his slides are already (at least partially) addressed; this covers almost all items. Settings management is required for different beam processes. Optics, ramp and squeeze tests are prepared for all IRs. Ramp voltage and currents were shown as an example. A total of 47 squeeze optics had been generated by ABP. LSA includes the FIDEL import of coefficients describing field harmonics, in particular their decay and snapback. In this case the parameter space is rather small, e.g. the strength and current of the main dipole are propagated to the corrector settings. Tests were done in SM18.

 

Concerning equipment, Mike commented that many different classes of equipment exist, including XPOC, post mortem and timing, etc. A generic equipment state and an equipment monitor are in place.

 

Instrumentation fills a long list. Dedicated GUIs have been implemented for most of the instruments. Fixed displays, logging, SDDS, postmortem, and standard fitting routines are or will be available for all systems.

 

As for measurements, the SDDS format is the solution for storing measurement data. BLMLHC is a large instrumentation system where work is still in progress: data concentration was necessary, and so are threshold management, database configuration handling, etc. Other systems comprise the BPMLHC, with functionality and subsystems like capture, CO, PM, FIFO, and XPOC, parts of which are still being worked on; the BQBBQLHC, including FFT, continuous FFT, PLL, and feedback; and the BCT, for which an example display showed the first beam in the LHC. Numerous other standalone systems (RADMON, HT chromaticity, abort gap monitor, …) need to be taken care of as well.

 

LAFS is driven by FNAL. The US colleagues also prepare software for SLMs, residual gas monitor, and the Schottky monitor. Displays for the SLM and wire scanner have been provided by Lars. Other examples showed BPM test data and CMS luminosity data.

 

Timing is being dealt with by Julian and his section, including API for events, event tables, telegrams, and client APIs.

 

Jean-Jacques commented that the BOBR/BSTLHC operation falls first under the responsibility of BI (BST: Beam Synchronous Timing; BOBR: Beam synchronous timing Receiver interface for Beam Observation system). CO made the Master (BOBM) hardware and firmware, but  BI develops and runs the programs (i.e. sequence of messages) that this hardware will play. On the receiver (BOBR) side, BI did everything (i.e. HW, firmware and software).

 

The timing for the ramp is arranged via preloaded tables in the timing generator. Other items include MAD, FIDEL, on-line model, aperture model, plotting results of the on-line optics model.  All test results so far have been encouraging. Exploitation is based on an FSM (finite state machine) implementation. Specific applications include orbit (done), luminosity optimization, k modulation, and collimator scans.

 

Mike finally described how to deploy LSA in the LHC and test it over the next six months, first via individual system tests, then HWC shadowing, and finally a series of dry runs, encompassing dry injection, dry sector test, dry multiple sectors, and a full-LHC dry run. HWC shadowing means to work in the shadow of the hardware commissioning – HWC (power converters, settings, Fidel, core magnet control).

 

The individual system deployment is unavoidably fractured. HWC shadowing is ongoing. One recent example is ramping the LHCB dipole. A dry injection is planned for December, over one or two days. The latter should include tests of timing, kickers, rf (pre-pulse), standard facilities, collimators, screens, BPMs, BLMs, WS, and SRMs. 

 

Mike concluded that, to a large extent, the implementation of LSA functionality has been successfully performed. Now moving to the exploitation phase we will harness that functionality to meet the LHC requirements. The requisite software needs to be deployed and tested over the coming months. A staged approach is followed. Individual systems are tested as they become available, recalling again that these tests are then followed by HWC shadowing, and by dry runs. The overall outlook is bright.

 

Paul commented on special applications for wire scanners and other instruments, noticing that the same devices exist throughout the whole CERN complex. He asked whether one could use the same application in all machines. Mike replied that in principle this is correct, and already the case for some instruments, for example the screens (SPS, transfer lines, LHC). Paul restated his comment, saying his message is that the applications should be the same. Jean-Jacques remarked that there are some differences between the machines (e.g. cycles), however, and the required functionality is not always exactly the same either. Nevertheless Bernd mentioned that he would also prefer one application for all machines.

 

Next Meeting

Tuesday November 20th, 14:00

CCC conference room 874/1-011

 

Provisional agenda

 

Minutes of previous meeting

Matters arising

Aperture Meeting:

- Status of design and as-built ring aperture (Stefano/Massimo)

- Transfer line aperture model (Brennan)

- Possible implementation in LSA (Marek/Mike)

- The aperture in the LHC online model (Frank S.)

AOB

 

 

 Reported by Frank