Architecture


Front-end electronics architecture


Block diagram of front-end electronics and its interface to Trigger, DAQ and Detector Control System.

Detailed descriptions of the front-end architecture can be found in the following LHCb notes:
EDMS 715154, Requirements to the L1 front-end electronics.
LHCb-2001-014, Requirements to the L0 front-end electronics.
EDMS 692583, Test, time alignment, calibration and monitoring in the LHCb front-end electronics.

Description of sub-modules.

Front-end system: The front-end system of LHCb is defined as the processing and buffering of all detector signals until they are delivered to the DAQ system on a limited set of readout links. The analog signals are amplified, digitized and buffered during the latency of two trigger levels and finally zero suppressed and formatted for the DAQ system. A selected set of detectors extracts and sends a reduced set of data to the L0 trigger systems. The DAQ system is defined as the physical implementation of the High Level Trigger (HLT), made as a shared CPU farm. The timing control of the complete front-end system and the delivery of the trigger decisions are performed by a single global readout supervisor over a TTC system using optical fibers. Control and monitoring of all parameters in the front-end system during debugging, calibration or normal data taking is performed via the Experiment Control System (ECS).

Trigger system: The L0 trigger system is responsible for delivering trigger decisions to enable the front-end system to reduce the amount of data delivered to the HLT system from 40 MHz to a 1 MHz event readout rate. The first level trigger (L0) is a constant latency (4 µs) trigger, and the front-end must deliver reduced data to the L0 trigger processors in perfect synchronization with the system clock (bunch crossing clock). Higher level triggers are made as software triggers in the CPU farm.
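The fixed latency translates directly into a buffering requirement: at the 25 ns bunch crossing period, a 4 µs latency corresponds to 160 crossings that every channel must hold. A back-of-envelope check (illustrative only):

```python
# Back-of-envelope check of the L0 buffering requirement implied above.
latency_ns = 4000        # fixed L0 trigger latency (4 us)
bunch_period_ns = 25     # 40 MHz bunch crossing clock

pipeline_depth = latency_ns // bunch_period_ns
print(pipeline_depth)    # 160 crossings to buffer during the latency
```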

DAQ: The data acquisition system is defined as the physical implementation of the High Level Trigger (HLT) using a shared CPU farm. The DAQ system receives data from the front-end systems delivered on gigabit Ethernet links at an event rate of up to 1 MHz. A load balancing and throttle system allows the effective trigger rate to be controlled dynamically via the readout supervisor to match the processing capability of the DAQ system. The DAQ system is under the control of the global ECS system.

ECS: The Experiment Control System (ECS) is the top level controller of the whole LHCb experiment. All parts of the front-end electronics (and all other sub-systems) must in one way or another be connected to the ECS system. Loading of front-end parameters and monitoring of the front-end electronics, while the experiment is running, will be under the control of the ECS. The DAQ and trigger systems will also be under the control of the ECS. The ECS will be a highly distributed system with many intelligent local controllers (e.g. one local controller for the front-end electronics of one sub-detector) which can handle a part of the system with a minimum of communication to the higher levels of the ECS.

TTC: The Trigger and Timing Control (TTC) system is the backbone of the synchronization and distribution of triggers to the front-end system and is considered a part of the LHCb Timing and Fast Control. The bunch crossing clock, the L0 trigger decisions, event readout types, event destinations in the CPU farm and a set of resets and control signals are distributed to a large set of front-end units using a passive optical fan-out system. The TTC system is responsible for keeping the front-end correctly synchronized to the bunch crossings of the LHC machine and for ensuring that all front-end modules in the system remain synchronized to each other.

TTCRX: The Trigger and Timing Control Receiver (TTCRX) is a specialized ASIC which has been implemented to receive the TTC signals distributed via the optical fan-out system. The TTCRX can generate two clocks with programmable phase to align the different sub-systems to the beam crossing and across modules. The L0 trigger decision is delivered as a clock synchronous accept/reject signal together with a bunch ID of the events accepted.

Analog front-end: The analog front-end amplifies and shapes the small detector signals, with a minimum of electronics noise, to a level where they can be handled by the following stages of the front-end system. To prevent spill-over between events the detector signal must be shaped to be as short as possible (~25 ns) and have no significant baseline shifts. In some sub-detectors the analog front-end must drive the conditioned signals from the inside of the detector to electronic modules located in crates outside the detector.

L0 front-end electronics: The L0 front-end electronics receives the amplified analog detector signals and conditions (and possibly digitizes) them to be stored in the L0 pipeline buffer during the L0 latency. Data belonging to an accepted L0 trigger are extracted from the L0 pipeline and passed to the L0 derandomizer buffer, waiting to be transferred to the L1 front-end electronics. Detectors participating in the L0 trigger extract information of interest before the L0 pipeline and send it to the L0 trigger processors. This part of the front-end system is for some detectors located inside the detector itself and must drive the L0 accepted events on a set of links to electronic modules located outside the detector. Other sub-detectors have the L0 front-end electronics located in crates outside the detector. Detailed requirements can be found in a requirements document.

L0 trigger data extract: A sub-set of the LHCb detectors (Pile-up veto, Calorimeters, Preshower, SPD, and Muon) must extract a set of L0 trigger data and transmit it to the L0 trigger processors. This must be performed in each bunch crossing to enable the trigger system to determine if each bunch crossing is of possible interest. To limit the required bandwidth of the L0 trigger links the trigger data have to be minimized (compressed) on the L0 electronics card itself. The L0 data must be precisely aligned with the bunch crossings (clock phase and bunch number) to allow correlations between detector signals to be performed correctly across the whole experiment.

L0 trigger link: L0 trigger data extracted in the L0 front-end electronics are transferred to the L0 trigger processors via L0 trigger optical links based on the radiation hard GOL high speed serializer.

ADC: All data supplied from the sub-detectors to the DAQ system must be in a digital form. In detectors with large dynamic range or binary detectors, the digitization is performed before the L0 pipeline buffer. For detectors with limited range and large number of channels, where a direct digitization is cost prohibitive, the signals are stored in an analog L0 pipeline. The digitization is in this case performed after the L0 accept where one ADC can be shared between several (32) channels.

B-clk: The Bunch clock (B-clk) originates from the LHC machine and determines the bunch crossing frequency. It is distributed to the front-end system by the TTC system and is used as the general system clock of the whole front-end system. The clocking of the first stage of the front-end electronics must be precisely phase aligned with the arrival of the detector signal from a given bunch crossing taking into account the particle flight time, detector response time, delay in analog front-end, etc.

L0 pipeline buffer: The analog or digitized detector signal is stored in the L0 pipeline buffer during the L0 trigger latency waiting for the accept or reject of event data from each bunch crossing. The L0 buffer is termed a pipeline buffer because it has to store data for a fixed latency. The buffer will in most cases be implemented as a circular buffer using a set of address pointers to save power consumption and to prevent constantly moving sensitive analog data between storage elements.
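The pointer-based circular buffer described above can be sketched in a few lines (illustrative Python, not the actual ASIC logic; class and method names are invented, and the sketch assumes the buffer depth equals the L0 latency in crossings):

```python
class L0Pipeline:
    """Illustrative circular L0 pipeline: data stay in place and only a
    write pointer moves, as described for the analog pipelines."""

    def __init__(self, depth):
        self.depth = depth              # e.g. 160 cells for a 4 us latency
        self.cells = [None] * depth
        self.write_ptr = 0              # advances every bunch crossing

    def store(self, sample):
        # One sample is written per 25 ns bunch crossing.
        self.cells[self.write_ptr] = sample
        self.write_ptr = (self.write_ptr + 1) % self.depth

    def read_accepted(self):
        # On an L0 accept, the cell at the current write position holds
        # the crossing written exactly one latency (= depth) ago.
        return self.cells[self.write_ptr]
```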

B-ID: The bunch ID is an identification of the bunch crossing within the LHC bunch structure. The bunch ID can be added to the data stream in the front-end electronics at three different locations: directly at the input of the L0 pipeline buffer, at the input of the L0 derandomizer buffer (default), or at the latest when transferred to the L1 front-end electronics. Tagging data fragments in the front-end with the B-ID enables the front-end and the DAQ system to verify the correct synchronization between different parts of the front-end system. The earlier the B-ID tag is added to the data flow, the better the correct function of the front-end system can be verified. The B-ID is directly controlled by the TTCRX, which generates a B-ID counter reset signal aimed at generating correct bunch identifications at the input to the L0 derandomizer. The TTCRX has an internal B-ID counter which is available at its pins for each L0 accept. Alternative representations of the B-ID can be used if this simplifies the front-end electronics or increases the error checking capabilities. Using the L0 pipeline address as an alternative enables the correct function of the L0 pipeline and its control to be checked continuously (vertex and silicon tracker sub-detectors).
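The synchronization check that the B-ID tag makes possible can be sketched as follows (illustrative Python; the fragment layout is invented for the example):

```python
def fragments_in_sync(fragments):
    """Illustrative cross-check enabled by the B-ID tag: all data
    fragments belonging to one event must carry the same bunch ID.
    A mismatch means part of the front-end has lost synchronization.
    (The dict layout is an assumption of this sketch.)"""
    bunch_ids = {f["b_id"] for f in fragments}
    return len(bunch_ids) == 1

# A fragment tagged with the wrong crossing is caught immediately:
good = [{"b_id": 812}, {"b_id": 812}, {"b_id": 812}]
bad  = [{"b_id": 812}, {"b_id": 813}, {"b_id": 812}]
print(fragments_in_sync(good), fragments_in_sync(bad))  # True False
```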

B-Res: The Bunch Reset (B-Res) signal is generated by the TTCRX for each LHC machine cycle and is used to control bunch ID counters in the front-end electronics. The bunch reset will be asserted such that a correct B-ID is available at the input of the L0 derandomizer. The local generation of the bunch count reset in the different parts of the front-end electronics must be aligned with real bunch collisions to ensure a correct bunch identification (the TTCRX has programmable delays for this).

L0-ID: The L0 Event ID is an identification of the event number in the sequence of positive L0 triggers. The L0 event ID does not have a specified range. At lower levels of the front-end electronics it may be represented by a 4-8 bit number that must be expanded to 32 bits at higher levels in the front-end system. The L0 event ID can be added to the event data at the input of the L0 derandomizer or at the latest at the input of the L1 front-end electronics. Each L0 trigger has a unique pair of B-ID and L0-ID values, and any data found in the front-end or DAQ system not having a correct pair implies that an error has occurred.

L0-E-Res: The L0 Event Reset signal is generated by the TTCRX and is used to control L0 event ID counters in the front-end electronics.

L0 derandomizer: At the reception of a L0 trigger accept the data stored in the related L0 pipeline buffer location must be transferred into the L0 derandomizer buffer. A sufficient amount of data per detector channel must be extracted into the derandomizer to enable later stages of the front-end electronics or the DAQ to correctly identify the value and origin of the signal. For detectors with limited dynamic range and shaping times shorter than the bunch crossing period one sample per channel is sufficient. For high precision detectors or detectors with drift or shaping times longer than the bunch crossing period it may be required to extract several samples per channel, or alternatively extract on-line a single value from a few consecutive bunch crossing periods. The derandomizer must have a depth of 16 events and be read out sufficiently fast (900 ns) to keep up with the L0 trigger accept rate. The L0 derandomizer buffer will in several implementations be a part of the same physical memory as the L0 pipeline under the control of a set of read and write pointers.
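How the 16-event depth and ~900 ns readout time interact with a ~1 MHz average accept rate can be explored with a toy occupancy model (an illustrative sketch, not a design calculation; all modeling choices here are assumptions):

```python
import random

def simulate_derandomizer(n_crossings, accept_prob=0.025,
                          depth=16, readout_crossings=36, seed=1):
    """Toy model of the L0 derandomizer (illustrative only).

    Each bunch crossing is 25 ns; an L0 accept arrives with probability
    accept_prob (0.025 ~ a 1 MHz average rate at 40 MHz), and reading one
    event out takes ~36 crossings (~900 ns).  Returns the number of
    accepts that would have found the buffer full."""
    random.seed(seed)
    occupancy = 0      # events waiting in the derandomizer
    busy = 0           # crossings left on the readout in progress
    overflows = 0
    for _ in range(n_crossings):
        if busy > 0:
            busy -= 1
            if busy == 0:
                occupancy -= 1             # event fully read out
        elif occupancy > 0:
            busy = readout_crossings       # start the next readout
        if random.random() < accept_prob:
            if occupancy < depth:
                occupancy += 1
            else:
                overflows += 1             # would assert the throttle
    return overflows

# One second of beam is 40 million crossings; a shorter run for illustration:
print(simulate_derandomizer(1_000_000))
```

The point of the model is that the average readout time (~900 ns) must stay below the average accept spacing (~1000 ns at 1 MHz); the 16-deep buffer then absorbs the statistical fluctuations.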

L0 MUX: At the output of the L0 derandomizer buffer the data rate has been significantly reduced by the L0 trigger. Data from several channels (32) can now be multiplexed to share a link to the L1 front-end electronics and/or an ADC. The multiplexing must be performed such that the total read-out time does not introduce a risk of overflowing the L0 derandomizer.
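Assuming the multiplexer steps through one channel per 25 ns system clock tick (an assumption of this sketch), 32 channels read out in 800 ns, which fits inside the ~900 ns per-event budget of the L0 derandomizer:

```python
# Sketch of the multiplexing constraint: reading all channels of one
# derandomizer must fit the per-event readout budget that keeps the
# 16-deep derandomizer from overflowing at the L0 accept rate.
channels = 32
clock_period_ns = 25            # 40 MHz system clock (one channel per tick)
readout_time_ns = channels * clock_period_ns
print(readout_time_ns)          # 800 ns, within the ~900 ns budget
```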

L0 trigger processor: The L0 trigger processors are responsible for determining whether an event of interest has occurred in the experiment and for signaling this to the L0 decision unit. The processing of data must be performed in strict synchronization with the front-end electronics to be capable of extracting the correct event data from the L0 pipeline buffers.

L0 decision unit: The L0 trigger decision unit assembles the event parameters from the different L0 trigger processors and decides whether the event is of interest and should be extracted from the L0 pipeline buffers. The trigger decision is transmitted to the Readout supervisor, which performs the final acceptance of events and distributes it to the front-end electronics.

L0 readout link: After proper acceptance by the L0 trigger, and derandomization by the L0 derandomizers, event data can be multiplexed (normally 32 channels) and transmitted over the L0 readout links to the L1 front-end electronics in the counting house. The readout links are in all cases, except for the Vertex detector, optical links based on the radiation hard GOL serializer. For the Vertex detector a multiplexed analog link on differential pairs is used to transport data to the L1 front-end electronics.

L1 front-end electronics: The L1 front-end electronics receives event data accepted by the L0 trigger and performs a first basic verification of the collected data. Accepted event data are passed to a zero-suppression unit followed by event formatting to be sent to the DAQ system. All sub-detectors except the RICH detector use the common TELL1 module to implement the required functions of the L1 front-end electronics. Detailed requirements of the L1 front-end electronics can be found in a requirements document.

L0 throttle: To prevent buffer overflows in the readout system after the L0 derandomizers a throttle mechanism is used. When the L0 throttle is asserted the Readout supervisor will translate all L0 trigger accepts into L0 trigger rejects and thereby stop the flow of event data into the L1 front-end electronics. The L0 throttle network will have a certain delay before the readout supervisor will actually enforce L0 trigger rejects. The L0 throttle network is considered part of the TFC (Timing and Fast Control) system which includes features for partitioning via a set of programmable routing switches.
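The supervisor's throttle action amounts to gating the accept stream (a minimal sketch; the function name and boolean representation are invented):

```python
def supervisor_gate(l0_accepts, throttle):
    """Sketch of the readout supervisor's throttle action: while the
    throttle line is asserted, every L0 accept is turned into a reject,
    stopping the flow of event data into the L1 front-end."""
    return [acc and not thr for acc, thr in zip(l0_accepts, throttle)]

# Accepts arriving during throttled crossings are suppressed:
print(supervisor_gate([True, True, False, True],
                      [False, True, False, False]))
# [True, False, False, True]
```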

Zero-suppression: Before event data is passed to the DAQ system it must be properly zero-suppressed to limit the loading of the data links and the DAQ system. Data must also be properly organized in self-describing data structures which are easily handled by the HLT trigger system. Consistency checks on data must be performed to verify that the different event fragments originate from the same event. If any inconsistency in the data is observed the event must be flagged as being corrupted.
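The basic idea of zero suppression can be shown in a short sketch (illustrative Python; threshold-based suppression into address/value pairs is one common scheme, not necessarily the exact LHCb format):

```python
def zero_suppress(samples, threshold):
    """Illustrative zero suppression: keep only (channel, value) pairs
    above threshold, so empty channels cost no link bandwidth."""
    return [(ch, v) for ch, v in enumerate(samples) if v > threshold]

# A mostly-empty event shrinks to a few address/value pairs:
event = [0, 0, 7, 0, 0, 0, 42, 0]
print(zero_suppress(event, threshold=3))   # [(2, 7), (6, 42)]
```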

Output buffer: After zero-suppression, the event size will have large fluctuations and a new level of derandomization will be required before concentrating the data and sending it to the HLT system. When this buffer becomes full, the zero-suppression can be stopped and the L1 derandomizer will be forced to store event data until the L0 throttle is finally asserted.

Link interface: An LHCb-standardized quad Gigabit Ethernet (GBE) plug-in module interfaces the L1 front-end electronics to the readout network of the DAQ system. This link interface has four GBE ports to provide sufficient bandwidth to the DAQ system.

DAQ link: The data links to the DAQ system interface the front-end electronics directly to a commercial readout network based on the Gigabit Ethernet protocol. Multiple events are sent in one Multi-Event Package (MEP) to reduce protocol overhead.

Readout supervisor: The readout supervisor has the vital role of collecting L0 trigger decisions from the L0 trigger system and passing on only those trigger accepts that do not risk overflowing any part of the front-end and DAQ system. Triggers must be closely monitored and throttling must be applied based on a well-established functional model of the front-end system. The L0 trigger must be passed to the TTC driver in a completely synchronous manner with a minimum latency. Load balancing of the DAQ CPU farm and transmitting the destination of event data to all L1 front-ends are handled by a mechanism in the readout supervisor. The readout supervisor is also responsible for generating front-end resets and special trigger sequences for test and monitoring.

This page was last modified by JC on May 16, 2006.