Feature Story
Designing the ultrafast DAQ for Belle II

KEK’s Belle experiment has played an important role in particle physics for more than a decade. Researchers are currently hard at work on a planned upgrade. The new detector, Belle II, will be hundreds of times more powerful than the original. One component central to the Belle II detector is the data acquisition system (DAQ). Learn here how four experts at KEK plan to tame the many gigabytes of data expected to hit their system every second.
The team of four at KEK is working on rebuilding the entire Belle II data acquisition system (DAQ) in collaboration with international teams. Top, from left: Dr. Takeo Higuchi and Prof. Ryosuke Itoh. Front, from left: Prof. Mikihiko Nakao and Prof. Soh Suzuki.
The goal of KEK’s planned particle experiment, Belle II, is to explore new physics by colliding electrons and positrons. To do this, the Belle II physicists surround the particle collision point with layer upon layer of state-of-the-art detector systems. These detectors measure every characteristic of the resulting particles, such as energy, momentum, and charge. The signals from each detector component are, on their own, trivial; what is hard is dealing with all of the signals in combination. To convert the analog detector signals into usable data, scientists need a system that digitizes the individual detector signals, combines them intelligently into physics events, selects the interesting events, and stores them for later analysis. This entire data flow, from the acquisition of data from the detectors to the storage of events, is the purview of the data acquisition system (DAQ).

The DAQ for Belle II will be extremely complex, owing to the sheer volume of data and the frequency of events. “The detectors will trigger 20 thousand times each second, and each trigger produces a signal of 300 kilobytes. This means that the DAQ must deal with 6 gigabytes of data every second,” explains Prof. Ryosuke Itoh of KEK, the head of the DAQ team for both Belle and Belle II. That data rate is roughly 40 times higher than Belle’s. “Because we cannot store everything we receive, the DAQ implements a series of processes to cut down the number of events to only those which are physically interesting.”
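As a quick sanity check, those numbers are consistent with one another; the short Python sketch below simply multiplies them out (taking a kilobyte as 1,000 bytes).

    # 20,000 triggers per second, 300 kilobytes per trigger.
    trigger_rate_hz = 20_000
    event_size_bytes = 300 * 1_000

    throughput_gb_per_s = trigger_rate_hz * event_size_bytes / 1e9
    print(throughput_gb_per_s)  # -> 6.0 gigabytes per second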

The Belle II team’s new DAQ scheme will be based on three ideas: smooth transition from Belle to Belle II, unification of subsystems, and scalability. During the upgrade, the DAQ will undergo changes in almost all its components, although the original Belle DAQ design philosophy will remain intact.

The main components of the DAQ are: the front-end digitizer, the unified data link (Belle2Link), the common readout platform (COPPER), the event builder, and the high level trigger (HLT). One noticeable change from the original Belle DAQ is that the digitization will be done by the electronics installed near the detectors, instead of on the COPPER board. Each front-end digitization module will handle 100 detector channels. There are millions of channels in total. The digitized data produced by the individual detectors will be merged and then transmitted through the Belle2Link to the COPPERs. The role of the COPPERs is to receive the merged data and place it on the network so that readout PCs can bring data from all modules together to prepare for event building. Based on the reconstructed physical information of each event, the HLT software then decides if the event is to be stored.
 
The DAQ controls the online data flow, from the detector interface to data storage. Detector signals are first digitized at the front-end digitizer (FE dig.), then merged and fed into the COmmon Pipeline Platform for Electronics Readout (COPPER). COPPER puts the incoming data on the network for the readout PCs. The event builder reconstructs events for the high level trigger (HLT), which then judges whether events are to be stored for offline physics analyses.
Standardizing the data structure
The entire process in which the DAQ controls the data, from digitization at the individual detectors until storage, is called the ‘online’ process. Any data handling after the data are stored is an ‘offline’ process. The most important and unique feature of the Belle II DAQ is that the software used for data handling, both online and offline, is built on a common framework called basf2.

Prof. Ryosuke Itoh of KEK, the leader of the Belle II DAQ team, stands with the Belle DAQ machine that handles the HLT. He is responsible for the entire DAQ software.
The basf2 framework is basically a bucket brigade. Basf2 orchestrates the flow of data through the various parts of the online process, as well as any later offline processes. For component-specific operations, plug-in software is written on basf2. “The unified framework allows physicists to learn software development very quickly. The developed components are also modularized and easily upgraded and reused within the common framework, which makes it economical,” says Itoh. “We used the same scheme for Belle, and it is a very nice feature to work with.”
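To picture the bucket-brigade idea, here is a minimal sketch in Python of a framework that hands each event from module to module. The class names and the process() interface are illustrative stand-ins, not the actual basf2 API.

    class Module:
        """One stage of the bucket brigade; plug-ins override process()."""
        def process(self, event: dict) -> dict:
            return event

    class Digitizer(Module):        # hypothetical plug-in module
        def process(self, event):
            event["digits"] = [round(s) for s in event["signals"]]
            return event

    class TrackCounter(Module):     # hypothetical plug-in module
        def process(self, event):
            event["ntracks"] = len(event["digits"])
            return event

    class Framework:
        """Passes every event through the registered modules in order."""
        def __init__(self):
            self.path = []

        def add_module(self, module):
            self.path.append(module)

        def run(self, events):
            for event in events:
                for module in self.path:
                    event = module.process(event)
                yield event

    # The same framework runs online and offline; only the plug-ins differ.
    fw = Framework()
    fw.add_module(Digitizer())
    fw.add_module(TrackCounter())
    for out in fw.run([{"signals": [1.2, 3.7, 0.4]}]):
        print(out["ntracks"])  # -> 3

Because every component speaks the same interface, a module written for the online HLT can be reused unchanged in offline analysis, which is the economy Itoh describes.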

One important change in the software is that Itoh and the offline team have written the DAQ software framework and the offline software components in an object-oriented language, to adapt to a more sophisticated data-handling scheme. Itoh is currently working on the implementation of basf2 on the HLT, the COPPERs, and the readout PCs.

Processing events in parallel

Itoh is responsible for building the HLT as well. “The HLT must process a large amount of data in a short period of time,” says Itoh. “Because each event is isolated, multiple events can be processed simultaneously.” Itoh, together with student Soohyung Lee of Korea University, is now working on implementing parallel processing to increase the number of processing nodes.

In the HLT, there are two levels of parallel processing. The first level lets a computer with a multi-core processor distribute events among its cores, processing them in parallel. The second level distributes events among multiple such computers. The team has already completed the first level and is now working on the second. The key, Itoh says, is careful data handling from computer to computer.
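The first, core-level stage can be sketched in a few lines of Python with the standard multiprocessing module; the event content and the reconstruct() function here are placeholders, exploiting the fact that independent events can be handled in any order.

    from multiprocessing import Pool

    def reconstruct(event):
        """Placeholder for per-event reconstruction work."""
        return sum(event) / len(event)

    if __name__ == "__main__":
        events = [[i, i + 1.0, i + 2.0] for i in range(10_000)]  # dummy events
        # Level 1: spread independent events across the cores of one node.
        with Pool() as pool:
            results = pool.map(reconstruct, events)
        print(len(results))  # -> 10000
        # Level 2 (not shown) spreads events across many such nodes over
        # the network, which is where careful data handling matters.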
 
The Belle II DAQ software will be based on a common software framework called basf2. This standardizes the data structure throughout the entire data-handling process. Both the online and offline software will be rewritten in an object-oriented language.
 
The low-maintenance, simple event builder
Prior to data storage, the HLT needs to judge whether an event is physically interesting and worth storing. A stored event also needs to include a ‘complete’ set of data associated with it. For this, the event builder system packs the data fragments that belong to the same trigger into a single event record. Prof. Soh Suzuki of KEK, responsible for building the event builder, stresses the importance of having a low-maintenance system and using simple coding practices.

Prof. Soh Suzuki stands in front of Belle’s original FASTBUS system in the electronics hut. Suzuki is in charge of rebuilding the event builder for the Belle II DAQ.
"We need a system that has high tolerance to failure," says Suzuki. Generally, having a robust system means having multiple computers running in parallel. However, Suzuki believes that "To reduce the probability that any PC is down, it is best to minimize the number of PCs." If the current scheme of Belle is used, it will end up in a total of 300 PCs just for event building.

Suzuki plans to use a network switch to control the data flow between PCs for event building. The usual difficulty with a network switch is handling the size of the fully built event packets. At Belle II, each event packet may be as large as 300 kilobytes, and no commercially available network switch can buffer that much data to prevent packet loss when output ports overflow. Suzuki therefore uses a technique called the ‘barrel shifter’. The barrel shifter rotates connections among a series of computers in a cyclic sequence, switching from node to node as necessary. In total, the system requires around 50 inputs for the readout PCs and around 10 outputs for the HLT units.

The key feature of Suzuki’s design is that he arranges PCs in a mesh to produce the required numbers of input and output ports. First, he uses two rows of four 4-input, 4-output barrel shifters to create a 16-input, 16-output barrel shifter. Next, he connects four 16-input, 16-output barrel shifters with sixteen 4-input, 4-output barrel shifters to create a 64-input, 64-output barrel shifter. This is large enough to host the required number of input and output ports. With this design, the number of required PCs drops to 48. A prototype of this system was developed last year, proving the idea feasible.
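The PC counts follow directly from this two-stage construction; the small Python check below reproduces the figures in the text and the diagram.

    # One PC implements one 4-input, 4-output barrel shifter.

    # A 16x16 barrel shifter: two rows of four 4x4 units.
    pcs_16x16 = 2 * 4                  # -> 8 PCs

    # A 64x64 barrel shifter: four 16x16 shifters plus sixteen 4x4 units.
    pcs_64x64 = 4 * pcs_16x16 + 16     # -> 48 PCs

    print(pcs_16x16, pcs_64x64)        # -> 8 48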
 
Left top: a 4-input, 4-output barrel shifter built from one PC. Left bottom: a 16-input, 16-output barrel shifter built from 8 PCs. Using PCs as 4-input, 4-output barrel shifters and arranging them in a mesh configuration creates a 64-input, 64-output barrel shifter (right).
 
Suzuki is also working on event builder software that can handle the high data rate efficiently. The Belle II event builder will receive data packets of varying sizes at varying times, and will still have to process six gigabytes every second. At the time of Belle, PCs did not have enough processing power to handle the entire data stream in one process, so the system had to be split across multiple processes tied together by inter-process communication techniques such as shared memory, semaphores, and message queues. The many processes made it difficult to understand the system’s behavior, especially when something went wrong. The recent evolution of the Linux operating system and the dramatic increase in CPU power have made it possible to build a single-process event builder unit. Suzuki built a prototype and examined its performance under the expected Belle II data rate. “The result shows that a PC with the latest generation of CPU has sufficient power for a 4-input, 4-output barrel shifter unit,” says Suzuki.
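A single-process builder unit can be pictured as one loop that collects fragments by trigger number, with no inter-process plumbing at all. The sketch below is a minimal illustration of that idea in Python; the fragment format and the four-input count are assumptions matching a 4-input, 4-output unit.

    import random
    from collections import defaultdict

    def build_events(fragments, n_inputs=4):
        """Merge fragments that share a trigger number into full events,
        inside a single process: no shared memory, semaphores, or
        message queues needed."""
        pending = defaultdict(dict)            # trigger number -> fragments
        for source_id, trigger_no, payload in fragments:
            pending[trigger_no][source_id] = payload
            if len(pending[trigger_no]) == n_inputs:   # event complete
                yield trigger_no, pending.pop(trigger_no)

    # Dummy fragments of varying sizes from 4 inputs, in arbitrary order.
    fragments = [(src, trg, b"x" * (src + trg + 1))
                 for trg in range(3) for src in range(4)]
    random.shuffle(fragments)
    for trigger_no, event in build_events(fragments):
        print(trigger_no, sorted(event))   # each event has all 4 sources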

Dr. Takeo Higuchi of KEK stands in front of the Belle DAQ system in the electronics hut.
Digitizing first
The DAQ team completed the COmmon Pipeline Platform for Electronics Readout, COPPER, five years ago. COPPER is a general-purpose pipelined readout board whose aim is to reduce dead time: the time after each data readout during which the DAQ cannot handle new incoming data.

As a live test, scientists replaced most of the previous system at Belle with COPPER. The system has run successfully ever since. In the Belle system, COPPER carries four daughter cards, collectively called FINESSE (Front-end INstrumentation Entity for Sub-detector Specific Electronics). These cards take care of data digitization. The data are then transferred to a CPU on COPPER. The CPU is like a tiny computer, on which relatively complex procedures can easily be coded. As a result of the COPPER installation, the dead time of the central drift chamber was reduced by 90%.

For Belle II, the digitizers on FINESSE will be moved to the front-end electronics installed inside or near each detector. In this new design, FINESSE will act as a receiver of data sent through optical fiber from the detector electronics. “If the digitizers remained on COPPER, the number of cables required to transmit the analog signals to FINESSE would be enormous,” explains Dr. Takeo Higuchi, who has been in charge of designing, developing, and implementing COPPER since the inception of the project eight years ago. “For Belle II, each detector has the freedom to decide on an optimal digitization scheme.”

COPPER hosts four FINESSE cards (left) to receive data from the front-end electronics. A CPU that runs Linux will be installed in the upper right section. The lower right contains a trigger module slot and a generic PMC (PCI Mezzanine Card) slot, as well as an Ethernet port.
The front-end digitizer is implemented on a customizable integrated circuit called a field-programmable gate array (FPGA). Recent technological improvements allow relatively complex functions to be coded into FPGAs. Higuchi will collaborate closely with each detector team to work out the details of the digitization for each detector.

The most challenging detector for the DAQ is the newest component, the innermost detector: the pixel detector (PXD). Since the PXD is the closest detector to the collision point, the data produced by each event is enormous, at 1 megabyte. “This is beyond COPPER’s capability,” says Higuchi. There are currently three options proposed by groups from Germany and KEK, in which the PXD data are read out either by new hardware or by a software component, and then handled by a new software component. Higuchi coordinates the collaborative efforts between the groups. “Germany’s hardware solution would be an innovative technology, and may be a big challenge.” The team is now drawing up detailed plans to meet the deadline.

A unified data link
Belle II will have to handle a dramatically increased number of detector channels and amount of data, which demands that digitization be done inside or just near the detector. The digitized data produced by these “front-end” digitizers will be sent through a 30-meter unified data link, Belle2Link, which packs and serializes many channels of digitized data into a single optical fiber line. Another DAQ expert, Prof. Mikihiko Nakao of KEK, together with the group led by Prof. Zhen-An Liu of the Institute of High Energy Physics (IHEP) in China, is working on this data transmission from point A to point B. Building the link, however, is much harder than it may sound, as this is the place where Belle II’s high trigger rate creates a huge amount of data.

Prof. Mikihiko Nakao of KEK is in charge of the front-end digitizer and the unified data link.
When an interesting event occurs in the Belle II detector, the trigger logic generates a decision signal. This signal must be distributed to thousands of front-end digitizers within five microseconds. Then, through the Belle2Link, the digitized data are sent to several hundred COPPER boards. In the process, all signals must be synchronized. This effectively means that the Belle2Link must know that the destination COPPER channels are ready at the time of transmission, so that no signals are lost. In the current Belle system, to ensure this readiness, a trigger decision signal is allowed only after the entire digitization of the previous event is complete. The scheme works for now, as the trigger rate is only 400 hertz. For Belle II, with a trigger rate of up to 30 kilohertz, the same scheme would block trigger signals for a large fraction of the time. To reduce this ‘dead’ fraction, the trigger and data link system is ‘pipelined’, meaning several triggers are processed at the same time. This requires a mechanism that guarantees transmission of all pipelined triggers without losing any of them.
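A rough calculation shows why the blocking scheme stops working at Belle II rates. The 10-microsecond per-event digitization time below is an assumed, illustrative figure, not a Belle II specification.

    # Fraction of time the blocking (non-pipelined) scheme refuses triggers.
    digitization_time_s = 10e-6        # assumed time to digitize one event

    for rate_hz in (400, 30_000):
        busy_fraction = rate_hz * digitization_time_s
        print(f"{rate_hz:>6} Hz -> busy {busy_fraction:.1%} of the time")
    # ->    400 Hz -> busy 0.4% of the time
    # ->  30000 Hz -> busy 30.0% of the time

Pipelining removes that loss by letting several events be in flight through the digitizers and the link at once, at the price of the bookkeeping described above.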

The main concept of the Belle2Link is again unification: it unifies the hardware design, the firmware, and the protocols. The interfaces to the detectors are also designed to handle different types of data in one common framework. “If the readout logic varies detector by detector, it is harder for the DAQ to handle the data, so the logic is integrated and all detector electronics use the same logic,” says Nakao. Owing to this, the DAQ will be able to talk to the detector boards directly.

Nakao’s main responsibility is the distribution of the trigger signal to the front-end digitizers. In order to guarantee the synchronization of the entire Belle II DAQ system, “we need many more signals on top of the trigger signal itself. More signals mean more cables for transmission. By serializing these related signals into one signal, we can reduce the number of cables required,” explains Nakao. He is now working on the design and development of the circuit boards, and on the firmware that will drive them.
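One way to picture this serialization is to pack several trigger-related fields into a single word that travels over one line. The field layout below is purely illustrative; the real Belle II formats are defined by Nakao’s firmware, not by this sketch.

    def pack_trigger_word(trigger: int, tag: int, timestamp: int) -> int:
        """Pack a trigger flag, a 7-bit event tag, and a 24-bit timestamp
        into one 32-bit word, so one serial line replaces three cables.
        Field widths are made up for illustration."""
        assert trigger < 2 and tag < 2**7 and timestamp < 2**24
        return (trigger << 31) | (tag << 24) | timestamp

    def unpack_trigger_word(word: int):
        return word >> 31, (word >> 24) & 0x7F, word & 0xFFFFFF

    word = pack_trigger_word(trigger=1, tag=42, timestamp=123_456)
    assert unpack_trigger_word(word) == (1, 42, 123_456)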

At Belle II, the digitized data from the front-end digitizers are transmitted to the FINESSE receivers via the unified data link.
 
The gang of four
For the large amount of demanding work the DAQ requires, the team is relatively small. Four researchers at KEK, in collaboration with international teams, are going to rebuild the entire DAQ system for Belle II. The core members are the same four who have built, maintained, and continually upgraded the Belle DAQ for the past two decades.

“Everyone here is very capable, and things are moving swiftly,” says Itoh. “We are aiming for a stable, low-maintenance system design. Also, we will need to scale up our system as Belle II’s luminosity advances. The Belle DAQ was an evolving technology, and the Belle II DAQ will also be an evolving technology.” Perfecting this system will require the full abilities of all four experts.
 