
Bug #1663

TRefArray Reference number limit

Added by Greis, Jan about 7 years ago. Updated about 7 years ago.

Status: Open
Priority: Normal
Assignee:
Category: -
Target version: -
Start date: 31 March 2015
Due date:
% Done: 0%
Estimated time:
Workflow: New Issue

Description

Apparently the number of TRefArray references in ROOT that can be held within one process ID is limited to 24 bits (16,777,215 = 2^24 − 1), see https://root.cern.ch/root/html/src/TRefArrayIter.cxx.html line 223
If that limit is exceeded, ROOT tries to switch to a new TProcessID. This fails, producing errors like

TRefArray::TRefArray::AddAtAndExpand:0: RuntimeWarning: The ProcessID for the 0xcb3819b0 has been switched to ProcessID1/82330532-d4f3-11e4-9717-0101007fbeef:1. There are too many referenced objects.
Error in <TRefArray::AddAtAndExpand>: The object at 0xcb00a9a0 is not registered in the process the TRefArray points to (pid = ProcessID1/82330532-d4f3-11e4-9717-0101007fbeef)

followed by a segfault. It is possible that this can be worked around by monitoring the number of references and manually switching TProcessID in between spills, so that there won't be references to separate process IDs within the same object.

To reproduce the error:
  • take a 0.9.4 installation
  • Add the following lines to any used mapper:
    #include <iostream>
    #include "TProcessID.h" 
    
    int TPID_count = TProcessID::GetObjectCount();
    double TPID_percent = (TPID_count / 16777215.0) * 100;
    std::cerr << "TProcessID Count: " << TPID_count << " (" << TPID_percent << "%)\n";
    
  • make the following changes to bin/Global/datacard_200MeV_mu_plus.py:
    • pencilbeam
    • increase the reference particle energy to about 300 so that most particles travel all the way through the beamline
  • run python simulate_global.py --configuration_file datacard_200MeV_mu_plus.py

It might take more than 10,000 particles to reach the limit (my mappers, which aren't in the release branch, seem to increase the count by about 30%, and I get the segfault at around 9,100), but the output from the above code should demonstrate that this is a situation that needs to be dealt with.

#1

Updated by Greis, Jan about 7 years ago

A related question is why the count accumulates across spills. I don't understand the data handling of MAUS well enough to know whether this is a bug or by design.
