PC 110808 SSI proposal

Below are mainly big-task possibilities. Apologies if you expected this
sooner; Chris Tunnell has been away at a conference, and we wanted to merge our ideas before sending (partially successful, so we hope it is not too disjoint).

Big task

The major, critical-path task that currently keeps me up at night is
'online data quality' (in MICE jargon people sometimes call this
'online reconstruction'). The majority of it is the type of
bookkeeping and optimization that programmers live for.

What

'Online data quality' is a standard required feature of
particle physics experiment software: we need to be able
to assess the quality of the data we are taking in real time. This is
in contrast to 'offline data quality', where people analyse the data on
their home machines. There are three use cases for looking at plots:
live in the control room; short-term history (data-quality shifts at
the end of the week to assess which runs are good); and offline at home
when somebody is trying to write a paper.

Four subtasks:

Subtask data flow:

At the moment data is held in an in-memory buffer. This should be upgraded to store the data in a database, to support multiple concurrent IO requests and larger data volumes.
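
As a hedged sketch of the database step, assuming a local CouchDB instance and the couchdb-python bindings (the 'mice_events' database name and the event fields are made up for illustration):

    import couchdb

    # Connect to a local CouchDB server; the URL is an assumption.
    server = couchdb.Server('http://localhost:5984/')

    # Create the database on first use, otherwise reuse it
    # ('mice_events' is a hypothetical name).
    db_name = 'mice_events'
    db = server.create(db_name) if db_name not in server else server[db_name]

    # Each event from the DAQ buffer becomes one JSON document; several
    # clients can then read and write concurrently, rather than
    # contending for a single in-memory buffer.
    event = {'run': 1234, 'spill': 56, 'detector_data': '...'}
    doc_id, doc_rev = db.save(event)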

Subtask histogramming:

Typically we can monitor detector performance by making histograms of certain elements in the data structure. The task is to make an application that can access the data structure and fill user-defined elements into 1D or 2D histograms. We would expect to use PyROOT (or possibly plain ROOT) for the histogramming.
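
As a rough illustration of what we mean, here is a minimal PyROOT sketch; the 'tof_dt' element, the binning and the file names are all made up for the example:

    import ROOT

    # Stand-in for the real data structure: a list of event dictionaries
    # with a made-up 'tof_dt' element (a time-of-flight difference, say).
    events = [{'tof_dt': 24.5}, {'tof_dt': 26.1}, {'tof_dt': 25.3}]

    # 1D histogram of the user-defined element; name, title, binning and
    # range are all placeholders.
    hist = ROOT.TH1D('tof_dt', 'TOF time difference;dt [ns];events',
                     50, 0.0, 50.0)
    for event in events:
        hist.Fill(event['tof_dt'])

    # Write out png and eps so the UI subtask can pick the plots up.
    canvas = ROOT.TCanvas('c', 'c')
    hist.Draw()
    canvas.SaveAs('tof_dt.png')
    canvas.SaveAs('tof_dt.eps')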

Subtask UI:

The generated histograms should be visible from within the control room and (bonus points) externally. One option is a web-browser interface for looking at the histos (potentially django), updating every few seconds.
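
One possible shape for this, as a sketch only: a minimal django view that lists the generated images, with a meta-refresh tag giving the 'updates every few seconds' behaviour. The plot directory and URL layout are assumptions:

    # views.py: one view that lists the generated histogram images.
    import os
    from django.http import HttpResponse

    PLOT_DIR = '/var/www/plots'  # assumption: wherever the png files land

    def plot_index(request):
        pngs = sorted(f for f in os.listdir(PLOT_DIR) if f.endswith('.png'))
        # Meta refresh re-polls every 5 seconds, giving the 'updates
        # every few seconds' behaviour with no JavaScript.
        body = '<html><head><meta http-equiv="refresh" content="5">'
        body += '</head><body>'
        # Assumes the static file server maps /static/plots/ onto PLOT_DIR.
        body += ''.join('<img src="/static/plots/%s">' % name
                        for name in pngs)
        body += '</body></html>'
        return HttpResponse(body)

    # urls.py would then route to this view, e.g.:
    # from django.urls import path
    # urlpatterns = [path('plots/', plot_index)]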

Subtask distributed computing (optional):

The code that manipulates the data (e.g. pattern recognition) may not be quick enough when run on a single core. The cheapest way to fix this problem is to distribute the work over several cores, which also makes the overall code more extensible and reusable. We have a fallback here, which is to cut out some of the reconstruction code, but this may be undesirable.
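
A sketch of how the per-event work might be farmed out with celery; the broker URL is an assumption and the task body is a placeholder:

    from celery import Celery

    # Broker URL is an assumption; celery can sit on top of RabbitMQ,
    # Redis and others.
    app = Celery('reconstruction', broker='amqp://localhost//')

    @app.task
    def reconstruct(event):
        # Placeholder for the expensive per-event step (pattern
        # recognition and so on); here it just echoes the event back.
        # Events are independent, so each task can run on any core of
        # any worker machine.
        return event

    if __name__ == '__main__':
        # Fan a batch of events out over the worker pool; each .delay()
        # call queues one event.
        for event in [{'spill': 1}, {'spill': 2}]:
            reconstruct.delay(event)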

Who:

Experience with python is essential. PyROOT, C++, django, CouchDB and distributed computing (e.g. celery) are all a bonus.

When:

The drop-dead date for this assignment is currently February/March;
if it looks like we'll miss that date then we'll start
reassigning people. It shouldn't require even 1 FTE for that
timescale, so we're not talking about much work. That date depends on
when we receive a shipment of a detector.

However, there is a test in October for which it would be nice to have a
prototype ready, if at all possible. It would buy us a lot of street
cred.

How:

We imagine that whoever takes this on would communicate closely with (probably) Chris Tunnell, who could offer guidance.

Using the existing API, we think the way to approach this is either to
use a map-reduce package (which may be tough, since most map-reduce
frameworks aren't meant for live feeds) OR the following setup, which
we think is minimal code and the most maintainable:

Input gets fed into couchDB. The parallel data-processing step happens
by sending the input from couchdb to a distributed task queue, and the
results are returned to the database. Each 'plot' is a single-threaded
application on a different machine, where that machine has discovered
the plots within some directory. Once the plots (png and eps) are
made, they are stored 'somewhere'. A django web interface then
displays the plots and allows one to tweak the time interval of the
plots (and maybe the ranges of the axes, etc.).
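
To make that concrete, here is a hedged sketch of the glue between the database and the task queue, reusing the couchdb and celery pieces sketched above. The 'tasks' module name is hypothetical, and the polling loop is an assumption (CouchDB's changes feed would be the more natural mechanism for a live feed):

    import time

    import couchdb

    # 'tasks' is a hypothetical module holding the celery task from
    # the distributed-computing sketch above.
    from tasks import reconstruct

    server = couchdb.Server('http://localhost:5984/')
    db = server['mice_events']  # the hypothetical database from the data-flow sketch

    seen = set()
    while True:
        # Poll for new event documents and push each onto the task
        # queue; a real implementation would use CouchDB's changes feed
        # rather than a full scan.
        for doc_id in db:
            if doc_id not in seen:
                seen.add(doc_id)
                reconstruct.delay(dict(db[doc_id]))
        time.sleep(2)  # matches the 'updates every few seconds' cadence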
