Feature #691
Online Data Quality (SSI collaborative effort)
100%
Description
This ticket will contain information related to performing online data quality (formerly called 'online reconstruction') within the MAUS framework. This is a collaborative effort with the SSI.
Updated by Jackson, Mike over 12 years ago
Tasks completed this week:
- Updated the SSI project initiation doc, sent it out, and updated it after comments from Chris and Chris.
- Subscribed to e-mail lists.
- Read IPAC 2011 paper.
- Registered with MICE web site.
- Explored web site and noted project features.
- Downloaded and (eventually) built release version on Scientific Linux 5 server and Ubuntu 11.04 VM on Windows 7.
- Registered with LaunchPad and checked out repository.
- Read user doc on wiki and noted down suggestions.
- Created wiki page for the collaboration.
Updated by Tunnell, Christopher over 12 years ago
- Subject changed from Online Data Quality to Online Data Quality (SSI collaborative effort)
Updated by Jackson, Mike about 12 years ago
Tasks completed this week:
- Read developer doc on wiki and noted comments from sustainability perspective.
- Built MAUS from BZR checkout.
- Ran all bin/ applications.
- Using these as a starting point, explored the map-reduce framework and worker code.
- Formed questions for F2F.
- Requested new EPCC "maus" VM.
Updated by Tunnell, Christopher about 12 years ago
Agenda for F2F between Jackson and Tunnell:
- Agenda bashing.
- Discuss requirements in detail.
- Work plan and key dates.
- Review and complete project initiation doc.
- Discuss FermiLab travel.
- AOB.
Notes to follow....
Updated by Tunnell, Christopher about 12 years ago
Example pyROOT code
import ROOT
h1 = ROOT.TH1F("h1", "title", 100, 0, 10)  # bins, x_min, x_max
for x in range(1, 3):
    h1.Fill(x)
c1 = ROOT.TCanvas()
h1.Draw()
c1.Print("blah.png")
Updated by Jackson, Mike about 12 years ago
Notes from F2F:
- Requirements in detail.
  - Discussed design in detail. Design will be added to MAUS SSI Component Design.
- Work plan and key dates.
  - Work plan, based on the above, agreed. Will add the plan to the MAUS SSI page.
- Review and complete project initiation doc.
  - Done. Will attach to this ticket when contents have been agreed.
- FermiLab travel. Important meetings include:
  - Oct - FermiLab - Linda popping in and out, and others. Work together and training.
  - Feb 08-11/12 - RAL - MICE collaboration workshops - to present to the group at large. Dates TBC.
  - May - CHEP, NY - presentation to the wider community. Publicise SSI. Work towards a MAUS paper by Oct 2012. Diagrams. Evolve into a follow-up paper.
  - June/July - Glasgow. Depending on the Feb reception and worthwhile things to conclude. Depends on Feb-June changes.
- Would be most useful for Mike to attend the RAL and CHEP meetings.
Updated by Jackson, Mike about 12 years ago
- F2F with Chris T.
- Completed MAUS/SSI PID.
- Wrote up design at MAUS SSI Component Design
- Started writing up Sustainability evaluation
Updated by Jackson, Mike about 12 years ago
Tasks completed this week:
- Spawned tickets for new tasks.
- Wrote up MAUS SSI Evaluation - let me know if this should become one ticket or many (and there would be many). I'm happy to make some of these changes myself if that helps?
- Checked out and built latest version on maus.epcc.ed.ac.uk VM.
- Pushed my branch to LaunchPad.
- #702 - Wrote ReducePyMatplotlibHistogram and OutputPyImage worker to save images to files.
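For illustration only (not the actual ReducePyMatplotlibHistogram/OutputPyImage code), a minimal sketch of rendering a matplotlib histogram and saving it to a file on a headless machine; the function name, labels and bin count are placeholders:

# Hypothetical sketch: render a histogram with matplotlib and save it to a file.
# The Agg backend is selected so no display is needed (e.g. on a headless VM).
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

def save_histogram(values, bins, path):
    """Render a histogram of 'values' and write it to 'path' (format from suffix)."""
    figure = plt.figure()
    axes = figure.add_subplot(1, 1, 1)
    axes.hist(values, bins=bins)
    axes.set_xlabel("TDC/ADC count")  # illustrative label only
    axes.set_ylabel("Frequency")
    figure.savefig(path)
    plt.close(figure)

save_histogram([1, 2, 2, 3, 3, 3], bins=10, path="histogram.png")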
Updated by Jackson, Mike about 12 years ago
Tasks completed this week: none (at a conference).
For next week plan to
- Track down character encoding issues for non-PDFs in Matplotlib histogram reducer (#702).
- Add tests.
- Start web-based histogram presenter
Updated by Jackson, Mike about 12 years ago
Jackson, Mike wrote:
- Track down character encoding issues for non-PDFs in Matplotlib histogram reducer (#702).
Correction: non-PS, not non-PDF.
Updated by Jackson, Mike about 12 years ago
Tasks this week:
- Added plan, timetable, risks to MAUS SSI page
- #702 - resolved character encoding issues for histogram reducer and wrote tests.
- #703 - started.
Updated by Jackson, Mike about 12 years ago
Tasks this week:
- Less than expected due to illness :-(
- #702 - worked on refactoring histogram reducer after feedback.
Next week's plans:
Updated by Tunnell, Christopher about 12 years ago
@next1: I think it's essentially done except minor feedback. If it takes more than a day, there are bigger fish to fry.
@next2: Want us to host it? Or just run the demo server?
@next3: sweet. The IRC channel is a great way to get support. I can remind myself of useful tips they told me about birth/death or share my <5-10 hours of playing knowledge if need be.
Updated by Jackson, Mike about 12 years ago
Updated by Tunnell, Christopher about 12 years ago
Wrote up MAUS SSI Evaluation - let me know if this should become one ticket or many (and there would be many). I'm happy to make some of these changes myself if that helps?
All of the changes you describe sound very reasonable, so any of them you don't mind implementing would be greatly appreciated. We monitor wiki edits and such, so we would complain if you did something we didn't like - so that shouldn't be a concern for you.
Otherwise, I'll try to implement them sometime in the next week.
Updated by Jackson, Mike about 12 years ago
Tasks this week:
Misc:
- Drafted text for SSI's "who we work with" pages.
- Updated maus-apps HTML pages to be full HTML pages, not fragments.
- Quick look at histogram examples in CT's e-mail and added to MAUSThirdPartyOnlineMonitoring
- Can this be closed?
- Looked at CT's Celery code.
- Fixed Pylint failures and failure caused by "celery" package name.
- Bogged down in building MAUS after updating my branch to the current release.
- Looked at dynamic configuration of worker nodes.
- Read intro to CouchDB.
- yum install works fine for CouchDB 0.1.1
- Bogged down in building CouchDB 1.1.1 due to myriad open source dependency woes!
Updated by Tunnell, Christopher about 12 years ago
Would you suggest something other than CouchDB? I just know it allows for local clones and fancy stuff, and I've had some good experience with it before, but I haven't tried MongoDB or anything. At first, and since we have low load, really anything for storing JSON documents that allows concurrent read/write would work.
Updated by Jackson, Mike about 12 years ago
I'll have a look at MongoDB too. Changes to Go.py can be done in such a way as to make the specific JSON document store used pluggable.
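To illustrate the pluggable idea, here is a minimal sketch of a document-store interface with a dictionary-backed implementation; the class and method names are hypothetical and the real branch may use different ones:

# Hypothetical sketch of a pluggable JSON document store; names are illustrative.
import json

class DocumentStore(object):
    """Minimal interface Go.py could code against, regardless of backend."""
    def connect(self, parameters):
        raise NotImplementedError()
    def put(self, doc_id, doc):
        raise NotImplementedError()
    def get(self, doc_id):
        raise NotImplementedError()

class InMemoryDocumentStore(DocumentStore):
    """Dictionary-backed store, handy for unit tests."""
    def __init__(self):
        self._docs = {}
    def connect(self, parameters):
        pass
    def put(self, doc_id, doc):
        self._docs[doc_id] = json.dumps(doc)
    def get(self, doc_id):
        return json.loads(self._docs[doc_id])

# Go.py would pick the concrete class (e.g. a CouchDB- or MongoDB-backed one)
# from a configuration parameter and only ever talk to the DocumentStore interface.
store = InMemoryDocumentStore()
store.connect({})
store.put(0, {"spill_num": 0})
print(store.get(0))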
Updated by Jackson, Mike about 12 years ago
Tasks this week:
- Prepared SSI "who we work with" guide for web site. Awaiting review by our web site editor.
- Configured EPCC maus2 VM.
- #704
- Cleaned up Celery loader and task files - better error handling in loader and use of logging, not print, in task files.
- Successfully tested Celery with Celery workers and clients on different servers.
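For illustration, a Celery task that logs rather than prints might look like the sketch below; the broker URL and task body are placeholders, and the decorator style follows current Celery releases, which may differ from the version used in this branch:

# Hypothetical sketch of a Celery task using logging instead of print.
import logging

from celery import Celery

app = Celery("maustasks", broker="amqp://guest@localhost//")  # placeholder broker URL
logger = logging.getLogger(__name__)

@app.task
def execute_transform(spill):
    """Apply the configured transforms to a JSON-encoded spill and return the result."""
    logger.info("Processing spill of %d characters", len(spill))
    # ... run the spill through the birthed transforms here ...
    return spill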
- #705
- Reinstalled CouchDB - VM restart caused issues.
- Worked through examples and altered Go.py to successfully use it.
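For reference, the basic couchdb-python calls involved look roughly like this; the server URL and database name are placeholders:

# Hypothetical sketch of storing and fetching a spill document with couchdb-python.
import couchdb

server = couchdb.Server("http://localhost:5984/")  # placeholder URL
db = server.create("spills") if "spills" not in server else server["spills"]
doc_id, revision = db.save({"spill_num": 0, "daq_event_type": "physics_event"})
print(db[doc_id])  # read the stored spill document back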
Updated by Jackson, Mike about 12 years ago
SSI "who do we work with" page about MICE/MAUS/SSI is now live, http://www.software.ac.uk/who-do-we-work/cooling-muons-particle-physics
Updated by Jackson, Mike about 12 years ago
Tasks this week:
SSI blog post on Celery.
#705 Data cache
- Downloaded and installed MongoDB and pymongo.
- Wrote classes for dictionary-backed in-memory, CouchDB, and MongoDB data stores plus unit test classes.
- Changed Go.py to use these classes - a configuration parameter determines the class to use, so Go.py has no knowledge of CouchDB or MongoDB.
- Wrote MAUSDocumentCacheConfiguration.
#704 Distributed task queue
- Implemented another MAUS task. This creates one instance of each map worker and births these when the Celery worker is started. It takes a spill and a list of worker names to execute, and passes the spill through these. MapPyGroup has a new get_worker_names method to allow Go.py to determine the worker names to send to the Celery worker. The Celery worker and Go.py still need to have the same configuration, but at least the task pipelines don't need to be specified on the workers in a Python doc. We can see during live testing if this approach is problematic in any way.
- Go.py changed to have separate classes for the different types of dataflow, and other changes to make it more modular.
- Now supports three multi-process flags (for use with Celery and CouchDB or MongoDB) - see the sketch after this list:
  - multi_process - run the full dataflow, using Celery workers for transform tasks and storing transformed spills in the database prior to the merge application.
  - multi_process_input_transform - run the input-transform part of the dataflow only, storing transformed spills in the database for later retrieval.
  - multi_process_merge_output - run the merge-output part of the dataflow only, pulling in spills from the database.
- Due to the above, tests/py_unit/test_core_go.py was changed, due to moving the get_possible_dataflows function into the Go class.
- Still to address error handling, especially for Celery, and how merge-output would behave for cases where spills are continuously being deposited into the database (at present it just gets a list of all the spills currently there, merge-outputs them, then exits).
Commit 693
Updated by Jackson, Mike almost 12 years ago
- Workflow set to New Issue
- Experimented with ipython and PyROOT graphing APIs.
- Fixed ReducePyTOFPlot to output data in a format consistent with OutputPyImage.
- Pulled out ReducePyROOTHistogram super-class from ReducePyTOFPlot.
Updated by Jackson, Mike almost 12 years ago
Multiple documents from MAUS.InputCppDAQData() have the same spill_num value, e.g. using 03386.000 yields:
{"daq_data":null,"daq_event_type":"start_of_run","spill_num":-1} {"daq_data":null,"daq_event_type":"start_of_run","spill_num":-1} {"daq_data":null,"daq_event_type":"start_of_run","spill_num":-1} {"daq_data":null,"daq_event_type":"start_of_run","spill_num":-1} {"daq_data":null,"daq_event_type":"start_of_burst","spill_num":0} {"daq_data":{"V830": ... }, ... "spill_num":0} {"daq_data":null,"daq_event_type":"end_of_burst","spill_num":0} {"daq_data":null,"daq_event_type":"start_of_burst","spill_num":1} ...
This complicates using "spill_num" as a unique ID, both for logging in Celery workers which spill is currently being processed and, more importantly, if wanting to use "spill_num" as a unique key in the database. So what should the unique key be?
We could pass the spill ID from Go.py (where a spill ID of N means that this is the Nth spill received by Go.py from an input worker) to a Celery worker (see update to #704 where this is done for logging) and have it return this when it completes. This could then be used to index the spill in the database. There's no need to tamper with the actual spill doc since, on insertion into the database, spill docs are embedded in a JSON doc anyway.
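A minimal sketch of that idea, with a dictionary standing in for the document store and an identity function standing in for the Celery transform; the names are hypothetical:

# Hypothetical sketch: a Go.py-side spill counter used as the unique database key,
# so the spill document itself is never modified.
def run_input_transform(spills, transform, doc_store):
    spill_id = 0                               # Nth spill received from the input worker
    for spill in spills:
        result = transform(spill)              # spill_id could also be passed for worker logging
        doc_store[spill_id] = {"doc": result}  # spill doc embedded in a wrapper keyed by spill_id
        spill_id += 1
    return doc_store

store = run_input_transform(
    spills=[{"spill_num": -1}, {"spill_num": 0}, {"spill_num": 0}],  # duplicate spill_nums are fine
    transform=lambda spill: spill,             # identity transform as a stand-in for Celery
    doc_store={},                              # dict standing in for CouchDB/MongoDB
)
print(sorted(store.keys()))                    # [0, 1, 2] - unique keys despite duplicate spill_nums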
Updated by Jackson, Mike almost 12 years ago
Tasks this week:
#702, matplotlib histogram reducer:
- 706 ReducePyMatplotlibHistogram and ReducePyHistogramTDCADCCounts now use ErrorHandler.
- 707 Updated _update_histogram comments in above reducers.
- 705 Moved mauscelery and docstore to common_py. Changed configure to reflect change in paths to these directories. Removed no-longer-needed Task classes from maustasks.py. Replaced prints in maustasks.py with logger.info calls.
- 711 Celery task takes optional client ID and spill IDs for logging. Go.py uses hostname and PID for the client ID and spill count for the spill ID.
- 704 Merged in lp:~durga/maus/dev2 revision 618 ReducePyTOFPlot changes.
- 708 ReducePyROOTHistogram and ReducePyTOFPlots now use ErrorHandler.
- 709 Go.py now sends {"END_OF_RUN":"END_OF_RUN"} document into reducers when no more inputs. ReducePyHistogramTDCADCCounts updated to just output an empty document in response. ReducePyTOFPlot updated to output final histogram documents in response.
Misc:
- Added wiki page - Reducers and histograms.
- 710 Aligned with lp:maus/merge
I tried making https://launchpad.net/maus-apps part of the MICE project group using the LaunchPad usage page but there is no "Change Details" option. Maybe because I set the maintainer to "MAUS Maintainers"?
Updated by Rogers, Chris almost 12 years ago
I tried making https://launchpad.net/maus-apps part of the MICE project group using the LaunchPad usage page but there is no "Change Details" option. Maybe because I
set the maintainer to "MAUS Maintainers"?
Probably correct - I just added it to the project group...
Updated by Jackson, Mike almost 12 years ago
- From #846, added __init__.py to src/common_py/docstore and changed the Celery files so that configure does not explicitly need to add src/common_py/docstore or src/common_py/maus_celery to Python paths.
- Need to update the following pages when merged to add "docstore." prefixes as appropriate to uses of "MongoDBDocumentStore" and "CouchDBDocumentStore" in
Updated by Jackson, Mike almost 12 years ago
Various bug fixes, extensions, additions:
- Aligned with current lp:maus. Fixed bug in Go.py input-transform loop. Now exits only when all inputs have been read and all workers have returned. 714.
- Made mausloader.py exception messages consistent. Added test_mausloader.py. 715
- Added test classes for ReducePyMatplotlibHistogram and ReducePyROOTHistogram. 716.
- Removed special case disabling of MapPyTOFPlot and MapCppSimulation from mausloader.py. __init__ forces PyROOT batch mode on. 717.
- Updated test_mausloader.py so tests pass if test class is run at Python command-line or via nosetests. Issues arose due to Celery autonaming tasks based on module name. 718.
- Updated mausloader.py, celeryconfig.py, test_mausloader.py to remove support for the MAUS_CONFIG_FILE Celery property. Users can use the ConfigurationDefaults.configuration_file property. 719.
- Fixed PyLint errors in test_mausloader.py. 720.
- Changed MapPyGroup.get_worker_names to recurse over sub-groups. Fixed typo in constructor. Fixed typo in append. Removed unittest main method. Fixed pylint errors. Added test_MapPyGroup.py. 721.
Updated by Jackson, Mike almost 12 years ago
Distributed task queue #704
- To 750. Added more try-except blocks to prevent non-Pickleable error leakage. Go.py uses timeouts on broadcasts. mausprocess sets ErrorHandler to 'raise' so exceptions aren't swallowed but are returned to the client.
Data cache #705
- To 752. Added simple client to delete MongoDB collection or database. Added support for MongoDB disconnect to document store API.
Web-based interface for live execution monitoring #706
- To 17. Changed to support Apache 2.0/mod_wsgi use of Django. Documented Apache 2.2, mod_wsgi, ImageMagick and MAUS web front-end deployment and configuration at MAUSDjangoApache.
CM32 presentation
Updated by Jackson, Mike almost 12 years ago
Changes up to 777:
(see ticket updates for full details)
- Distributed task queue #704 - client-Celery worker MAUS version checks.
- Data cache #705 - timestamped spill insertion/retrieval.
- Modularise Go.py #855
- Input-Transform loop detects changes in run number and, if changed, instructs Celery to reconfigure itself (death transforms, then birth new ones)
- Merge-Output loop keeps track of the timestamp of the last spill read, detects changes in run number and, if changed, deaths and rebirths the merger and outputter.
These are the work-in-progress algorithms, based on the need to handle end-of-run in Go.py (and its delegate multi_process.py):
Input-Transform:
CLEAR document store
current run number = -1
WHILE input is available
    GET next spill
    IF spill has a run number
        new run number = spill run number
    ELSE
        new run number = 0
    IF new run number != current run number:  # We've changed run
        IF spill is NOT a "start_of_run" spill
            WARN user of missing start_of_run spill
        WAIT for current Celery tasks to complete
        WRITE result spills to document store
        current run number = new run number
        CONFIGURE Celery - DEATH current transforms, BIRTH new transforms
    IF spill run number == 0:  # Spill didn't have a run number so add it to the spill
        spill run number = current run number
    TRANSFORM spill using Celery
    WRITE result spill to document store
DEATH Celery worker transforms
If there is no initial start_of_run spill (or no spill_num in a spill) in the input stream (as can occur when using simple_histogram_example.py or simulate_mice.py) then the new run number will be 0, the current run number will be -1, and a Celery configuration will be done before the first spill is transformed.
Spills are inserted by Input-Transform in the order of their return from Celery workers. This may not be in sync with the order in which they were originally read from the Input.
Merge-Output:
run number = -1
last time = 01/01/1970
WHILE TRUE:  # (this nasty will be addressed)
    READ spills added since last time from document store
    IF spill run number != run number:
        SEND END_OF_RUN block to mergers
        DEATH merger and outputter
        BIRTH merger and outputter
        run number = spill run number
    MERGE and OUTPUT spill
SEND END_OF_RUN block to mergers
DEATH merger and outputter
The Input-Transform policy of waiting for a run to finish before a new run starts processing means that all spills from run N-1 are guaranteed to have an earlier timestamp in the document store than spills from run N.
Documents in the database are of form {"_id":ID, "date":DATE, "doc":DOC}
where:
- ID: index of this document in the chain of those successfully transformed. It has no significance beyond being unique in an execution of Go.py and is not equal to the spill_num.
- DATE: Python timestamp of when the document was added.
- DOC: spill document.
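For illustration, wrapping and inserting a spill in this form with pymongo might look like the sketch below; the database and collection names are placeholders, and this uses the current pymongo API, which may differ from the version in use at the time:

# Hypothetical sketch of inserting a wrapped spill document into MongoDB.
import datetime

import pymongo

collection = pymongo.MongoClient("localhost", 27017)["mausdb"]["spills"]  # placeholder names

def write_spill(doc_id, spill_doc):
    """Wrap the spill in {"_id", "date", "doc"} form and insert it."""
    collection.insert_one({
        "_id": doc_id,                       # unique only within one Go.py execution
        "date": datetime.datetime.utcnow(),  # timestamp of when the document was added
        "doc": spill_doc,                    # the spill document itself, unmodified
    })

write_spill(0, {"daq_event_type": "physics_event", "spill_num": 0})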
At present a single collection in the database is used. This will be changed so that spills from a specific process have their own collection.
Questions, comments on the above welcome.
At our last F2F it was mentioned that configuration can change between runs. Go.py is intended to handle multiple runs, but Go.py (single-threaded or multi-process) does not allow for multiple configurations - it sets up the configuration when it starts, based on ConfigurationDefaults.py and command-line parameters. Supporting this would need a refactoring of other parts of Go.py which, given my other TODOs, I wouldn't have time to do.
Updated by Jackson, Mike almost 12 years ago
Dump of information...types of spills:
- Simulation:
./bin/examples/simulate_mice_histogram.py
Input is {}. Data is added by the transforms, so the run ID is irrelevant - there's just one "run".
- DAQ offline data:
./online_dq_celery_joy.py
daq_data_file = '02873.003'
WARNING : The first event is not a START_OF_RUN. Spill count and Event count not accurate
{"daq_data":null,"daq_event_type":"end_of_burst","run_num":2873,"spill_num":-1}
{"daq_data":null,"daq_event_type":"start_of_burst","run_num":2873,"spill_num":0}
{... "run_num":2873,"spill_num":0, "daq_event_type":"physics_event" ... }
{"daq_data":null,"daq_event_type":"end_of_burst","run_num":2873,"spill_num":0}
{"daq_data":null,"daq_event_type":"start_of_burst","run_num":2873,"spill_num":1}
{... "run_num":2873,"spill_num":1, "daq_event_type":"physics_event" ... }
{"daq_data":null,"daq_event_type":"end_of_burst","run_num":2873,"spill_num":1}
{"daq_data":null,"daq_event_type":"start_of_burst","run_num":2873,"spill_num":2}
...
...
...
{"daq_data":null,"daq_event_type":"end_of_burst","run_num":2873,"spill_num":6}
{"daq_data":null,"daq_event_type":"end_of_run","run_num":2873,"spill_num":6}
{"daq_data":null,"daq_event_type":"end_of_run","run_num":2873,"spill_num":6}
{"daq_data":null,"daq_event_type":"end_of_run","run_num":2873,"spill_num":6}
{"daq_data":null,"daq_event_type":"end_of_run","run_num":2873,"spill_num":6}
++++ End of file 02873.003 ++++

daq_data_file = '03397.003'
{"daq_data":null,"daq_event_type":"start_of_run","run_num":3397,"spill_num":-1}
{"daq_data":null,"daq_event_type":"start_of_run","run_num":3397,"spill_num":-1}
{"daq_data":null,"daq_event_type":"start_of_run","run_num":3397,"spill_num":-1}
{"daq_data":null,"daq_event_type":"start_of_run","run_num":3397,"spill_num":-1}
{"daq_data":null,"daq_event_type":"start_of_burst","run_num":3397,"spill_num":0}
{... "run_num":3397,"spill_num":0 ... } # SPILL
{"daq_data":null,"daq_event_type":"end_of_burst","run_num":3397,"spill_num":0}
{"daq_data":null,"daq_event_type":"start_of_burst","run_num":3397,"spill_num":1}
*** InputCppDAQData::getCurEvent() : Unknown exception occurred. DAQ Event skipped!
{"daq_data":{},"daq_event_type":"physics_event","errors":{"bad_data_input":"InputCppDAQData says: Unknown exception occurred. Phys. Event 1 skipped!"},"run_num":3397,"spill_num":1}
{"daq_data":null,"daq_event_type":"end_of_burst","run_num":3397,"spill_num":1}
{"daq_data":null,"daq_event_type":"start_of_burst","run_num":3397,"spill_num":2}
{... "run_num":3397,"spill_num":2,"daq_event_type":"physics_event" ... }
{"daq_data":null,"daq_event_type":"end_of_burst","run_num":3397,"spill_num":2}
{"daq_data":null,"daq_event_type":"start_of_burst","run_num":3397,"spill_num":3}
{... "run_num":3397,"spill_num":3,"daq_event_type":"physics_event" ... }
{"daq_data":null,"daq_event_type":"end_of_burst","run_num":3397,"spill_num":3}
...
...
...
No end_of_burst for "spill_num":5
{"daq_data":null,"daq_event_type":"end_of_run","run_num":3397,"spill_num":5}
{"daq_data":null,"daq_event_type":"end_of_run","run_num":3397,"spill_num":5}
{"daq_data":null,"daq_event_type":"end_of_run","run_num":3397,"spill_num":5}
{"daq_data":null,"daq_event_type":"end_of_run","run_num":3397,"spill_num":5}
Updated by Jackson, Mike almost 12 years ago
Last week's tasks:
General
- core_builder now copies all Python test classes - allows use of shared test classes.
- Complies with the -N,0,+N run numbering scheme cited in #839.
- Input-Transform loop now keeps track of spills input and spills processed. Changed messages.
- Added CTRL-C handler to Merge-Output so it deaths merger and outputter first.
- Both loops keep count of spills input, processed, output.
Distributed task queue #704
- Removed spill ID from Celery execute_transform function as it doesn't mean much. spill_num, if present, will be in the spill
Data cache #705
- Added bin/utilities/summarise_mongodb.py, which summarises collection names, sizes and numbers of documents in MongoDB
- Changed DocumentStore API to explicitly support the notion of a named collection in the document store.
Allows use of process ID and partitioning of spills from runs or jobs etc.
Web-based interface for live execution monitoring #706
- OutputPyImage directory now handles case where directory is None.
- Completed #849
- ConfigurationDefaults sets image_directory to MAUS_WEB_MEDIA_RAW if this is set.
- maus-apps configure script sets up env.sh to set MAUS_WEB_MEDIA_RAW.
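A minimal sketch of picking up that default from the environment; the fallback directory is an assumption, not the actual ConfigurationDefaults code:

# Hypothetical sketch: default the image directory to $MAUS_WEB_MEDIA_RAW if it is set.
import os

image_directory = os.environ.get("MAUS_WEB_MEDIA_RAW", os.getcwd())
print("Images will be written to %s" % image_directory)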
Updated by Jackson, Mike over 11 years ago
Data cache #705
- Changed so workers can be death-then-birthed without changing configuration.
- New approach to handling how Celery sub-processes detect if they've already been updated, via sending PIDs from the main process.
Web-based interface for live execution monitoring #706
- Removed need for install.sh by rewriting configure.
- configure generates two types of env.sh depending on whether MAUS Python is being used or not.
- Replaced print with logging.
- Rewrote MAUSWebFrontEndDeploy for cases where user is using Django web server or Apache and is using MAUS Python or a standalone Python.
Miscellaneous
- Merged with lp:maus/merge AM 06/03/12
- Can now use auto-generated or explicitly-provided document store collection names.
- Defined exceptions for use in the multi-processing framework and added try-except blocks where appropriate.
- Pulled out multi_process.py into input_transform.py and merge_output.py. Pulled out utility methods into utilities.py. Renamings, recommentings, more tests.
Control room:
- Tracked down Apache 2/mod_wsgi deployment bug in control room. Was a permissions error.
- Deployed last night's versions of my development branch and maus-apps and tested; added a README.
Updated by Jackson, Mike over 11 years ago
Tasks this week:
Data cache #705
- MongoDBDocumentStore now indexes on "date" to avoid "too much data for sort" problems and allows faster queries for date order.
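For illustration, the pymongo calls involved look roughly like this; the database and collection names are placeholders, and the API shown is the current pymongo one rather than the version in use at the time:

# Hypothetical sketch: index spills on "date" and read them back in date order.
import datetime

import pymongo

collection = pymongo.MongoClient()["mausdb"]["spills"]  # placeholder names
collection.create_index("date")  # avoids "too much data for sort" and speeds up date queries

last_time = datetime.datetime(1970, 1, 1)
new_spills = collection.find({"date": {"$gt": last_time}}).sort("date", pymongo.ASCENDING)
for wrapper in new_spills:
    print(wrapper["doc"])  # the embedded spill document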
Framework:
- merge_output now looks for an end_of_run spill and, if one is encountered, saves it. When the run changes, it passes this to the mergers. Mergers changed to output their final histogram on end_of_run. Replaces use of my END_OF_RUN hack.
- Added algorithms to class comments for input_transform.py and merge_output.py.
Web-based interface for live execution monitoring #706:
- 30
- Supports simple search button based on files matching a keyword
- Removed GENERATOR tag. Added JavaScript Pause, Refresh, Resume
Miscellaneous:
- Moved test_celery.py and test_MongoDBDocumentStore.py to integration test module.
- Did #866, online reconstruction overview.
- Updated control room deployment to my March 13 development version and current version of maus-apps. Tested with both Django web server and Apache.
- Checked/rewrote wiki docs for RabbitMQ, Celery and troubleshooting/recovery.
Updated by Jackson, Mike over 11 years ago
Tasks this week:
Features:
- #969 Render scalers data in web front-end.
- #970 Render image meta-data in web front-end.
- #971 Develop scalers merger/reducer - developed ReducePyScalersTable, simple_scalers_example.py and daq_reconstruct_scalers.py. Tested latter in offline reconstruction mode with web front-end.
- #974 Thumbnail display and partially generated thumbs.
Bugs:
- #972 OutputPyImage sets directory to None if given None as a default directory
- #973 src/common_py/framework/single_thread.py still uses END_OF_RUN
Also, created base HTML template for web front-end with embedded CSS, last refreshed footer and hyperlinks to MICE and MAUS.
Current versions are:
Updated by Jackson, Mike over 11 years ago
Once maus-apps/devel is merged with maus-apps, the doc at https://micewww.pp.rl.ac.uk/projects/maus/wiki/MAUSWebFrontEndDeploy#Run-the-web-front-end-under-the-Apache-web-server needs to be updated to include the extended Apache 2 configuration instruction mentioned at #974.
Updated by Jackson, Mike over 11 years ago
Did the above wiki update, but with a note that it applies only to my development version.
Updated by Jackson, Mike over 11 years ago
- Status changed from Open to Closed
- % Done changed from 0 to 100
Completed agreed tasks ... will be available for support, bug fixes, help etc.
Updated by Rogers, Chris over 11 years ago
- Target version changed from Future MAUS release to MAUS-v0.2.2