
# Discussion of flip mode paper

16:00-18:00 BST Friday 11th June 2021

Zoom details: **[[computing-software:MICE_zoom_info|MICE Zoom Connection]]** (login required)

- Status of the analysis, including tasks left to complete (Paul Jurj)

## Notes

16:00 BST

Friday 11th June

Chris Rogers

Jaroslaw Pasternak

Paul Jurj

Paul Soler

Dan Kaplan

Chris introduced the refereeing process, in particular noting the short time scale for Paul's analysis given thesis writing-up. He proposed roughly monthly meetings to keep the referees close to the analysis and to get rapid feedback. He proposed that the MICE note will form a thesis chapter. Senior colleagues (i.e. Chris, Jaroslaw) can potentially help with writing the paper, as time allows. The submission deadline for the thesis is the end of the year.

Paul J showed his slides. He introduced the aim of the analysis, i.e. to look at the change in RMS emittance. Paul presented the sample selection. Dan asked if the tracker fiducial cut is a cylinder or is applied at the position of the planes. Paul replied that it is a cylinder cut. Paul showed the number of space points in TOF0 and TOF1. Dan asked why there are no events with 3 TOF1 space points but some with 4 and 2. Paul stated he did not know. Chris suggested annotating the histogram with the number of events: if there is one event it may be noise; if there are 10, noise is quite unlikely.

Paul J showed the TOF01 distributions. Dan asked why there is a shift between the 4, 6 and 10 mm samples in TOF. Paul pointed out that there is more diffuser material at higher emittance: the samples all have the same momentum in the tracker, so the higher-emittance muons lose more momentum at the diffuser.

Paul J showed the radius distributions at the diffuser. Dan asked why there is more tail at 10 mm than at 4/6 mm. It was proposed that more multiple Coulomb scattering leads to a broader tail. This may result in particles that went through the diffuser aperture at 10 mm emittance. **ACTION** check in MC that 90 mm is appropriate at 10 mm emittance (e.g. plot the MC truth distribution of cut/uncut events).

Paul J showed the momentum calculated using TOF01 vs the momentum calculated using TKU at the tracker. He noted that there is a discrepancy between data and MC, especially at 10 mm emittance. Paul S asked if the TOF01 was normalised using the electron peak. Paul J replied that it is not. **ACTION:** use the TOF normalised to the electron TOF. Chris asked whether a scaling or a difference is used. Paul Soler suggested looking in the MCS note or equivalent.

Paul J showed TKU momentum distribution. Dan noted surprise that the data distribution is narrower than the MC. However he said it looked good.

Paul J showed radius at the TKU reference plane. Noted that he needed to show maximum radius. **ACTION** update the plot for maximum radius. Chris noted that it said TKD in the axis label. **ACTION** Paul to check.

Paul J showed TKD momentum. The momentum cut between 120 and 170 MeV/c excludes scraping particles, in particular in MC. Dan asked why the distribution is narrower than TKU. Paul said it is because the TKU cut has been applied.

Paul J showed the radius at the TKD reference plane. He noted that the peak in the radius plot is shifted, and proposed that it occurs due to an offset in y at TKD.

Paul J showed number of tracks in TKU vs TKD. **ACTION** show the number of events in each bin.

Paul J showed the position distribution. Dan asked how many MC events there are compared to data. Paul said the same order of magnitude. Dan asked if that is sufficient. Paul noted that there is an issue with disk space. Dan asked how MC-dependent the analysis is. Chris asked whether the MC is systematics- or statistics-limited. Paul J replied that they are about equal. The correction is done using a hybrid MC. Dan commented that a factor 2 improvement could be achieved. Paul S asked how dependent the systematic uncertainty is on MC data. Paul J pointed out that the systematics are done using a hybrid MC locally and there is a large data set. Dan asked why there are no stats boxes on the plots. It was decided to table the discussion until the end.

Paul J showed further y, px, py distributions at TKU and TKD. He noted that there is a vertical misalignment.

Paul J showed the p and pz distributions at TKU and TKD. He noted that the bin widths are too narrow. Paul S agreed. **ACTION** redo the binning.

Paul J showed 2D distributions x vs y. Paul J asked how the colour scheme could be presented. Paul S commented that it is not necessary to show the colour bar here as it is pretty clear.

Paul J also showed px vs py in 2D. He noted low-pt holes in TKD. Dan asked if it might be possible to use a straight-line fit if the helical fit fails. **ACTION** have a look if it is reasonably quick: look at a couple of events and glance through the code.

Paul J introduced the beam sampling routine. Dan asked why the points do not sit on the curve. Paul J said that the line is MC truth and the points are MC recon; this needs to be checked. **ACTION** clarify the legend. Remove "ISIS Cycle 2017/02 and 2017/03" if it is MC.

Paul S asked how the sampling is done. Paul J commented that he calculates a weight for each particle from the ratio of the pdfs, estimated using a KDE, giving the probability that it should be in the final distribution. Paul Soler pointed out that this is the absolutely most important part of the analysis and a unique strength of MICE. **ACTION** please make sure that this is described in the slides and documented in the paper, and highlighted in the abstract.
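The KDE-ratio reweighting described above can be sketched as a toy. This is purely illustrative, not Paul J's actual code: the Gaussian distributions, sample sizes, and the use of SciPy's `gaussian_kde` are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the measured parent beam and the target
# distribution in one phase-space coordinate (e.g. x).
parent = rng.normal(0.0, 1.2, 5000)   # measured upstream sample
target = rng.normal(0.0, 1.0, 5000)   # distribution we want to sample

# Estimate both pdfs with a KDE and weight each parent particle by the
# ratio target_pdf / parent_pdf, as described in the notes.
parent_kde = gaussian_kde(parent)
target_kde = gaussian_kde(target)
weights = target_kde(parent) / parent_kde(parent)
weights /= weights.sum()

# The weighted parent sample should now reproduce the target moments.
mean_w = np.sum(weights * parent)
var_w = np.sum(weights * (parent - mean_w) ** 2)
print(round(var_w, 2))  # close to 1.0, the target variance
```

In the real analysis the weighting would be done in the full transverse phase space rather than one coordinate, but the pdf-ratio idea is the same.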

Paul J showed the sampled 1D phase space distributions. Noted good agreement between MC and data. **ACTION** Dan asked to review the bin widths and make them wider for clarity for transverse variables. Also check the plot vertical scales.

Dan noted the discrepancy in TKD between the MC and data pz distributions. Paul J pointed out that the sampling is based on TKU only. He went through a few possible reasons for the discrepancy between MC and data: energy loss in the tracker planes, energy loss in the absorber, or a tracker recon bias (but note this should be the same in data and MC). Paul S requested a stat box as well. **ACTION** include a stat box. Review the binning. If the discrepancy is larger than 3 MeV then we need to do some digging.

Paul J showed 2D distributions. They looked very good.

Paul J showed Twiss alpha vs z. It is -1/2 dbeta/dz, calculated by taking the mean of Cov(x,px) and Cov(y,py) normalised by mass and emittance. He also showed Twiss beta vs z. Dan asked whether there were other emittance cases in the analysis. Paul said yes; 1.5, 3.5 and 5.5 mm are not shown. Dan asked whether higher emittances are useful. Paul said no, the scraping is too severe and the sampling efficiency is poor.
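The alpha calculation described above can be sketched as follows. This is an illustrative version, not the analysis code: the mm and MeV/c units, the muon mass value, and the 4D determinant definition of the normalised emittance are assumed from standard MICE conventions.

```python
import numpy as np

def twiss_alpha(x, px, y, py, mass=105.66):
    """Transverse Twiss alpha from a particle sample: minus the mean of
    Cov(x,px) and Cov(y,py), normalised by mass and emittance, as in
    the notes. Assumes mm and MeV/c units (muon mass in MeV/c^2)."""
    cov = np.cov(np.vstack([x, px, y, py]))
    # 4D normalised transverse emittance: fourth root of the
    # covariance-matrix determinant, divided by the muon mass.
    emittance = np.linalg.det(cov) ** 0.25 / mass
    # Mean of the x-px and y-py covariances, normalised.
    return -0.5 * (cov[0, 1] + cov[2, 3]) / (mass * emittance)
```

At a beam waist the x-px and y-py correlations vanish, so alpha is approximately zero; a converging beam (negative x-px correlation) gives positive alpha.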

Paul J showed emittance and momentum plots vs z. Noted discrepancy in 6-140, probably arising from the incident beam. Paul S commented on cooling channel performance, while noting it is challenging to see due to the axis range.

Paul J showed the emittance change (LiH data) - (No absorber data) and ditto for MC recon and MC truth. Paul S noted that the discrepancy between cooling and data could arise due to a wrong energy loss in the absorber. Chris points out that the same would be true if there was a wrong field in TKD - so it may be a systematic uncertainty.

Paul J showed tracker bias and resolution vs pz. Dan asked what are the error bars on the bias. Paul said it is the width of the (barely visible, tiny) band. Dan asked why is the bias worse downstream. Paul J said it is the lower tracker field. Dan asked what is the mechanism for bias. Paul J said it comes from the reconstruction. Dan pointed out that it needs a better argument. **ACTION:** try to get a stronger explanation e.g. what is the explanation for the sign and magnitude of the bias. Dan notes that it is subtle because the TOF pulls things as well.

Paul J showed plots of tracker efficiency. He noted worse efficiency in TKU. Dan and Paul S pointed out that reducing the required number of track points can improve efficiency. Chris asked whether the TKU efficiency affects the analysis. Paul J said no. Dan said maybe non-Gaussianity can be an issue (but the sampling probably cleans that up). Paul S said the number of space points can be reduced, but it may reduce resolution. **ACTION:** consider the number of digits/space points and look at the trade-off between resolution and efficiency, while considering the requirements of the analysis; find the optimum.

Paul J introduced the bias and correction. Dan noted that there is a significant bias and in some cases it is negative. Chris pointed out that this was observed previously and that it occurs because there is a correlation between the truth and the error (so it is not a pure convolution). **ACTION:** understand, in detail, the relationship between the bias and resolution in reconstructed phase space variables and the bias in emittance. Paul S pointed out that this is one of the single most important points in the analysis.
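Chris's point about correlated errors can be illustrated with a toy (the numbers here are invented, not the tracker model): if the reconstruction error were independent of the truth, the measured variance could only be inflated; an error component anti-correlated with the truth instead shrinks it, giving a negative bias on the variance and hence on the emittance.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.normal(0.0, 1.0, 200_000)

# Pure convolution: error independent of truth inflates the variance.
indep = truth + rng.normal(0.0, 0.3, truth.size)

# Correlated error: a component proportional to -truth (e.g. a
# reconstruction that pulls large values towards the mean) *shrinks*
# the variance, producing a negative bias.
corr = truth + (-0.2 * truth + rng.normal(0.0, 0.3, truth.size))

print(indep.var())  # ~ 1.0 + 0.3^2 = 1.09  (positive bias)
print(corr.var())   # ~ 0.8^2 + 0.3^2 = 0.73 (negative bias)
```

This is why a naive "subtract the resolution in quadrature" correction would fail here: the sign and size of the bias depend on the truth-error correlation, not only on the resolution.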

Paul J introduced the systematic uncertainty calculation. Paul S asked whether the 50% tracker material density uncertainty is significant. There was uncertainty as to whether the glue density or the total density was varied. **ACTION** check what was varied.

Dan asked whether the geometric displacements are sufficient. Paul Soler proposes adding a transformation to the existing MC and redoing the reconstruction only. Alternative is to generate the MC data set with sufficient statistics so that the statistical errors are not significant. **ACTION** add vertical displacement; longitudinal displacement; add rotation about the X axis; add rotation about Z axis.

Dan suggested that rotating the tracker will induce a misalignment in the beam. Dan asked if a misalignment in the beam could introduce a systematic bias in the reconstruction. **ACTION:** check with a toy MC, misaligning the tracker with respect to the solenoid and understand the significance of the effect.

Paul S asked about the momentum uncertainty. Dan commented that this is handled by the centre coil field strength.

Paul S showed the systematic estimate. Chris asked to what extent the plots were random noise. Paul J commented that the sample size is 50k events. Chris noted there is a lot of scatter. He also asked that the signed value, not the absolute value, be shown. **ACTION:** check that the sample is not statistically dominated. Chris proposes checking the bias estimation on subsamples and using it to estimate the statistical uncertainty. If statistics is a big problem, he proposes scaling the field in the reconstruction to model a field uncertainty of all three SS coils, and transforming the MCHits between MC and recon to model a misalignment; statistical errors should not be significant in this case. There was some discussion as to whether it makes sense that End2 is more significant to recon than End1 - no conclusion.

Paul J presented the bias arising due to transmission losses. Dan asked if the plots had error bars. Paul J said no: given an upstream distribution there is a known bias, and the bias depends only on the upstream distribution. However, there is an uncertainty in the upstream distribution, which does introduce a statistical bias. He showed not much bias for the empty absorber and a more significant bias for lH2, with apparent cooling arising due to transmission; this was fit with a polynomial. Chris pointed out that there is point-to-point noise that is statistical in nature. Paul suggested that it arises from transmission losses. Dan asked what the loss mechanism is. Paul J said that it occurs due to a fiducial cut. There was some discussion of how the bias works: particles that are lost before the absorber are not included, so the bias is underestimated. **ACTION:** Propose cutting the tail of the incoming Gaussian. Propose changing the fiducial cut downstream and looking at the sensitivity of the bias analysis to this cut. Consider a better way to do the study.

**ACTION:** Check systematic error arising from decays and incident beam impurity.

It was noted that there was some confusion as to what the "Full transmission" analysis means. Dan suggested "Full acceptance" and "Restricted acceptance" might be a better wording.

We considered tracking through the full Step IV channel with M2D powered. It was felt that the transmission may be rather poor. Maybe something to try later on.

Paul J described that he estimated statistical errors by resampling in some way that was not clear. Dan suggested splitting the beam into subsamples. **ACTION:** Paul J to present more detail on how statistical errors are estimated.
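One concrete version of Dan's subsampling suggestion is a bootstrap over events. This is a minimal sketch under assumptions: the `emittance_1d` placeholder and the Gaussian test beam are invented for illustration and are not the actual analysis code.

```python
import numpy as np

def emittance_1d(x, px, mass=105.66):
    """Illustrative 1D normalised emittance: sqrt(det cov) / mass."""
    return np.sqrt(np.linalg.det(np.cov(np.vstack([x, px])))) / mass

def bootstrap_error(x, px, n_boot=200, seed=0):
    """Statistical error on the emittance by resampling events with
    replacement; the spread of the estimates over resampled beams
    approximates the statistical uncertainty."""
    rng = np.random.default_rng(seed)
    n = x.size
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample event indices
        estimates.append(emittance_1d(x[idx], px[idx]))
    return np.std(estimates, ddof=1)
```

Splitting the beam into disjoint subsamples, as Dan suggested, is the simpler alternative: compute the emittance in each subsample and take the standard error of the mean.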

Paul J showed the change in emittance plot. Paul S asked if the plot includes the loss estimate. Paul J said yes it does. Dan asked how the calculation was done. Paul J said it used the ionisation cooling equation. Dan suggested using the G4 calculation for energy loss. We discussed what was best. Paul S noted that the lH2 plot shows quite a strong discrepancy. It was noted, for example, that the energy loss varies significantly due to the varying effective absorber thickness in lH2 (window curvature).
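For reference, the ionisation cooling equation referred to is presumably the standard approximate rate equation for the normalised transverse emittance (this is the textbook form, not necessarily the exact expression used in the analysis), with muon energy $E_\mu$, velocity $\beta c$, betatron function $\beta_\perp$, and absorber radiation length $X_0$:

```latex
\frac{d\varepsilon_N}{dz} \simeq
  -\frac{\varepsilon_N}{\beta^2 E_\mu}
   \left\langle \frac{dE}{dz} \right\rangle
  +\frac{\beta_\perp \,(13.6~\mathrm{MeV}/c)^2}
        {2\,\beta^3 E_\mu\, m_\mu c^2\, X_0}
```

The first (cooling) term is driven by the mean energy loss, which is why the effective lH2 thickness (window curvature) matters; the second (heating) term comes from multiple scattering.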

Paul S asked why there is a knee at 4.5 mm. Maybe it is the transmission bias correction. We agreed that this deserves a bit more work.

Paul S pointed out that dp/p is also a crucial place to look. He proposes doing a correction to the tracker reconstruction. Paul J recalled that Chris H looked at something similar and got the same effect. Dan pointed out that it is more convincing to correct for the momentum. **ACTION:** consider correcting for the transverse momentum bias. Look at the px residuals. Consider the bias in px, and the bias in px as a function of pz, px, py and maybe x, y; correct for the px bias. Repeat for py. Do this for TKU and TKD.

We discussed data sets. We agreed that the M2D-off optics is pretty ropey and that 3, 6, 10 at 170 is a good next step. We wondered if 3-140 was useful. Paul J said that the extra statistics may be valuable if the resampling distributions work well enough; worth checking. Chris pointed out that MC tuning of the beam is a job at 170. Jaroslaw asked at what point we back out of 170. Paul and Dan agreed that we should seek to analyse the data but we have to be pragmatic.

It was noted that canonical angular momentum studies should follow.

Date of Next Meeting: Friday 16th July 15:00 BST

Updated by Rogers, Chris over 1 year ago · 9 revisions