Hi Rick,

Below you can find the comments from Barcelona (on behalf also of Xavi and Federico). Sorry it took us so long to write to you, and that we wrote so many comments and questions, but the note is so interesting and so detailed that it was unavoidable. We hope we can discuss some of the topics further and that the comments are useful for you. For us it was certainly very useful to read the note.

Best regards,
Sofia (and Xavi and Federico)

------------------------------------------------------------------------------

General
-------

It would be nice to have a bit more on the data selection before starting the discussion of systematics. Some of the effects are migration from one sample to another, so, for example, the DeltaTheta plot could help. Another question: are cosmic rays and other backgrounds (not from neutrino interactions in SciFi) vetoed efficiently, or should a (most probably negligible) systematic be considered?

Related to this, it would be interesting to know the relative contributions to the fit from the three sub-samples for each data set. The importance of each can be guessed from the efficiencies, but the precise effect on each of the parameters could also give some insight into the systematics.

Form Factors
------------

The Pseudo-Vector contribution is negligible (pg 3). A reference would be useful, and maybe an order of magnitude (at least when the change in the Vector form factors is discussed).

The MA 1P is also important (pg 12); naively one would expect this to be part of the nucleus description and MA 1P = MA QE, no? Can they be completely different? It might be worth commenting.

We would not refer to the change of Vector form factors as producing a shift in MA (pg. 27). They basically change your definition of MA as an effective parameter: the part of the cross-section which is assigned to the "Vector" changes and leaves different room for the "Axial". Is there a way to represent graphically the dependence of MA on the part which is assigned to the "Vector"? Anyway, if those new form factors are the good ones, why should we not use them? To keep a reference for comparison with other measurements, a possibility would be to always report both of them and assume that MA is an effective parameter, which somehow it is. Is there any compelling reason not to use them? What is the chi2 of the fit for the new form factors?

Pauli Blocking/Fermi Motion
---------------------------

You almost always discuss Pauli Blocking and Fermi Motion together (since they depend on the same parameters), but they have different effects: Fermi Motion gives you harder Q2 (related to high angles), while Pauli Blocking blocks low Q2 (related to low energies). That is, they affect different parts of the spectra and different kinematic variables, and it is not obvious that it can all be reduced by the low q2 cut, as you say in several places. It is a bit worrying because we see a Data/MC disagreement in the angle between the proton and the muon, coming from the proton angle, which could be due to the Fermi Motion. Do you see a similar effect? (You have no plot of the second-track variables.) On the other hand, (pg. 30) you say that in reality you change "the amount of Pauli Blocking". Can it be that, by the way you calculate it, you effectively disregard the changes which come from the Fermi Motion?
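To make the distinction concrete, here is a minimal toy sketch (our own illustration, not the NEUT or note implementation; the k_F value, the falling Q2 shape and the static-target momentum transfer are all assumptions): events are rejected when the outgoing proton falls below k_F, which happens almost exclusively at low Q2, while the Fermi motion only changes where a given event ends up.

```python
import numpy as np

rng = np.random.default_rng(42)
M = 0.938          # nucleon mass [GeV]
KF = 0.225         # assumed Fermi momentum [GeV/c], a typical value for oxygen

n_events = 100_000

# Draw a "free-nucleon" Q^2 from a falling toy distribution (NOT a real cross section).
q2 = rng.exponential(scale=0.4, size=n_events)          # GeV^2

# Draw initial nucleon momenta uniformly inside a Fermi sphere of radius KF.
p_n = KF * rng.random(n_events) ** (1.0 / 3.0)          # |p_N|
cos_t = rng.uniform(-1.0, 1.0, n_events)                # angle between p_N and q

# Three-momentum transfer for a struck nucleon initially at rest (toy approximation).
q3 = np.sqrt(q2 * (1.0 + q2 / (4.0 * M * M)))

# Outgoing proton momentum including the initial Fermi motion (law of cosines).
p_out = np.sqrt(q3**2 + p_n**2 + 2.0 * q3 * p_n * cos_t)

# Pauli blocking a la simple Fermi gas: reject events with the outgoing proton below KF.
blocked = p_out < KF

print(f"fraction Pauli blocked (all Q2):        {blocked.mean():.3f}")
print(f"fraction Pauli blocked (Q2 < 0.1 GeV2): {blocked[q2 < 0.1].mean():.3f}")
print(f"fraction Pauli blocked (Q2 > 0.2 GeV2): {blocked[q2 > 0.2].mean():.3f}")
```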
For the sigma(QE) and Q2 distributions the Pauli Blocking is the most important effect, but changes in the efficiencies and purities of the sub-samples might also be important (this is the same kind of systematic as could come from the tracking efficiency or proton rescattering). In fact, when you describe the effect of the Pauli Blocking (pg. 4), you don't explain the method you are using to simulate it. As far as we understand it, in NEUT the interaction is prohibited if the proton is below the K_F level. Do you use the same model? It is also important to know how you do it in the other nuclear models, where the nucleon momentum doesn't have an absolute maximum. The same for the Fermi Motion: are they included in the cross-section calculation or in the migration matrix?

Delta production
----------------

Is it that the Pauli Blocking is also one of the main effects on the 1pi production? If you produce a real resonance, the final state - the decay of the Delta - is not connected to the production state. It is instead in the decay that the proton/neutron appears, and that is where the Pauli Blocking should be applied. So the effect should come from the non-resonant part of the cross-section... but our understanding is that this is very poorly known. That is, the cross-section unknowns are probably more important than the nuclear effects themselves. The question is: can we assign a reasonable error for those uncertainties?

(Rik) I agree with the first part, it is the final state where P.B. applies.
(Rik) need to write more here.

1L/3D sample
------------

The explanation for the 1L sample (pg. 47) seems very consistent. In SciBar we observed a big bias towards low-energy muons when using them, because many of these muons actually deposited energy in the second layer. Our understanding is that the MRD tracks always have 3 planes, and that what you call 1L are muons from the 1st and 2nd layers, isn't it? Or, conversely, do you have a veto cut on the 2nd layer of the MRD to define a pure 1L sample, as we can deduce from the last paragraph on pg. 42?

The 15% of events between 1L/3D (pg 15) should allow you to check the later comments on 1L, no? The migrations between 1L and 3D should proceed through this sample, and it is worth checking just to make the 1L arguments stronger. Does the fact that the low energy in 1L is related to harder Q2 mean that the 1L sample is dominated by high-angle tracks?

(Rik) Some of this effect is cross-section, and some of it is due to the geometrical acceptance. I have not calculated which one contributes most.

Resolutions
-----------

The gaussians you present do not really fit the Q2 distribution (or the Theta_mu one). In the Q2 one there even seem to be two different contributions...

(Rik) There is no expectation that a gaussian is the correct function, but it does give the simplest, and not unrealistic, estimate of the resolution. The shape of the q2 resolution curve tracks the shape of the angle resolution curve; that is where the non-gaussian tail comes from.

We are also curious to know whether the mean and sigma of the reconstructed q2/E_nu/angle... have some dependence on those observables. It is our understanding that you take this point into account in your migration matrices, but in any case it would be interesting to know about this kind of effect.

(Rik) Yes.
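On the question of the mean and sigma depending on the observables, a minimal sketch of the kind of profile we have in mind, assuming one has matched true/reconstructed Q2 pairs from the MC (the toy arrays, binning and smearing here are placeholders, not the note's simulation):

```python
import numpy as np

def profile_resolution(x_true, x_reco, edges):
    """Mean, RMS, and a robust (68% half-width) spread of (reco - true) in bins of the true value."""
    resid = np.asarray(x_reco) - np.asarray(x_true)
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (x_true >= lo) & (x_true < hi)
        if sel.sum() < 10:                      # skip nearly empty bins
            continue
        r = resid[sel]
        q16, q84 = np.percentile(r, [16, 84])
        out.append((0.5 * (lo + hi), r.mean(), r.std(), 0.5 * (q84 - q16)))
    return out

# Toy stand-in for truth/reco pairs; replace with the real arrays from the MC.
rng = np.random.default_rng(1)
q2_true = rng.exponential(0.4, 50_000)
q2_reco = q2_true * rng.normal(1.0, 0.08, q2_true.size) + rng.normal(0.0, 0.01, q2_true.size)

for center, mean, rms, robust in profile_resolution(q2_true, q2_reco, np.linspace(0.0, 1.2, 13)):
    print(f"Q2 ~ {center:4.2f} GeV^2:  bias = {mean:+.4f}   RMS = {rms:.4f}   68% width = {robust:.4f}")
```

Comparing the RMS with the 68% width bin by bin would also quantify how much the non-gaussian tails matter as a function of the true value.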
More fundamentally, maybe: E_nu and q2 are highly correlated, but you are using them independently in your migration matrices... wouldn't it be better to use the muon momentum and angle as the parameters to migrate? It is also true that the muon momentum and angular resolutions are correlated, but we would expect somewhat smaller correlation values.

(Rik)

The Theta resolutions are for long tracks in SciFi, right? It is relevant to know whether they are much worse for proton tracks (this would tell you the resolution in DeltaTheta), and it might be interesting to know the MRD one (do you have some specific treatment for the cases in which the two tracks leave SciFi and could be matched to the MRD?).

You say the Pmu is shifted and that this will be arranged later (pg 6), presumably by the momentum scale, but this shift is not referred to again. It would be good to also give the final numbers (to separate what part of the shift in Enu is due to this and what part to the Fermi Motion).

(Rik) Yes, it still needs to be done. The shifted mean refers to a comparison of the MC true value to the MC reconstructed value, and is eventually applied to both the DATA and the MC. There is another pmu shift which is independent and relates to some inaccuracy in the MC.

Momentum Scale
--------------

In fact, the discussion of the problem of the muon momentum scale is very interesting (and somewhat worrisome). We understand your point that the scale, the energy spectrum and MA are highly correlated and that it is very difficult to resolve the problem with neutrino data alone. But we also see that, in TABLE VI, the results with the spectrum fit are systematically lower than one, while the ones from the material assay or the test beam (not beam related) are above one. Shouldn't we believe the test beam results? We understand that if we set the scale to 1.0 or 1.03 the value of MA jumps to values of 1.4 or 1.6, which are not very physical. But, it seems, this is pointing to a serious problem somewhere in the Monte Carlo, either in the neutrino spectrum or in the detector simulation.

Also, given the justification for Pscale, and its definition, one would expect it to be equal for the two data sets, and it is not. Actually, the fitted flux should also be compatible between the two data sets, and that might deserve a comment. In the text it is not clear (pg. 38 or before) how the Pscale enters the calculation (affecting Data or MC, correcting the shift on pg. 6 or not, ...). Also, no justification is given for choosing the range between MA=0.9 and MA=1.4 to define the maximum change in Pscale; is there a reason?

(Rik) The reference on page 6 is in reconstruction only (MC vs. MC). The discussion on page 38 refers to the disagreement between Data and MC, presumably because the MC is not perfect. It is applied simply by scaling the muon momentum of the data (why is it applied to the data and not the MC? It's the method used in the spectrum fit).

(Rik) The two justifications that this range is reasonable are that the chisquare in the spectrum fit remains reasonable there, and that it is not so different from the previous experimental results on MA.

QE/nonQE ratio
--------------

The QE/nonQE ratio comes from the fitting of the three different samples (pg 8). Are they consistent if you fit them separately (see the sketch after this section)? If the samples are dominated by different types of background, then maybe this could also give some insight into the modelling of the different backgrounds... (it is hard to see why we should have one number for a relative normalization between one cross-section and all the others).

You are fitting the nonQE/QE ratio, but why not also the 1pi/Npi ratio? You would be very sensitive to that in your 2-track nonQE sample and it could improve the chi2 at the end.

In the fit tables (pgs. 17/21), what does it mean that QE/nonQE is not exact?

(Rik) The code contains parameters to account for the normalization. One parameter is QE/nonQE, the other is the overall normalization. The overall normalization is put in by hand based on a calculation, and is not automatically generated by the code. It is close, but it needs to be updated.
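As mentioned above, here is a minimal sketch of the per-sample consistency check we have in mind: a single nonQE scale factor fitted independently in each sub-sample with a binned Poisson likelihood. The templates, bin contents and sample labels below are invented placeholders, not the note's inputs.

```python
import numpy as np

def fit_nonqe_scale(data, qe_mc, nonqe_mc, scan=np.linspace(0.2, 3.0, 561)):
    """Scan a single nonQE scale factor R against a binned Poisson likelihood
    with prediction mu = qe_mc + R * nonqe_mc; returns (best R, approx. 1-sigma interval)."""
    nll = []
    for r in scan:
        mu = qe_mc + r * nonqe_mc
        nll.append(np.sum(mu - data * np.log(mu)))      # -log L up to a data-only constant
    nll = np.array(nll)
    best = nll.argmin()
    ok = scan[nll <= nll[best] + 0.5]                   # Delta(-log L) = 0.5 band
    return scan[best], (ok.min(), ok.max())

# Toy templates and pseudo-data for three sub-samples (placeholders only).
rng = np.random.default_rng(7)
for name, qe, nonqe in [("1-track", 800.0, 200.0), ("2-track QE", 300.0, 60.0), ("2-track nonQE", 80.0, 240.0)]:
    qe_mc    = qe    * np.array([0.35, 0.30, 0.20, 0.10, 0.05])
    nonqe_mc = nonqe * np.array([0.15, 0.20, 0.25, 0.22, 0.18])
    data     = rng.poisson(qe_mc + 1.2 * nonqe_mc)      # pseudo-data generated with R = 1.2
    r, (lo, hi) = fit_nonqe_scale(data, qe_mc, nonqe_mc)
    print(f"{name:14s}:  R = {r:.2f}  (approx. {lo:.2f} - {hi:.2f})")
```

If the three fitted factors disagreed well beyond their intervals, that would point to a background-modelling problem rather than a single global nonQE/QE normalization.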
Energy bins
-----------

We come now to our biggest worry. We are a bit puzzled by your likelihood. Throughout the note you say that you are interested in the q2 shape, but you use E_nu and the actual spectrum fit in your fit. This is (it seems) because of your binned likelihood, where a good fraction of the chi2 comes from the fitting of the energy distributions. This way you can also, as you mention in the note, migrate the sensitivity to MA into a sensitivity to the spectrum and vice versa. This reduces your sensitivity and introduces some conceptual problems.

We also understand that the information is in fact in the q2 distribution for each energy value. If this is the case, why not perform an unbinned likelihood fit where only the actual shape is taken into consideration? It is not 100% correct, because it will come from the subtraction of the nonQE background shape, but it is possible that the overall situation gets better this way. You would simply do

  -log L = - Sum_events log [ (dsigma(q2_i)/dq2)|_{E_nu,i} / sigma(E_nu,i) ],

i.e. each event contributes the q2 shape evaluated at that event's E_nu, normalized by the integrated cross-section at that E_nu. This way the actual number of events in each E_nu bin does not affect your measurement, only the shape of q2 for each E_nu value, (dsigma(q2)/dq2)|E_nu (a toy sketch is given at the end of this section). This way you can also remove the circular problem of fixing the spectrum for MA and MA for the spectrum fit. It could be that the sensitivity is lower, but so are the systematics, and it looks like a more solid measurement... Are we missing something in the arguments? In fact, you do something similar in Fig. 12, although you never quote the result of combining these numbers.

Other related comments on this: since you have a bad chi2 (C.L. < 4%) in the fit (although this could be accounted for in the systematics), it would be nice to confirm whether there is a piece of the data that fits well. For that it would be nice to have the chi2 for each beam-energy bin and each sample for q2 > 0.2. This can be considered another consistency check.

(Rik) In the SciFi spectrum fit, the poor chisquare always comes from the one-track sample (> 1.5 per dof). When the same data is binned for the MA analysis, no one ...

And given the relation between the spectrum and MA, it would be nice to show these values, CORR(MA, SPECTRUM_1), ..., even graphically. This way people would have a better feeling for how the variables are related, and it could help in your discussions.
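Here is the toy sketch referred to above: a shape-only unbinned likelihood in which each event contributes only the normalized q2 shape at its own E_nu, so the number of events per energy bin drops out. The "cross-section", the spectrum and the kinematic endpoint below are crude stand-ins of our own, not the note's model.

```python
import numpy as np

def toy_dsigma_dq2(q2, e_nu, ma):
    """Toy stand-in for dsigma/dq2 at fixed E_nu: a falling exponential whose slope
    loosens with MA, cut off at a crude kinematic limit. NOT a real cross section."""
    tau = 0.25 * ma * ma
    q2max = 1.8 * e_nu / (1.0 + 1.0 / (2.0 * e_nu))     # rough kinematic endpoint [GeV^2]
    return np.where(q2 < q2max, np.exp(-q2 / tau), 0.0)

def shape_only_nll(ma, q2, e_nu):
    """-log L using, for each event, only the q2 shape normalized at that event's E_nu.
    The per-energy normalization removes any dependence on the number of events per E_nu bin."""
    tau = 0.25 * ma * ma
    q2max = 1.8 * e_nu / (1.0 + 1.0 / (2.0 * e_nu))
    norm = tau * (1.0 - np.exp(-q2max / tau))           # analytic integral of the toy shape
    pdf = toy_dsigma_dq2(q2, e_nu, ma) / norm
    return -np.sum(np.log(np.clip(pdf, 1e-300, None)))

# Toy events: E_nu from a broad placeholder spectrum, q2 generated with MA = 1.2 by accept-reject.
rng = np.random.default_rng(3)
e_nu = rng.gamma(3.0, 0.45, 20_000)                     # [GeV]
q2 = rng.exponential(0.25 * 1.2 * 1.2, e_nu.size)
keep = toy_dsigma_dq2(q2, e_nu, 1.2) > 0.0
e_nu, q2 = e_nu[keep], q2[keep]

ma_scan = np.linspace(0.8, 1.6, 81)
nll = np.array([shape_only_nll(m, q2, e_nu) for m in ma_scan])
print(f"best toy MA from the shape-only likelihood: {ma_scan[nll.argmin()]:.3f}")
```

The same structure would apply with the real dsigma/dq2(MA) and with the nonQE contribution added to the per-event density.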
Proton Rescattering
-------------------

How do you include it in the fit? In the migration matrix, right? But what variables does it depend on? And should it be varied together in the background, creating a common migration of QE+nonQE from the 1-track to the 2-track sample, etc.?

Others
------

On pg. 25 you mention a bias towards low values because of the low statistics. Presumably the shift is still compatible with the systematic error. Anyway, we don't understand why you are so confident that this is a problem of the lack of statistics.

(Rik) This refers to low statistics in the K2K-IIa data set. TODO look up the size.

Also, on pg. 35, you mention effects at high q2. Do you have an upper limit in the q2 fit?

(Rik) There is no upper limit, though the statistics are lower at high q2, so the analysis is not as sensitive to what is happening there.

For the Marteau corrections, we have the same problem with the reference. Also, the parametrization you mention (pg. 17) for the coherent pion reweighting is somehow different from the one in NEUT, which is described in histogram form. This is true only if we picked up the correct version of NEUT. But most probably this is irrelevant for you anyway.

Last comment/question: we didn't understand the first sentence of the next-to-last paragraph of pg. 41.

------------------------------------------

I'm sorry to be late in sending you comments.

As you know, MA is a phenomenological parameter and thus, if you change G_M and G_E, its meaning is different. Therefore, I think the change in the MA value due to the change of G_M and G_E should not be in the systematic error term, and each value should be presented in parallel and separately. Basically, it is better to use the latest G_M & G_E. On the other hand, we then cannot compare with the existing values obtained by the other experiments. This is the reason why I suggest you present these values separately.

Also, not only for your analysis, I worry a little bit about the energy scale and energy shift uncertainties. As far as I remember, there existed some inconsistency between K2K-I and K2K-II. (Also, the energy scale in the MRD obtained from SciFi and from SciBar was slightly different.) This affects your result directly, and it would be better to check it again with the SciBar group.

Yoshinari

-----------------------------

Hi Richard. As promised, I did read your M_A document with an eye to giving some helpful comments. Here's what I managed. I hope it will be useful.

One general comment concerns the "low" energy bin between 1 and 1.5 GeV, which your report takes as evidence for a possible systematic at lower energy. I have to say that I don't think the data warrants such a conclusion. If I look at Figure 12, I see 10 different data points. The low point in K2K-I seems to be just 2 sigma low, and the K2K-IIa point is within 1 sigma. Given 10 data points, on average we would expect to see one 2 sigma deviation. What is the chi2 for flat, anyway, and with what confidence would we reject the flat hypothesis? Overall I am under the impression that this point, while low, is entirely consistent with a mild statistical fluctuation. There's no reason not to check carefully for missed systematics, but no reason to conclude that there is such a thing either. Indeed, if you spend too much time trying to come up with reasons why that data point might be low, you risk biasing the analysis. (It might be useful in the future to see if there isn't some way to do this as a blind analysis.)

On p. 25, in the second paragraph, there is a comment that the fits could be biased by a small number of events per bin, which can be avoided by rebinning. I have a hard time reconciling this with the likelihood equation on p. 11. In a likelihood analysis, if you use the full Poisson statistics formula in the likelihood, then you should be utterly insensitive to the statistics in any bin. Indeed, you could choose to do an unbinned maximum likelihood analysis to no ill effect. I would understand why low bin statistics would bias a chi2 fit, but not a likelihood.
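A minimal toy illustration of this last point (entirely made up: the falling-spectrum shape, binning and statistics are placeholders, not anything from the note): fitting the normalization of a sparsely populated histogram with a Neyman chi2, using the usual max(n,1) patch for empty bins, versus a binned Poisson likelihood. The chi2 fit comes out biased low; the Poisson likelihood does not.

```python
import numpy as np

rng = np.random.default_rng(11)

# A steeply falling toy spectrum split into many bins, so most bins have 0-3 events.
edges = np.linspace(0.0, 2.0, 21)
centers = 0.5 * (edges[:-1] + edges[1:])
shape = np.exp(-centers / 0.4)
shape *= 25.0 / shape.sum()                    # ~25 expected events in total at A_true = 1

A_scan = np.linspace(0.3, 2.5, 441)
fits_chi2, fits_pois = [], []

for _ in range(500):                           # pseudo-experiments
    n = rng.poisson(shape)                     # observed counts, A_true = 1

    # (a) Neyman chi2 with the max(n,1) patch for empty bins.
    denom = np.maximum(n, 1)
    chi2 = [np.sum((n - a * shape) ** 2 / denom) for a in A_scan]
    fits_chi2.append(A_scan[np.argmin(chi2)])

    # (b) Binned Poisson likelihood, well defined for any bin content.
    nll = [np.sum(a * shape - n * np.log(a * shape)) for a in A_scan]
    fits_pois.append(A_scan[np.argmin(nll)])

print(f"mean fitted normalization, chi2 fit:    {np.mean(fits_chi2):.3f}  (true value 1.0)")
print(f"mean fitted normalization, Poisson fit: {np.mean(fits_pois):.3f}  (true value 1.0)")
```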
There are a number of unevaluated systematics that you mention. I am unclear on what you propose to do with these. Examples are the form factors for single pion production and resonant production (p. 29), and the aluminum-oxygen difference (p. 43). I am not certain what other experiments have done about these, but strictly speaking these should only be ignored if they can be justified to be negligible. If they are negligible, then we ought to be able to justify that. If we can't show that they're negligible, then they need to be estimated somehow. Leaving them hanging is rather unsatisfactory. What are your plans for these?

I am curious about how the systematics are quoted. I understand that some systematics are included as free parameters in the fit (e.g. the nQE/QE ratio). So really the uncertainty on the fitted parameters is a combined statistical/systematic error. I agree that you can determine the uncertainties of individual components by fixing all the other components to zero, as was done to get the statistical error. But because of the fit procedure, the different uncertainties will in general be correlated with each other, and cannot simply be added in quadrature. Basically what I'd like to see is a discussion of how the errors in Table IV are combined to give the final quoted systematics. The way I would do this is to list the statistical error on M_A (with all other parameters fixed to zero), then list the total systematic uncertainty on M_A from all fitted parameters, then list the total of the additional systematic errors that are external to the fit.

p. 41, second paragraph from bottom: I am mystified by this statement: "The actual situation is more complicated, because the relative amounts of 1L events and 3D events at each energy are similar, but the relative amounts of both are different at low energy and high energy." To my ear the tail end of this sentence contradicts the first part, so I'm not certain what you're trying to say. Are you saying that (1L/3D)_lowE is different from (1L/3D)_hiE? Or are you saying that (hiE/lowE)_1L is different from (hiE/lowE)_3D?

OK, these are all the comments I have for now. I hope they are helpful.

Cheers,
Scott

Scott Oser  oser@physics.ubc.ca
Assistant Professor of Physics / Canada Research Chair in Origins
University of British Columbia, Dept of Physics & Astronomy
6224 Agricultural Road, Vancouver BC V6T 1Z1
Phone: 604-822-3191   Fax: 604-822-5324