
Update index.rst #162

Open
wants to merge 1 commit into master

Conversation

javierrico

It is not correct to define the radial distance with respect to the "source position". The source could be anywhere, there could be several, or none.
I propose to change "source position" to "pointing direction". But there can also be more than one pointing direction, e.g. in divergent observation mode, so it would probably be better to use "center of the field of view" and define it as, e.g., the direction with respect to which the observed field of view is symmetric in terms of exposure (this definition needs to be polished a bit, but you probably see the point...)

@jknodlseder
Collaborator

jknodlseder commented Jun 18, 2020 via email

@maxnoe
Member

maxnoe commented Jun 18, 2020

I think it's actually both. But in the context of full enclosure vs. point-like, @javierrico is correct.
The difference that matters is that IRFs are stored as functions of field-of-view coordinates, not just at a single position in the FoV.

The PSF is stored vs. the true gamma direction, but it also depends on field-of-view coordinates.

So I would say that at this point in the text, the relevant coordinates are indeed relative to the pointing position.

@TarekHC
Member

TarekHC commented Jun 18, 2020

Just to prolong the discussion... I agree with @javierrico that the current definitions are not optimal. Although I would definitely not use "pointing direction".

In full-enclosure IRFs, each PSF is stored as a function of the assumed source position, meaning each bin within the field of view would correspond to a different assumed source position. The pointing direction is usually understood as the center of the FoV (forgetting about divergent pointing). The PSF is only calculated with respect to the center of the FoV in the bin falling exactly at the center, so I would certainly not use this within the definition.

How about:

  • Point-like IRF: IRF components are calculated after applying a cut in direction offset, assuming the position of a point-like source is known.
  • Full-enclosure IRF: all IRF components are stored over the whole FoV without any direction cut. This IRF makes it possible to perform a 3D analysis (in energy and direction) for any source in the FoV, as it does not assume a fixed source position.
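To make the contrast concrete, here is a minimal sketch of what the two storage layouts might look like as arrays (binning and variable names are hypothetical, purely for illustration; this is not the actual GADF schema):

```python
import numpy as np

# Hypothetical binning, for illustration only (not the actual GADF schema).
n_energy_true = 20   # bins in true energy (ENERG)
n_fov_offset = 6     # bins in offset from the FoV centre (THETA)
n_psf_offset = 40    # bins in offset from the assumed source position (RAD)

# Point-like IRF: a direction cut has already been applied, so e.g. the
# effective area is just a function of true energy per FoV-offset bin.
aeff_pointlike = np.zeros((n_energy_true, n_fov_offset))

# Full-enclosure IRF: no direction cut; the PSF keeps an extra axis for
# the offset of the reconstructed direction from the true direction.
psf_full_enclosure = np.zeros((n_energy_true, n_fov_offset, n_psf_offset))

print(aeff_pointlike.shape)      # (20, 6)
print(psf_full_enclosure.shape)  # (20, 6, 40)
```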

@jknodlseder
Collaborator

jknodlseder commented Jun 18, 2020 via email

@JouvinLea
Collaborator

JouvinLea commented Jun 18, 2020

I agree with @TarekHC on the last part, but point-like IRFs are also stored over the whole FoV (what is called THETA in the DL3 format). For example, in HESS, point-like sources are observed at different offsets from the camera center, not only 0.4°, so point-like IRFs are also defined across the FoV and you don't need to know the source position.
It is just that those IRFs can only be used for point-like analysis!

The only difference between full-enclosure and point-like IRFs is that to produce the point-like ones you apply a cut on the offset from the true MC position, which you don't do for the full-enclosure ones. But both are defined at different offsets from the center of the FoV.
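As a sketch of the production difference described here, a point-like IRF keeps only the simulated events that pass a cut on the offset from the true (MC) direction; a full-enclosure IRF keeps all events and stores the PSF instead (toy numbers and an arbitrary exponential PSF model, illustration only):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy simulated events (illustration only): angular offset in degrees of
# the reconstructed direction from the true (MC) direction, drawn from
# an arbitrary exponential PSF model with 0.1 deg scale.
n_events = 100_000
offset_from_true = rng.exponential(0.1, n_events)

theta_cut = 0.2  # deg; hypothetical direction cut

# Point-like IRFs keep only events inside the cut, so their effective
# area already includes the cut efficiency ...
passed = offset_from_true < theta_cut
# ... while full-enclosure IRFs keep all events and store the PSF shape.
print(f"{passed.mean():.2f} of events survive the direction cut")
```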

@maxnoe
Member

maxnoe commented Jun 18, 2020

You always assume a true source position when you apply an IRF, the true position (as well as true energy) is an argument of the IRF.

Yes, but I think this is not the context of this sentence. It is specifically explaining the difference between Full-Enclosure and Point-Like. And there, storing multiple parameterizations across the field of view without a directional cut is the important property.

@jknodlseder
Collaborator

jknodlseder commented Jun 18, 2020 via email

@TarekHC
Member

TarekHC commented Jun 18, 2020

I agree with @TarekHC on the last part, but point-like IRFs are also stored over the whole FoV (what is called THETA in the DL3 format). For example, in HESS, point-like sources are observed at different offsets from the camera center, not only 0.4°, so point-like IRFs are also defined across the FoV and you don't need to know the source position.

Yep, Lea, you are absolutely right.

And I must say, I also agree with @jknodlseder. The definition was indeed correct: the difference between point-like and full-enclosure IRFs is that in the FE case they are stored "as a function of the offset with respect to the source position". That is completely right, but I believe it would be easier to understand for non-experts if we improved the text.

Here's another try:

  • Point-like IRF: IRF components are calculated after applying a cut in direction offset, assuming the source is point-like. IRF components are stored just as a function of true energy.
  • Full-enclosure IRF: all IRF components are calculated without any direction cut, as a function of true energy and the offset with respect to the source position. These IRFs can be used for the analysis of any source (point-like, extended or diffuse) via a 3D analysis (1D in energy, 2D in direction).

Although I must say, I don't even fully like this description: saying "offset with respect to the source position" assumes we will always use a single dimension for the direction offset. If an IRF component (for instance the PSF) is not symmetric in offset, does this definition still stand?

Maybe change "and the offset with respect to the source position" to "the direction with respect to the source position", which is vaguer and would accommodate using 2D for the offset.

@JouvinLea
Collaborator

But I think it is still confusing, no?
I mean, for the point-like IRFs, both the effective area and the energy dispersion depend on the true energy and on the offset from the camera center (THETA). The energy dispersion additionally depends on the reconstructed energy via the MIGRA variable.
This is exactly the same dependence for the full-enclosure IRFs. Only the PSF has an extra dependency on the offset from the source position (RAD); the other components don't, so I would not add that to the global description.
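The axis dependencies under discussion can be summarized in a small sketch, using the DL3/GADF column names as I understand them (treat this as an illustration, not the normative schema; e.g. the background may be stored vs. THETA rather than DETX/DETY depending on the format variant):

```python
# Axis dependencies per IRF component, as discussed above. A sketch
# using DL3/GADF-style column names, not the normative schema.
irf_axes = {
    "aeff":  ["ENERG (true)", "THETA (offset from FoV centre)"],
    "edisp": ["ENERG (true)", "MIGRA (E_reco over E_true)", "THETA"],
    "psf":   ["ENERG (true)", "THETA", "RAD (offset from true direction)"],
    "bkg":   ["ENERG (reconstructed)", "DETX", "DETY"],
}

# Only the PSF carries the RAD axis; effective area and energy dispersion
# share the same true-energy and THETA dependence for both IRF types.
for name, axes in irf_axes.items():
    print(f"{name}: {', '.join(axes)}")
```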

@jknodlseder
Collaborator

jknodlseder commented Jun 18, 2020 via email

@lmohrmann
Collaborator

I agree with @JouvinLea that "IRF components are stored just as a function of true energy" is not necessarily true for point-like IRFs, or is it? I would argue that it is still possible to store the IRFs for different assumed source positions in the FoV (that's the THETA axis). One just applies a Theta2 cut in addition, such that effective area and energy dispersion are given only for events within that cut.

@JouvinLea
Collaborator

JouvinLea commented Jun 18, 2020

@jknodlseder
But it is not true that all IRF components are stored for the full-enclosure case as a function of the distance from the source position. Only the PSF is.
Maybe for both point-like and full-enclosure we could say that all IRF components are stored as a function of true energy and position in the FoV. Although for full-enclosure that is not quite true either: the background IRF does not depend on the true energy.

@maxnoe
Member

maxnoe commented Jun 18, 2020

But it is not true that all IRF components are stored for the full-enclosure case as a function of the distance from the source position. Only the PSF is.

Yes, this is what I was talking about.

Although for full-enclosure that is not quite true either: the background IRF does not depend on the true energy.

Good point.

@TarekHC
Member

TarekHC commented Jun 18, 2020

Yes, @JouvinLea is right: only the PSF is stored vs. offset.

I don't consider the background an IRF. It is a model, and it is included here just because it is useful. An IRF would be the acceptance and resolution for protons, another for helium, etc...

Let's take another shot...

  • Point-like IRF: IRF components are calculated after applying a cut in direction offset, assuming the source is point-like. Across the field of view, IRF components are stored as a function of true energy.

  • Full-enclosure IRF: no direction cut is applied, and the IRF is computed as a function of true energy and the offset with respect to the source position. This IRF can be used for the analysis of any source (point-like, extended or diffuse) via a 3D analysis (1D in energy, 2D in direction).

The key difference here is that now we don't say all IRF components are stored as a function of source position, but the IRF indeed takes it into account (the IRF being the combination of these components).

@jknodlseder
Collaborator

jknodlseder commented Jun 18, 2020 via email

@JouvinLea
Collaborator

@TarekHC
Yes, I think it is much better!!
I would maybe add "across the field of view" for the full-enclosure case too, because it is strange to only see it in the point-like description, since both depend on THETA (the offset from the center of the FoV).
And OK to add "the distance from the source position" for full-enclosure! If you don't say all IRFs, it works (-:(-:

@TarekHC
Member

TarekHC commented Jun 18, 2020

@jknodlseder : is "the IRF is computed as a function of true energy and source position" really correct? The PSF is not computed as a function of the source position, but as a function of the event direction with respect to that source position, no?

Maybe you were referring to the same point raised by @JouvinLea, to address the fact that we were not explicitly saying "across the FoV". Even if we are all tired... Let's go for the hopefully last try:

  • Point-like IRF: IRF components are calculated after applying a cut in direction offset, assuming the source is point-like. Across the field of view, IRF components are stored as a function of true energy.

  • Full-enclosure IRF: no direction cut is applied, and the IRF is computed across the field of view as a function of true energy and direction with respect to the source position. This IRF can be used for the analysis of any source (point-like, extended or diffuse) via a 3D analysis (1D in energy, 2D in direction).
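For context, the "3D analysis (1D in energy, 2D in direction)" mentioned above amounts to binning events in reconstructed energy plus two sky coordinates and evaluating the IRF per spatial pixel. A toy counts-cube sketch (all numbers and variable names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy event list (hypothetical numbers): reconstructed energy plus two
# field-of-view coordinates per event.
n_events = 10_000
energy_reco = rng.uniform(0.1, 10.0, n_events)  # TeV
lon = rng.uniform(-2.0, 2.0, n_events)          # deg
lat = rng.uniform(-2.0, 2.0, n_events)          # deg

# The counts cube: 1D in energy, 2D in direction. A full-enclosure IRF
# can then be evaluated in each spatial pixel, with no fixed source
# position assumed.
counts, edges = np.histogramdd(
    np.column_stack([energy_reco, lon, lat]),
    bins=(10, 40, 40),
)
print(counts.shape)  # (10, 40, 40)
```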

Let's proceed this way: thumbs up if you like it. If you don't please copy-paste and apply the changes until anyone gets thumbs up from the people involved in the discussion.

@jknodlseder
Collaborator

jknodlseder commented Jun 18, 2020 via email

@TarekHC
Member

TarekHC commented Jun 18, 2020

Hi @jknodlseder,

  • Do you propose to just remove the "via a 3D analysis (1D in energy, 2D in direction)"?
  • Or do you propose to change "the IRF is computed across the field of view as a function of true energy and direction with respect to the source position" sentence?

It would help if you copy-pasted the text and applied the changes you see as reasonable, as I requested... :)

@javierrico
Author

I just have a couple more comments in case they are useful:

  • I think talking about a "source" when defining the IRF is very confusing, particularly for what you call the full-enclosure IRF, which for me is just "the IRF". The IRF does not depend on any source; it is just a characteristic of your detector (basically the product of the PDFs for the different estimators, as written in the most recent post by @jknodlseder), and therefore I find even mentioning the word "source" in its definition very, very confusing.

  • I think you should explain in this text that the point-like IRF is in principle formally not needed, because it can be computed from the FE one, and remark that it is included for historical reasons and for convenience, given how many of the observed sources can be considered point-like given the typical PSF of our instruments.

  • I think if you want to be really precise you need to go for the "more mathematical" explanation proposed by @jknodlseder for the FE case, and use it also to explain the point-like case. If you give the mathematical expressions of both versions of the IRF and define the nomenclature precisely, then it will be easier to explain and understand. Allow me to point you to this review article: https://arxiv.org/abs/2003.13482, where I have recently explained what the IRF is (FE in your terms, in Eq. 17) and also what I call the "signal morphology-averaged effective area" (Eq. 24), which can be particularized for point-like sources (in which case the equation loses its DM dependence); I believe that is the Aeff entering what you call the "point-like IRF", multiplied by the PDF for the energy estimator.
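For reference, one common factorized form of the full-enclosure IRF (my own notation, written as an assumption; the exact expression in Eq. 17 of the cited review may differ, e.g. in how correlations between components are treated) is:

```latex
R(\hat{p}, \hat{E} \mid p, E) =
  A_{\mathrm{eff}}(p, E) \,
  \mathrm{PSF}(\hat{p} \mid p, E) \,
  E_{\mathrm{disp}}(\hat{E} \mid p, E)
```

where $p$ and $E$ are the true direction and energy, hats denote reconstructed quantities, and $\mathrm{PSF}$ and $E_{\mathrm{disp}}$ are normalized PDFs. A point-like effective area then follows by integrating the PSF term over the direction cut.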

@TarekHC
Member

TarekHC commented Jun 19, 2020

@javierrico I think it is a great idea. Maybe the person that already opened a pull request should do it? :D

As always, I would be pragmatic: first try to converge on a simple couple of sentences describing both IRF types (a quick change, as originally intended in this PR, I guess), and later on, if anyone has time, write a complete definition of the IRF (which is indeed a great idea and would be useful for the whole IACT community!). The solution could very well be to leave the text as it is, and create an issue to put a complete description of the IRF in the to-do list of the repo.

Just a quick comment on this:

I think you should explain in this text that the point-like IRF is in principle formally not needed, because it can be computed from the FE, and remark that it's included in the IRF for historical reasons and because of convenience given how many of the observed sources can be considered point-like, given the typical PSF of our instruments.

Note the IRF @jknodlseder shared does not take into account any cross-correlation between IRF components. Unfortunately, there is one: the events with the best angular resolution are generally the events with the best energy resolution (due to internals of the IACT technique; for CTA, mainly event multiplicity). This, for instance, allows point-like IRFs to have better energy resolution, so completely dropping them is probably not what we want. Event types could mitigate this effect, but until they are implemented, I believe we will need both.

@jknodlseder
Collaborator

jknodlseder commented Jun 19, 2020 via email

@bkhelifi
Collaborator

Hi all,
1/ I am not in favor of speaking of a 'source' for IRFs. In Galactic science there are several sources in the FoV, as well as diffuse emission that is not really a source. One should speak about sky positions, of coordinates relative to the center of the FoV (to deal with all types of pointings and asymmetric sub-arrays). I support @javierrico's argument.

2/ The dependency of the PSF on its parameters, i.e. the discussion about the factorisation, depends on the sub-array. For the foreseen northern sub-array, the PSF will very probably not be azimuthally symmetric... And as @TarekHC says, one can have a correlation between PSF and Edisp for some analysis configurations and some energy ranges (as seen several times in meetings).
So one cannot make strong generalizations. However, it is wise to spell out the factorisation in the documentation, making the different assumptions clear!

  1. The Bkg model is an IRF, obviously. And it is indeed the only IRF stored in reconstructed energy.

  2. In the documentation, I think one should be extremely pedagogical, in particular by always specifying which energy is used, reconstructed or true. Readers might not always have a precise view on this point.

  3. It has been mentioned that point-like IRFs are formally not needed. I will not comment on this statement, as it is linked to user choices, or choices of CTAO, or preferences for ctapipe output, etc. However, the purpose of gadf is not to make choices in the name of all, but to deal with the different known use cases. And, today, point-like IRFs are a use case. So I am in favor of also treating the point-like IRFs, in a section explaining the difference with the (default) full-enclosure IRFs.
