Update index.rst #162
base: master
Conversation
It is not correct to define the radial distance with respect to the "source position". The source could be anywhere, there could be several, or none. I propose to replace "source position" with "pointing direction", but there can also be more than one pointing direction, e.g. in divergent observation mode, so it would probably be better to use "center of the field of view" and define it as, e.g., the direction with respect to which the observed field of view is symmetric in terms of exposure (this definition needs to be polished a bit, but you probably see the point...)
I think that the definition was correct, the PSF gives the probability of measuring an event at a given offset from the (assumed) true source position.
This probability may change over the field of view, which I think is what you have in mind. This change may depend on the offset from the pointing direction (or field of view centre), but there may also be azimuthal changes. This aspect is left open here.
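To make "the probability of measuring an event at a given offset" concrete: for an azimuthally symmetric PSF, integrating the PSF radially gives the containment probability within a given angular distance of the true source position. A minimal Python sketch (the Gaussian profile, the `sigma` value and all names are illustrative assumptions, not part of the spec):

```python
import numpy as np

def gaussian_psf(delta, sigma=0.1):
    """Toy azimuthally symmetric PSF value at offset delta (deg),
    normalized so that the integral PSF(delta) * 2*pi*delta d(delta) = 1."""
    return np.exp(-0.5 * (delta / sigma) ** 2) / (2 * np.pi * sigma ** 2)

def containment(delta_max, sigma=0.1, n=10_000):
    """Probability to measure an event within delta_max of the true position."""
    delta = np.linspace(0.0, delta_max, n)
    return np.trapz(gaussian_psf(delta, sigma) * 2 * np.pi * delta, delta)
```

For this toy profile, `containment` approaches 1 as `delta_max` grows, which is exactly the normalization a full-enclosure PSF must satisfy.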
I think it's actually both. But in the context of full-enclosure vs. point-like, @javierrico is correct. The PSF is stored versus the true gamma-ray direction, but it also depends on the field-of-view coordinates. So I would say that at this position in the text, the relevant coordinates are indeed relative to the pointing position.
Just to prolong the discussion... I agree with @javierrico that the current definitions are not optimal, although I would definitely not use "pointing direction". In full-enclosure IRFs, each PSF is stored as a function of the assumed source position, meaning each bin within the field of view would be a different assumed source position. The pointing direction is usually understood as the center of the FoV (forgetting about divergent pointing). The PSF is only calculated with respect to the center of the FoV in the bin falling exactly at the center, so I would certainly not use this within the definition. How about:

Point-like IRF: IRF components are calculated after applying a cut in direction offset, assuming the position of a point-like source is known.

Full-enclosure IRF: all IRF components are stored over the whole FoV without any direction cut. This IRF allows one to perform a 3D analysis (in energy and direction) for any source in the FoV, as it does not assume a fixed source position.

The text as it was was correct. You always assume a true source position when you apply an IRF; the true position (as well as the true energy) is an argument of the IRF.
I agree with @TarekHC on the last part, but point-like IRFs are also stored over the whole FoV (what is called THETA in the DL3 format). For example, in HESS, point-like sources are observed at different offsets from the camera center, not only 0.4°, so point-like IRFs are also defined across the FoV and you don't need to know the source position. The only difference is that to produce the point-like IRF you apply a cut on the offset from the true MC position, while for the full-enclosure IRF you don't. But both are defined at different offsets from the center of the FoV.
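The distinction described here can be sketched with a toy event list: both IRF types are binned in FoV offset, and the only extra step for the point-like case is a cut on the angular distance between reconstructed and true direction. All names, distributions and the 0.12° cut below are illustrative assumptions, not values from any real instrument:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulated events: offset of the true direction from the FoV centre,
# and angular distance between reconstructed and true direction (deg).
fov_offset = rng.uniform(0.0, 2.0, 100_000)
theta = rng.rayleigh(0.08, 100_000)  # toy PSF-like mis-reconstruction

fov_bins = np.linspace(0.0, 2.0, 5)

# Full-enclosure: count all events per FoV-offset bin (no direction cut).
full_enclosure, _ = np.histogram(fov_offset, bins=fov_bins)

# Point-like: same FoV binning, but keep only events reconstructed within
# a fixed angular cut of the true direction (an assumed 0.12 deg here).
point_like, _ = np.histogram(fov_offset[theta < 0.12], bins=fov_bins)

# The point-like counts are everywhere a subset of the full-enclosure ones.
assert np.all(point_like <= full_enclosure)
```

Both histograms are binned across the whole FoV; only the directional cut differs, which is the point being made above.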
Yes, but I think this is not the context of this sentence. It is specifically explaining the difference between full-enclosure and point-like. And there, storing multiple parameterizations in the field of view without a directional cut is the important property.
That's exactly the context. The difference is that a point-like PSF has a cut in offset with respect to the true position, while the full-enclosure one has no cut in offset.
Yep, Lea, you are absolutely right. And I must say, I also agree with @jknodlseder. The definition was indeed correct: the difference between point-like and full-enclosure IRFs is that in the FE case they are stored "as a function of the offset with respect to the source position". It is indeed completely right, but I believe it would be easier to understand for non-experts if we improved the text. Here is another try:

Point-like IRF: IRF components are calculated after applying a cut in direction offset, assuming the source is point-like. IRF components are stored just as a function of true energy.

Full-enclosure IRF: all IRF components are calculated without any direction cut, as a function of true energy and the offset with respect to the source position. These IRFs can be used for the analysis of any source (point-like, extended or diffuse) via a 3D analysis (1D in energy, 2D in direction).

Although I must say, I don't even fully like this description: saying "offset with respect to the source position" assumes we will always use a single dimension for the direction offset. If an IRF component (for instance the PSF) is not symmetric in offset, does this definition still stand? Maybe change "and the offset with respect to the source position" to "the direction with respect to the source position", which is more vague and would accommodate using 2D for the offset.

But I think it is still confusing, no?

I like the proposal of @TarekHC. Maybe the following adjustment solves the issue:

Point-like IRF: IRF components are calculated after applying a cut in direction offset, assuming the source is point-like. IRF components are stored just as a function of true energy.

Full-enclosure IRF: all IRF components are calculated without any direction offset cut, as a function of true energy and source position. These IRFs can be used for the analysis of any source (point-like, extended or diffuse) via a 3D analysis (1D in energy, 2D in direction).
I agree with @JouvinLea that "IRF components are stored just as a function of true energy" is not necessarily true for point-like IRFs, or is it? I would argue that it's still possible to store the IRFs for different assumed source positions in the FoV (that's the …
@jknodlseder |
Yes, this is what I was talking about.
Good point.
Yes, @JouvinLea is right: only the PSF is stored versus offset. I don't consider the background an IRF. It is a model, and the fact that it is included here is just because it is useful. An IRF would be the acceptance and resolution for protons, another for helium, etc... Let's take another shot:

Point-like IRF: IRF components are calculated after applying a cut in direction offset, assuming the source is point-like. Across the field of view, IRF components are stored as a function of true energy.

Full-enclosure IRF: no direction cut is applied, and the IRF is computed as a function of true energy and the offset with respect to the source position. This IRF can be used for the analysis of any source (point-like, extended or diffuse) via a 3D analysis (1D in energy, 2D in direction).

The key difference here is that now we don't say all IRF components are stored as a function of source position, but the IRF indeed takes it into account (the IRF being the combination of these components).

@TarekHC, why is the background not an IRF? It's the response of the instrument to non-gamma-ray events. It's a model, like the PSF, effective area and energy dispersion. I'm fine with your new proposal if you replace "and the offset with respect to the source position" with "and source position", which also allows for non-symmetric PSFs in the future, and includes the fact that the effective area and energy dispersion are not computed with respect to the offset.
@TarekHC |
@jknodlseder: is "the IRF is computed as a function of true energy and source position" really correct? The PSF is not computed as a function of the source position, but as a function of the event direction with respect to that source position, no? Maybe you were referring to the same point raised by @JouvinLea, to solve the fact that we were not explicitly saying "across the FoV". Even if we are all tired... let's go for the hopefully last try:

Point-like IRF: IRF components are calculated after applying a cut in direction offset, assuming the source is point-like. Across the field of view (FoV), IRF components are stored as a function of true energy.

Full-enclosure IRF: no direction cut is applied, and the IRF is computed across the field of view as a function of true energy and direction with respect to the source position. This IRF can be used for the analysis of any source (point-like, extended or diffuse) via a 3D analysis (1D in energy, 2D in direction).

Let's proceed this way: thumbs up if you like it. If you don't, please copy-paste and apply changes until someone gets thumbs up from the people involved in the discussion.

The IRF answers the question: "How does the instrument respond to a photon at position p with an energy E?" Or, more mathematically: "What is the probability to measure an event with position p' and energy E' if the true position and energy of the incoming photon are p and E?" Normally an IRF is defined as R(p',E' | p,E), where p and E are the true position and energy and p' and E' are the measured (or reconstructed) position and energy. Thus, in its general form, the IRF is a 6-dimensional cube (it could be even higher dimensional if polarisation is considered).

The currently used factorisation is

R(p',E' | p,E) = Aeff(p,E) * PSF(p' | p,E) * Disp(E' | p,E)

which makes Aeff a 3d cube, PSF a 5d cube and Disp a 4d cube. Furthermore, it is currently assumed that the PSF is azimuthally symmetric and only changes with offset angle, hence

PSF(delta | theta,E), with delta = angular distance(p',p) and theta the offset angle with respect to the FoV centre,

which makes the PSF a 3d cube.
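The dimensionality bookkeeping in the factorisation above can be checked mechanically. A toy Python sketch (bin counts and variable names are arbitrary assumptions; the zero-filled arrays are placeholders for the actual response values):

```python
import numpy as np

# Bins per axis (arbitrary toy numbers).
n_x = n_y = 8             # true position p on the sky (2 axes)
n_xp = n_yp = 8           # measured position p' (2 axes)
n_e, n_ep = 20, 20        # true and measured energy
n_theta, n_delta = 6, 30  # FoV offset and angular distance for the reduced PSF

# Components of the general factorisation R(p',E' | p,E):
aeff = np.zeros((n_x, n_y, n_e))             # Aeff(p, E): 3d cube
psf = np.zeros((n_xp, n_yp, n_x, n_y, n_e))  # PSF(p' | p, E): 5d cube
disp = np.zeros((n_ep, n_x, n_y, n_e))       # Disp(E' | p, E): 4d cube

# With the azimuthal-symmetry assumption, the PSF reduces to 3d:
psf_reduced = np.zeros((n_delta, n_theta, n_e))  # PSF(delta | theta, E)
```

This makes it easy to see why the symmetry assumption matters in practice: it collapses the stored PSF from five axes to three.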
Hi @jknodlseder,
It would be great if you copy-pasted the text and applied the changes you see as reasonable, as I requested... :)
I just have a couple more comments in case they are useful:
@javierrico I think it is a great idea. Maybe the person who already opened a pull request should do it? :D As always, I would be pragmatic: first try to converge on a simple couple of sentences describing both IRF types (a quick change, as originally intended in this PR, I guess), and later on, if anyone has time, do a complete definition of the IRF (which is indeed a great idea and would be useful for the whole IACT community!). The solution could very well be to leave the text as it is, and create an issue to put a complete description of the IRF in the to-do list of the repo. Just a quick comment on this:

Note that the IRF @jknodlseder shared does not take into account any cross-correlation between IRF components. Unfortunately, there is one: the events with the best angular resolution are generally the events with the best energy resolution (associated with the internals of the IACT technique, for CTA mainly event multiplicity). This, for instance, allows point-like IRFs to have better energy resolution, so completely dropping them is probably not what we want. Event types could mitigate this effect, but until they are implemented, I believe we will need both.
That's why I wrote "the currently used factorisation". I don't believe it is the best factorisation for IACTs, in particular due to the different time scales involved in an IACT IRF: (1) the fast-changing atmospheric conditions, and (2) the slowly changing instrument characteristics.
Hi all. 2/ The dependency of the PSF on parameters, i.e. the discussion about the factorisation, depends on the sub-array. For the foreseen northern sub-array, the PSF will very probably not be azimuthally symmetric... And as @TarekHC says, one can have a correlation between PSF and Edisp for some analysis configurations and some energy ranges (as seen several times in meetings).