[Bug] bulkdata retrieval for this study is broken since v3.9.2 #4658
Comments
Can you test it with #4262 and the correct configuration for that enabled? |
Just do a build from the last commit in the PR? I'll try that now. |
@wayfarer3130 Thank you very much for the quick response, your suggestion seems to work! I checked out the commit you mentioned, created a new branch, cherry-picked just the 2 small changes that I needed, then did a new docker image build and tested the resulting image.
Here is a PR in my fork, just to show the diff: https://github.com/whage/Viewers/pull/1/files
As I understand it, the "fix" you linked is some earlier state of the repo that was changed later and is no longer there in the latest releases. Should the code be reverted to that state? What do you think the solution is? How can I assist? |
The problem is that there are differences between the DICOMweb standard, the behaviour of different implementations, and the underlying HTTP standard.
The DICOMweb standard requires unquoted transfer-syntax parameters, while the HTTP standard prohibits unquoted values that contain a * or / character. Thus, transferSyntax=* is blocked by several PACS systems, blocking the overall request. Quoting it, however, requires an extra CORS request. In either case, the whole request type actually goes against the DICOMweb standard: the correct way of requesting compressed images is to request the image types specifically, using either multiple accept headers or an image/* request. But that is not well supported by PACS servers.
In the end, we decided to make it configurable, so if you take a look at the current 3.10 master branch, you will see that you can select the accept header to use. You will also see that you can externally set the PUBLIC_URL as a build parameter, using something like:
docker build . --build-arg PUBLIC_URL=/ohif/
when you build the docker component from the top level of the OHIF repository.
So, if you set the acceptHeader in your config file to a string, you will notice it gets used directly, so you can avoid any changes in your code, assuming that works in the version you are testing with. It does work in master, but I don't know about the 3.9.2 branch.
Bill
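For reference, the accept-header variants described above might look roughly like the following. These are illustrative strings only, not the exact values any particular OHIF version sends; verify them against your build and PACS.

```js
// Illustrative WADO-RS Accept header values (check against your OHIF version and PACS).

// DICOMweb-style unquoted parameters. The bare "/" and "*" are what some PACS
// front-ends and HTTP stacks reject:
const unquoted = 'multipart/related; type=application/octet-stream; transfer-syntax=*';

// HTTP-style quoted parameters. The '"' character makes the Accept header value
// non-CORS-safelisted, which is the extra CORS preflight request mentioned above:
const quoted = 'multipart/related; type="application/octet-stream"; transfer-syntax="*"';

// Requesting a compressed image type explicitly, as the DICOMweb standard intends:
const byImageType = 'multipart/related; type="image/jpeg"';
```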
|
I appreciate your comments about the accept header and the PUBLIC_URL param, but those are out of scope here, I think. I just showed you those changes so that we have a complete picture of what I was building and running.
I had an open issue about the accept header problem: #4369. I closed it myself because I found that it was fixed in a later version. However, in later versions the bulkdata handling was broken, which is why I reported this issue.
The bulkdata problems are solved by the commits you showed me (onto which I cherry-picked my PUBLIC_URL and accept header changes for testing), but AFAIK that is old code (from about 6 months ago) that has been changed since then, and the later implementation just doesn't work with my test data. The changes from August break it for my use case: https://github.com/OHIF/Viewers/blame/v3.9.2/extensions/default/src/DicomWebDataSource/index.js#L390
So to reiterate: my accept header problems are solved. The PUBLIC_URL thing is also solved; your suggestion of using it as a build-arg is better and I'll try that, but it is solved nonetheless. What is not solved is the bulkdata processing in the latest versions.
Can we bring this working piece of bulkdata handling code into the latest versions, or would that cause something else to not work? Can we expect a new version where everything is working?
Thanks a lot for your support! |
The bulkdata processing works in the latest version, but you have to set the right config properties for the datasource so that it works. Try using it with acceptHeaders defined to something simple, or with the quote parameters set.
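For anyone landing here, a minimal sketch of what such a datasource configuration could look like. Only the acceptHeader setting and the idea of a quote-related toggle are named in this thread; the URLs, values, and other key names below are placeholders or assumptions to check against the config schema of the OHIF version you are running.

```js
// Hypothetical config/default.js excerpt; URLs are placeholders and several key
// names are assumptions, so confirm them against your OHIF version's documentation.
window.config = {
  dataSources: [
    {
      namespace: '@ohif/extension-default.dataSourcesModule.dicomweb',
      sourceName: 'dicomweb',
      configuration: {
        friendlyName: 'My PACS',
        qidoRoot: 'https://pacs.example.com/dicom-web',
        wadoRoot: 'https://pacs.example.com/dicom-web',
        wadoUriRoot: 'https://pacs.example.com/wado',
        imageRendering: 'wadors',
        thumbnailRendering: 'wadors',
        // "Something simple", as suggested above; whether this option takes a string
        // or an array may differ between 3.9.x and master:
        acceptHeader: 'multipart/related; type=application/octet-stream; transfer-syntax=*',
        // Assumed name for the "quote parameters" toggle; verify it exists in your version:
        // omitQuotationForMultipartRequest: true,
      },
    },
  ],
};
```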
|
Describe the Bug
We are using the ohif/app:v3.9.2 docker image.
When trying to view the following study, an error popup is shown in the viewer when execution enters this line: https://github.com/OHIF/Viewers/blame/v3.9.2/extensions/default/src/DicomWebDataSource/index.js#L390
naturalized is null, and calling Object.keys(naturalized) results in some exception handler that shows the popup. This is as far as I could trace the error back.
Anonymized study: https://drive.google.com/file/d/1YLerqJYESb4U_KFsPKR10B7F5Ev_iNlZ/view?usp=drive_link
As far as I can see, the relevant code was changed in 3.9.2; it was still working in 3.8.5.
@wayfarer3130, can you please take a look? The affected lines seem to come from your changes from a few months ago.
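A minimal illustration of the reported failure mode follows; this is not the actual OHIF code path, just the shape of the error described above.

```js
// What the issue describes: the naturalized dataset comes back null for this study,
// and enumerating its keys throws, which the viewer surfaces as the error popup.
const naturalized = null; // result of naturalizing the retrieved metadata/bulkdata
Object.keys(naturalized); // TypeError: Cannot convert undefined or null to object
```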
Steps to Reproduce
Load the study into a dataSource and try to view it using ohif/app:v3.9.2
https://drive.google.com/file/d/1YLerqJYESb4U_KFsPKR10B7F5Ev_iNlZ/view?usp=drive_link
The current behavior
An error popup is shown.
The expected behavior
The "naturalization" (not sure yet what it means) shouldn't result in a null object, as it didn't in the previous version.
The images should be viewable.
OS
Linux
Node version
18.16.1 (as defined in the Dockerfile of v3.9.2)
Browser
Version 131.0.6778.85 (Official Build) (64-bit)