First set of lhe patches (towards random color/helicity): upgrade upstream MG5aMC to vecMLM and port my vector.inc patches upstream #559
Merged
Conversation
Checked that "./CODEGEN/generateAndCompare.sh gg_ttgg" works fine (also with --mad)
Revert "[lhe] add DEV to CODEGEN as an as-is copy of PROD and enable it" This reverts commit ec042acb1ec9fe8ff6b0c1313347840ccbb24b01.
Checked that "./CODEGEN/generateAndCompare.sh gg_tt" works fine without --mad : BUT it fails with --mad
…MLM and --nopatch
…_dsig1.f, auto_dsig.f, matrix1.f
This commit formally merges my "patches" and Olivier's color/helicity changes in these three files (the only three files affected by Olivier's changes), BUT I am still missing all of my other "patches"
…three files auto_dsig1.f, auto_dsig.f, matrix1.f
…MLM and all patches, EXCEPT the three files auto_dsig1.f, auto_dsig.f, matrix1.f
… to the three files auto_dsig1.f, auto_dsig.f, matrix1.f
cd gg_tt.mad/SubProcesses/P1_gg_ttx/
git checkout 3ad6a11 auto_dsig.f auto_dsig1.f matrix1.f
This formally completes the merge of Olivier's changes and my patches, but I have not tried to build yet!
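A possible sanity check after the partial checkout above (a hedged sketch, not part of the original commit): from within gg_tt.mad/SubProcesses/P1_gg_ttx/, list which files still differ from 3ad6a11; auto_dsig.f, auto_dsig1.f and matrix1.f should no longer appear in the output.
git diff --stat 3ad6a11 -- .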
… also reenable all patches in patchMad.sh
./CODEGEN/generateAndCompare.sh gg_tt --mad --nopatch
git diff --no-ext-diff -R gg_tt.mad/Source/dsample.f gg_tt.mad/Source/genps.inc gg_tt.mad/SubProcesses/addmothers.f gg_tt.mad/SubProcesses/cuts.f gg_tt.mad/SubProcesses/makefile gg_tt.mad/SubProcesses/reweight.f > CODEGEN/MG5aMC_patches/PROD/patch.common
git diff --no-ext-diff -R gg_tt.mad/SubProcesses/P1_gg_ttx/auto_dsig.f gg_tt.mad/SubProcesses/P1_gg_ttx/driver.f gg_tt.mad/SubProcesses/P1_gg_ttx/matrix1.f > CODEGEN/MG5aMC_patches/PROD/patch.P1
git diff --no-ext-diff -R gg_tt.mad/SubProcesses/P1_gg_ttx/auto_dsig1.f > CODEGEN/MG5aMC_patches/PROD/patch.auto_dsig1.f
git checkout gg_tt.mad
…s not build yet!
ccache /cvmfs/sft.cern.ch/lcg/releases/gcc/11.2.0-ad950/x86_64-centos7/bin/gfortran -w -fPIC -O3 -ffast-math -fbounds-check -ffixed-line-length-132 -w -cpp -c -DMG5AMC_MEEXPORTER_CUDACPP auto_dsig1.f -I../../Source/ -fopenmp -o auto_dsig1_cudacpp.o
auto_dsig1.f:518:62:
  518 | CALL FBRIDGESEQUENCE(FBRIDGE_PBRIDGE, P_MULTI, ALL_G, OUT2, 0) ! 0: multi channel disabled for helicity filtering
Error: Symbol ‘all_g’ at (1) has no IMPLICIT type
auto_dsig1.f:569:10:
  569 | JAMP2_MULTI(0,IVEC) = 2 ! workaround for oliviermattelaer/mg5amc_test#14
Error: Function ‘jamp2_multi’ at (1) has no IMPLICIT type
….inc" to define ALL_G
… now compiles... I have not tried to run it though
Revert "[lhe] fix the second build error in gg_tt.mad, which now compiles... I have not tries to run it though" This reverts commit fa2c4c73160c5ccd8920685dfc243fa384eb7023. ./tmad/teeMadX.sh -ggtt ... Executing ' ./build.none_d_inl0_hrd0/cmadevent_cudacpp < /tmp/avalassi/input_ggtt_x1_cudacpp > /tmp/avalassi/output_ggtt_x1_cudacpp' At line 124 of file addmothers.f Fortran runtime error: Index '0' of dimension 2 of array 'jamp2' below lower bound of 1 Error termination. Backtrace: I think that this is clearly related to oliviermattelaer/mg5amc_test#14
…h now compiles... I have not tried to run it though
Essentially, add another workaround for oliviermattelaer/mg5amc_test#14
Revert "[lhe] second attempt to fix the second build error in gg_tt.mad, which now compiles... I have not tried to run it though" This reverts commit 0d860d4. Executing ' ./build.none_d_inl0_hrd0/cmadevent_cudacpp < /tmp/avalassi/input_ggtt_x1_cudacpp > /tmp/avalassi/output_ggtt_x1_cudacpp' At line 124 of file addmothers.f Fortran runtime error: Index '0' of dimension 2 of array 'jamp2' below lower bound of 1
I have understood the problem: I was still using my old pre-patched version of addmothers.f! I must first go back to vecMLM out of the box as much as possible...
…dmothers.f
Will then regenerate - this is a third attempt to fix the second build error in addmothers.f.
This is related to oliviermattelaer/mg5amc_test#14.
NB eventually I must severely clean up the codegen patches and minimise them...
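A hedged check of the diagnosis above (the patch file name comes from the regeneration commands earlier in this thread): once the out-of-the-box vecMLM addmothers.f is used, the PROD patches should no longer need to touch it at all.
grep -c "addmothers.f" CODEGEN/MG5aMC_patches/PROD/patch.common  # expect 0 once the stale patch is gone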
(NB I need to fix the first build error in auto_dsig1.f)
… coupl.inc" to define ALL_G (simply cherry-pick 4255510)
…_tt.mad, which now compiles... I have not tried to run it though (simply cherry-pick 90ebd3d)
./CODEGEN/generateAndCompare.sh gg_tt --mad --nopatch
git diff --no-ext-diff -R gg_tt.mad/Source/dsample.f gg_tt.mad/Source/genps.inc gg_tt.mad/SubProcesses/addmothers.f gg_tt.mad/SubProcesses/cuts.f gg_tt.mad/SubProcesses/makefile gg_tt.mad/SubProcesses/reweight.f gg_tt.mad/SubProcesses/unwgt.f > CODEGEN/MG5aMC_patches/PROD/patch.common
git diff --no-ext-diff -R gg_tt.mad/SubProcesses/P1_gg_ttx/auto_dsig.f gg_tt.mad/SubProcesses/P1_gg_ttx/driver.f gg_tt.mad/SubProcesses/P1_gg_ttx/matrix1.f > CODEGEN/MG5aMC_patches/PROD/patch.P1
git diff --no-ext-diff -R gg_tt.mad/SubProcesses/P1_gg_ttx/auto_dsig1.f > CODEGEN/MG5aMC_patches/PROD/patch.auto_dsig1.f
git checkout gg_tt.mad
… comparison, much better...
./tmad/teeMadX.sh -ggtt
*** (2-none) Compare CMADEVENT_CUDACPP x1 events.lhe to MADEVENT events.lhe reference (with dummy colors and helicities) ***
...
6065,6068c6065,6068
< 21 -1 0 0 0 0 0.00000000000E+00 0.00000000000E+00 0.64385683044E+02 0.64385683044E+02 0.00000000000E+00 0. 2.
< 21 -1 0 0 0 0 -0.00000000000E+00 -0.00000000000E+00 -0.55878442841E+03 0.55878442841E+03 0.00000000000E+00 0. 2.
< 6 1 1 2 0 0 -0.44747773846E+01 0.39692677735E+01 -0.37457994480E+03 0.41264380980E+03 0.17300000000E+03 0. 2.
< -6 1 1 2 0 0 0.44747773846E+01 -0.39692677735E+01 -0.11981880056E+03 0.21052630165E+03 0.17300000000E+03 0. 2.
---
> 21 -1 0 0 0 0 0.00000000000E+00 0.00000000000E+00 0.64385683044E+02 0.64385683044E+02 0.00000000000E+00 0. 0.
> 21 -1 0 0 0 0 -0.00000000000E+00 -0.00000000000E+00 -0.55878442841E+03 0.55878442841E+03 0.00000000000E+00 0. 0.
> 6 1 1 2 0 0 -0.44747773846E+01 0.39692677735E+01 -0.37457994480E+03 0.41264380980E+03 0.17300000000E+03 0. 0.
> -6 1 1 2 0 0 0.44747773846E+01 -0.39692677735E+01 -0.11981880056E+03 0.21052630165E+03 0.17300000000E+03 0. 0.
ERROR! events.lhe.cpp.1 and events.lhe.ref.1 differ!
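The failing check above boils down to a line-by-line diff of the two generated LHE files (the file names come from the error message; their location in the test working area is an assumption):
diff events.lhe.ref.1 events.lhe.cpp.1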
…acpp now correctly choose one helicity per event?
…city choice, but it does not work correctly! I get a different error in the LHE comparison of helicities (e.g. for the last event the fortran reference gives -1 and cudacpp gives 2?)
6065,6068c6065,6068
< 21 -1 0 0 0 0 0.00000000000E+00 0.00000000000E+00 0.64385683044E+02 0.64385683044E+02 0.00000000000E+00 0. 2.
< 21 -1 0 0 0 0 -0.00000000000E+00 -0.00000000000E+00 -0.55878442841E+03 0.55878442841E+03 0.00000000000E+00 0. 2.
< 6 1 1 2 0 0 -0.44747773846E+01 0.39692677735E+01 -0.37457994480E+03 0.41264380980E+03 0.17300000000E+03 0. 2.
< -6 1 1 2 0 0 0.44747773846E+01 -0.39692677735E+01 -0.11981880056E+03 0.21052630165E+03 0.17300000000E+03 0. 2.
---
> 21 -1 0 0 0 0 0.00000000000E+00 0.00000000000E+00 0.64385683044E+02 0.64385683044E+02 0.00000000000E+00 0. -1.
> 21 -1 0 0 0 0 -0.00000000000E+00 -0.00000000000E+00 -0.55878442841E+03 0.55878442841E+03 0.00000000000E+00 0. -1.
> 6 1 1 2 0 0 -0.44747773846E+01 0.39692677735E+01 -0.37457994480E+03 0.41264380980E+03 0.17300000000E+03 0. -1.
> -6 1 1 2 0 0 0.44747773846E+01 -0.39692677735E+01 -0.11981880056E+03 0.21052630165E+03 0.17300000000E+03 0. -1.
ERROR! events.lhe.cpp.1 and events.lhe.ref.1 differ!
… helicity again: must now dummy also cudacpp helicity (from 2 to 0)
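A minimal sketch of what "dummying" the helicity column could look like in the comparison (the sed expression is an illustration and an assumption, not the actual madX.sh code): rewrite the trailing helicity value 2. as 0. on each particle line before diffing.
sed -e 's/ 2\.$/ 0./' events.lhe.cpp.1 > events.lhe.cpp.1.dummyhel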
…he with dummy color/helicity)
…ectroweakFlux.inc
… auto_dsig1.f (NB_PAGE has already been changed upstream to VECSIZE_MEMMAX or VECSIZE_USED)
…TI in auto_dsig1.f
These patches must have been obsolete for a long time; there is no JAMP2_MULTI in the code anymore (see the quick check below)...
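A quick check supporting the claim above (the search path is an assumption):
grep -rn "JAMP2_MULTI" gg_tt.mad/ || echo "no JAMP2_MULTI left in the generated code"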
…rocesses/P1_gg_ttx/matrix1.f
…rocesses/P1_gg_ttx/auto_dsig1.f
./CODEGEN/generateAndCompare.sh gg_tt --mad --nopatch
git diff --no-ext-diff -R gg_tt.mad/Source/dsample.f gg_tt.mad/Source/genps.inc gg_tt.mad/SubProcesses/makefile > CODEGEN/MG5aMC_patches/PROD/patch.common
git diff --no-ext-diff -R gg_tt.mad/SubProcesses/P1_gg_ttx/auto_dsig1.f gg_tt.mad/SubProcesses/P1_gg_ttx/driver.f gg_tt.mad/SubProcesses/P1_gg_ttx/matrix1.f > CODEGEN/MG5aMC_patches/PROD/patch.P1
git checkout gg_tt.mad
Note that the auto_dsig1.f patches are now in patch.P1, while patch.auto_dsig1.f has been removed. Adapt patchMad.sh accordingly (see the hedged sketch below).
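A hedged sketch of how the two remaining patch files could be applied to a freshly generated process directory (the actual mechanics live in patchMad.sh, which is not shown here, so the working directory and -p level are assumptions):
# from the repository top level, after ./CODEGEN/generateAndCompare.sh gg_tt --mad --nopatch
patch -p1 < CODEGEN/MG5aMC_patches/PROD/patch.common  # Source/ and SubProcesses/ level changes
patch -p1 < CODEGEN/MG5aMC_patches/PROD/patch.P1      # P1_gg_ttx changes, now including auto_dsig1.f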
…t is stable - now try build/run
./CODEGEN/generateAndCompare.sh gg_tt --mad --nopatch
git diff --no-ext-diff -R gg_tt.mad/Source/dsample.f gg_tt.mad/Source/genps.inc gg_tt.mad/SubProcesses/makefile > CODEGEN/MG5aMC_patches/PROD/patch.common
git diff --no-ext-diff -R gg_tt.mad/SubProcesses/P1_gg_ttx/auto_dsig1.f gg_tt.mad/SubProcesses/P1_gg_ttx/driver.f gg_tt.mad/SubProcesses/P1_gg_ttx/matrix1.f > CODEGEN/MG5aMC_patches/PROD/patch.P1
git checkout gg_tt.mad
…t is stable and builds - now try run
./tmad/teeMadX.sh -ggtt +10x
…andom hel/col upstream)
… stable
This completes the first part of the "lhe" patches (towards random color/helicity in lhe files):
- moved to a more recent upstream based on "vecMLM", including Olivier's changes for random hel/col
- adapted the cudacpp machinery accordingly so that tests still succeed (with dummy col/hel)
- moved to my own upstream "vecsize" branch based on vecMLM, where I ported many of my patches
- renamed NB_PAGE_MAX as VECSIZE_MEMMAX and NB_PAGE_LOOP as VECSIZE_USED, as discussed
- simplified my patching strategy over the upstream, now that many patches are backported upstream
- in particular, removed the MG5AMC_patches template replacements madgraph5#491
- regenerated 5 processes mad and 6 processes sa (but only ran a few ggtt.mad tests)
The second step will be to update the upstream further to clarify VECSIZE_USED if used as a function argument. The next step will be to work on the actual random color/helicity.
Hi @oliviermattelaer @roiser the first set of "lhe" patches is now complete in madgraph4gpu. This is based on the upstream mg5amcnlo/mg5amcnlo#23. From the log of 11e1d5d:
I will wait for the CI to succeed and then self-merge.
valassi changed the title from "WIP: upgrade upstream MG5aMC to vecMLM and start integrating random color/helicity in LHE" to "First set of lhe patches (towards random color/helicity): upgrade upstream MG5aMC to vecMLM and port my vector.inc patches upstream" on Dec 9, 2022
All tests succeeded - I self-merge.
This is a WIP MR to start the work on integrating the choice of random color #402 and helicity #403 in the LHE file
The very first step is upgrading the upstream MG5aMC that I use for code generation from nuvecMLM to vecMLM. This is because vecMLM, unlike nuvecMLM, includes a change by @oliviermattelaer in the fortran, to prepare it for this purpose.
See oliviermattelaer/mg5amc_test#24
The current status is that I have a working version with vecMLM, but no random color/helicity yet.
I had to modify the madX.sh script that compares LHE files in fortran and cudacpp. It seems that previously I was writing "0" as the helicity in cudacpp, while now I am writing "2" (probably some default value?).
I am starting to think that I will do this in several MRs, this first one being only the upgrade to vecMLM. I can also take this as an opportunity to clean up the code generation a bit. Still to do, in particular:
About the last point, in particular, I am a bit puzzled that the fortran version now seems to run a factor 3-4 faster in the ME calculation... Maybe I need to modify the way the counters are used, as some functions for choosing helicity/color have moved elsewhere (and should still be included in the timing...)