Potential enhancement for using BPP NNLS solver #15

Open
mvfki opened this issue Feb 7, 2024 · 0 comments
mvfki commented Feb 7, 2024

Just got to know that we are actually using a BPP NNLS solver. In that case I would really recommend updating the implementation to make full use of that solver's memory efficiency.

For example, at _iNMF_ANLS.py line 135, where we start to solve H, we do something like:

H = nnls(A, B)

Here A is the vertical concatenation of W+V_i and sqrtLambda*V_i, and B is the vertical concatenation of X_i and a zero block of the same shape. Looking into the BPP NNLS code, we can see that it takes a third argument, is_input_prod, which you can set to True if you pass np.matmul(A.T, A) and np.matmul(A.T, B) instead, and the result is equivalent. The best part of this design is that we never have to materialize A or B, which are giant, just to obtain A.T*A or A.T*B. A quick scratch-book calculation gives AtA = (W+V_i).T * (W+V_i) + lambda*(V_i.T * V_i) and AtB = (W+V_i).T * X_i, since the zero block in B drops out. This would be more efficient in both speed and memory usage. The same trick applies to the updates of V and W, as well as to the H-solving step in online iNMF.
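To make this concrete, here is a minimal sketch of the two variants. The solver is passed in as a parameter since I'm only going by the third-argument behavior described above (nnls(..., is_input_prod=True) accepting the precomputed products); the names and shapes are illustrative, not a drop-in patch for _iNMF_ANLS.py:

```python
import numpy as np

def solve_H_naive(W, V_i, X_i, lam, nnls):
    # Current approach: explicitly stack the regularized least-squares system.
    A = np.vstack([W + V_i, np.sqrt(lam) * V_i])  # shape (2 * n_genes, k), giant
    B = np.vstack([X_i, np.zeros_like(X_i)])      # shape (2 * n_genes, n_cells), giant
    return nnls(A, B)

def solve_H_prod(W, V_i, X_i, lam, nnls):
    # Proposed approach: form the small Gram products directly,
    # without ever materializing A or B.
    WV = W + V_i
    AtA = WV.T @ WV + lam * (V_i.T @ V_i)  # k x k
    AtB = WV.T @ X_i                       # k x n_cells; zero block of B drops out
    return nnls(AtA, AtB, is_input_prod=True)
```

Since AtA is only k x k and AtB is k x n_cells, they are far smaller than the stacked (2 * n_genes) x k and (2 * n_genes) x n_cells matrices they replace.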

@mvfki mvfki added the enhancement New feature or request label Feb 7, 2024
@theAeon theAeon self-assigned this Jun 18, 2024