(Initial Prototype): 987 MNC sketch implementation #1000
justaprog wants to merge 19 commits into daphne-project:main from
Conversation
pdamme left a comment:
Thanks for this PR, @justaprog, @laakhdher, et al. Adding support for MNC sketches would make a great contribution to sparsity estimation in DAPHNE. Overall, your code is already in a good initial state, the stand-alone implementation of the MNC sketch and related algorithms as described in the paper looks good to me. Nevertheless, there is still substantial work required before we can merge this PR, most importantly, the current MNC implementation needs to be integrated with the rest of the DAPHNE system, such that we can benefit from MNC when running a DaphneDSL script.
Required points: (need to be addressed before we can merge this PR)
- Finish the implementation of the remaining parts of MNC as described in the paper.
  - Implement the density map estimator over an MNC sketch (your function `Edm()` is currently a stub).
  - Implement the propagation of the MNC sketch over reorganization ops and elementwise binary ops as described in Section 4.2 of the paper.
- Integrate your implementation of MNC with the rest of the DAPHNE system.
  - Make an MNC sketch an optional data property of a matrix (see the developer docs on adding new data properties).
  - Invoke the MNC-based sparsity estimation and MNC sketch propagation during the `InferencePass` (implement the necessary MLIR interfaces or traits, see the developer docs on data properties again).
  - Ensure that the MNC sketch of inputs of the data flow graph can be known. Such inputs could be:
    - Matrices read from files (most important).
      - MNC sketch as an optional part of the file meta data, plus some way of creating that file meta data automatically.
      - MNC sketch creation on-the-fly (optional).
    - Matrices created by various source ops, e.g., `fill()`/`FillOp`, `seq()`/`SeqOp`, `rand()`/`RandMatrixOp`, … (implement the inference interface for the MNC sketch for those ops).
    - Matrix literals (`MatrixConstantOp`).
- Please fix the code style issues to make the CI checks pass (see the contribution guidelines).
Optional points: (recommended, but not critical for merging)
- Apply some small code improvements (see my comments on individual code lines).
- Use the helper function `genGivenVals()` to shorten the code of your unit test cases. Currently, you initialize CSR matrices by hardcoding the values, colIdxs, and rowOffsets arrays, which leads to long and hard-to-read test code. Instead, you could simply pass the (dense) values array to `genGivenVals()` and get its representation as a `CSRMatrix` or `DenseMatrix`. (For small test matrices, it doesn't hurt if you explicitly write the zeros in the code.) You can take inspiration from many existing unit test cases.
- Use the MNC sketch for more accurate, structure-aware sparsity estimation of additional ops, e.g., `SliceRowOp`/`SliceColOp`, `ExtractRowOp`/`ExtractColOp`, etc.
- Implement the MNC sketch propagation for additional ops, e.g., elementwise unary ops, etc.
```cpp
h.hc.assign(h.n, 0);

const std::size_t *rowOffsets = A.getRowOffsets();
const std::size_t *colIdxs = A.getColIdxs();
```
Please use `A.getColIdxs(0)`; otherwise, you may run into errors if `A` is a view into a larger `CSRMatrix`.
```cpp
// --- 3) isDiagonal ---
// We call a matrix "diagonal" if it is square and every non-zero lies on i == j.
if(h.m == h.n && nnzEnd > nnzBegin) {
```
You could also skip the check for a diagonal matrix if h.maxHr > 1 (or h.maxHc > 1), because a diagonal matrix can have at most one non-zero per row/column.
```cpp
const std::size_t m = hA.m;
const std::size_t l = hB.n;

double nnz = 0.0;
```
I recommend not using `double` for the number of non-zeros, because the number of non-zeros is conceptually an integer (e.g., `size_t`). Using `double` could lead to round-off errors (and vastly off numbers when cast to an integer) when adding up many small per-row non-zero counts in a large matrix.
This hint also applies to several more such uses of `double` below.
```cpp
}

// Case 2: Extended count
else if(!hA.her.empty() || !hB.her.empty()) {
```
As the code in the then-branch uses both the extended histograms of hA and hB, I think it should be && instead of || (in contrast to Algorithm 1 in the paper).
```cpp
else if(!hA.her.empty() || !hB.her.empty()) {

// Exact part
for(std::size_t j = 0; j < hA.n; ++j)
```
These two loops both iterate over the same range (as hA.n is the same as hB.m). If you fuse these loops, you need to scan hA.hec only once, which should be more efficient.
```cpp
std::size_t p = (hA.nnzRows - hA.rowsEq1) * (hB.nnzCols - hB.colsEq1);

if(p > 0) {
    double dens = Edm(hA.hc, hB.hr, p);
```
Don't forget to subtract the extended histograms from hA.hc and hB.hr as in Algorithm 1 in the paper.
```cpp
// Case 1: A is diagonal square
if (hA.isDiagonal && hA.m == hA.n && hA.nnzRows == hA.m) {
    hC = hB;
    hC.m = hA.m;
```
This line is unnecessary. As A is a square matrix, the number of rows in B is the same as the number of rows in A, so C automatically gets the right number of rows. The same applies to case 2 below.
```cpp
if (hA.isDiagonal && hA.m == hA.n && hA.nnzRows == hA.m) {
    hC = hB;
    hC.m = hA.m;
    hC.isDiagonal = hB.isDiagonal;
```
This line is unnecessary, as hC is already a copy of hB. The same applies to case 2 below.
```cpp
hC.hr.resize(hC.m);
hC.hc.resize(hC.n);

thread_local std::mt19937 gen(std::random_device{}());
```
Is there a specific reason why you use thread_local here, even though your code is not explicitly multi-threaded?
Issues?
#987
What?
Based on the paper: Johanna Sommer, Matthias Boehm, Alexandre V. Evfimievski, Berthold Reinwald, Peter J. Haas (2019). "MNC: Structure-Exploiting Sparsity Estimation for Matrix Expressions". SIGMOD '19: Proceedings of the 2019 International Conference on Management of Data.
This PR contains: