Conversation
@calcmogul I'd be curious to hear your input here as well!
Force-pushed e7a7b5f to 9664cb3.
CalibrationUncertaintyContext create_calibration_uncertainty_context(
    mrcal_problem_selections_t &problem_selections,
    mrcal_lensmodel_t &lensmodel, const std::span<double> intrinsics,
Read-write spans should use std::span<double>, and read-only spans should use std::span<const double>.
Jt_eigen.setFromTriplets(triplets.begin(), triplets.end());
If you can ensure the triplets are added in sorted column-major order (e.g., (0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1), etc.), you can use setFromSortedTriplets() instead, which should be faster.
// Solve JtJ * A = dF_dbpacked^T using pre-computed factorization
Eigen::MatrixXd rhs = dF_dbpacked.transpose(); // (Nstate, 2)
Eigen::MatrixXd A = context.solver->solve(rhs); // (Nstate, 2)
Does assigning to a dense matrix here result in a lot of fill-in compared to the sparse return value, or is the solution essentially dense already?
I added some solver debug prints to a dataset with 30 pictures of a 9x7 calibration grid. Our dimensions are as follows:
matth@DESKTOP-PNNHDMI:~/mrcal-java$ ./cmake_build/bin/mrcal_jni_test | head
J is 1210x50 with 21610 non-zero
JtJ is 50x50 with 1412 non-zero
dq_db_sparse is 2x50 with 56 non-zero
rhs is 50x2 with 56 non-zero
A is 50x2, with 100 non-zero
I haven't been able to find a case yet where A isn't fully dense. Sadly mrcal's mrcal_unpack_solver_state_vector needs a fully dense matrix passed to it as input, so I can't make dF_dbpacked sparse.
src/mrcal-uncertainty.cpp (outdated)
// Solve JtJ * A = dF_dbpacked^T for ALL points at once
// rhs shape: (Nstate, 2*Npoints)
Eigen::MatrixXd rhs = dF_dbpacked.transpose();
Eigen::MatrixXd A = context.solver->solve(rhs); // (Nstate, 2*Npoints)
Seems like this batching approach is on average the same speed or slightly slower for a 60x40 grid of points with 6 camera snapshots. Baseline is averaging 113ms per solve, and with batching, sitting at 124ms per solve. I've reverted this commit for now.
This reverts commit 50e76b9.
Force-pushed 75eb07f to 21a5440.
I've uploaded the dataset to https://drive.google.com/file/d/17iuFJK7PoEAjRpSRmQZbJMH_fy_egZOj/view?usp=sharing -- it's 204 pictures at 1280x720 of an 8x8 ChArUco board, square size 0.75 inches. Our uncertainty estimation strategy took ~2 seconds while fully pegging a single CPU core. mrcal's Python code (which for this dataset gives a very different answer, which I find concerning) takes 2-3 seconds; I exported a cameramodel using our JSON -> mrcal cameramodel tooling. So we're 30% faster (????) than mrcal? lit.
Is there a way to check which output is correct? Otherwise, the next step would be figuring out where the internal states of the two implementations diverge. The speedup could be caused by an implementation bug.
It looks like the bug is in how PhotonVision exports camera calibration data and creates an mrcal cameramodel for ChArUco boards. When I exported a dataset of 104 chessboard images (see the new MR description), the results are, visually at least, identical.
public double[] framePosesToRtToref() {
PhotonVision/photonvision#2344 has some cursed code that leaks a lot of mrcal-specific code into our monorepo. This function actually isn't used, afaict. We should decide at what level this marshalling-to-raw-lists lives (I'd argue in the JNI) and make sure that happens before merge.
src/mrcal-uncertainty.cpp (outdated)
dq_db.block(0, frame_start, 2, 3) = dq_dframes[frame] / Nboards;
}
// Eigen::IOFormat CommaFmt(Eigen::StreamPrecision, Eigen::DontAlignCols, ",
reminder to delete all remaining debug prints
spacey-sooty left a comment
Naming conventions in mrcal-uncertainty.cpp aren't really consistent. Are we using _ for private functions? If so, why aren't the rest of them named like that?
const Eigen::Matrix<double, 2, Eigen::Dynamic> &bt, // shape (2, Nstate)
cholmod_common *common) {
  int Nstate = bt.cols();
  int Nrhs = bt.rows();
Would it not be clearer to have some template Nstate and have Nrhs just be a constexpr variable outside the class?
How could I template this? The matrix size is dynamic.
Nstate's size doesn't change though no? So Nstate could be a template argument we pass in from the caller side?
// The derivative of q (pixel space location/s) wrt b (state vector)
// at some point this should be a matrix
Eigen::Matrix<double, 2, Eigen::Dynamic, Eigen::RowMajor>
_dq_db_projection_uncertainty(mrcal_point3_t pcam, mrcal_lensmodel_t lensmodel,
Is this not a matrix now? I assume the shape here is the same so similar comment about templating applies
JNIArrayView<mrcal_point3_t> observations(env, jObservations);
JNIArrayView<double> intrinsics(env, jIntrinsics);
JNIArrayView<mrcal_pose_t> rtFrames(env, jRtRefFrames);
Wish some of our dependencies thought more like this lol
Bruh it didn't include the part about RAII. I like that we use RAII here, unlike mrcal and cholmod which are very C-thinky.
# C++
-set(CMAKE_CXX_STANDARD 20)
+set(CMAKE_CXX_STANDARD 23)
WPILib is C++ 20 and I feel like we are everywhere else, will this be fine?
Variable, function, and file naming are largely lifted straight from mrcal itself. I'm currently biasing towards keeping that naming to make cross-referencing easier, but I'm open to other ideas.

We have docs on exporting calibrations to mrcal to run uncertainty analysis tools. At great pain, we are able to port their code to C++. With this hard-coded dataset of 104 pictures of a 10x10 chessboard (from mrgingham), I'm seeing our estimation take HALF as much time as mrcal's Python implementation (!!!!!!!!!!!)