org.opencv.aruco.Aruco.calibrateCameraAruco(List<Mat>, Mat, Mat, Board, Size, Mat, Mat, List<Mat>, List<Mat>, int, TermCriteria)
Use Board::matchImagePoints and cv::solvePnP
org.opencv.aruco.Aruco.calibrateCameraArucoExtended(List<Mat>, Mat, Mat, Board, Size, Mat, Mat, List<Mat>, List<Mat>, Mat, Mat, Mat)
Use Board::matchImagePoints and cv::solvePnP
org.opencv.aruco.Aruco.calibrateCameraArucoExtended(List<Mat>, Mat, Mat, Board, Size, Mat, Mat, List<Mat>, List<Mat>, Mat, Mat, Mat, int)
Use Board::matchImagePoints and cv::solvePnP
org.opencv.aruco.Aruco.calibrateCameraArucoExtended(List<Mat>, Mat, Mat, Board, Size, Mat, Mat, List<Mat>, List<Mat>, Mat, Mat, Mat, int, TermCriteria)
Use Board::matchImagePoints and cv::solvePnP
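For example, the replacement path can be sketched as follows with the Java bindings (the GridBoard layout, dictionary, image file and camera intrinsics below are illustrative assumptions, not part of the deprecated API):

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.calib3d.Calib3d;
    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.objdetect.ArucoDetector;
    import org.opencv.objdetect.Dictionary;
    import org.opencv.objdetect.GridBoard;
    import org.opencv.objdetect.Objdetect;

    public class BoardPoseSketch {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

            // Assumed inputs: an image of a 5x7 marker board and known intrinsics (placeholders here).
            Mat image = Imgcodecs.imread("board.png");
            Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
            MatOfDouble distCoeffs = new MatOfDouble(0, 0, 0, 0, 0);

            Dictionary dictionary = Objdetect.getPredefinedDictionary(Objdetect.DICT_6X6_250);
            GridBoard board = new GridBoard(new Size(5, 7), 0.04f, 0.01f, dictionary);

            // Detect markers with the ArucoDetector class.
            ArucoDetector detector = new ArucoDetector(dictionary);
            List<Mat> corners = new ArrayList<>();
            Mat ids = new Mat();
            detector.detectMarkers(image, corners, ids);

            // Match the detections against the board definition ...
            Mat objPoints = new Mat();
            Mat imgPoints = new Mat();
            board.matchImagePoints(corners, ids, objPoints, imgPoints);

            // ... and estimate the board pose with a generic PnP solver.
            Mat rvec = new Mat();
            Mat tvec = new Mat();
            Calib3d.solvePnP(new MatOfPoint3f(objPoints), new MatOfPoint2f(imgPoints),
                    cameraMatrix, distCoeffs, rvec, tvec);
        }
    }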
org.opencv.aruco.Aruco.calibrateCameraCharucoExtended(List<Mat>, List<Mat>, CharucoBoard, Size, Mat, Mat, List<Mat>, List<Mat>, Mat, Mat, Mat)
Use CharucoBoard::matchImagePoints and cv::solvePnP
org.opencv.aruco.Aruco.calibrateCameraCharucoExtended(List<Mat>, List<Mat>, CharucoBoard, Size, Mat, Mat, List<Mat>, List<Mat>, Mat, Mat, Mat, int)
Use CharucoBoard::matchImagePoints and cv::solvePnP
org.opencv.aruco.Aruco.calibrateCameraCharucoExtended(List<Mat>, List<Mat>, CharucoBoard, Size, Mat, Mat, List<Mat>, List<Mat>, Mat, Mat, Mat, int, TermCriteria)
Use CharucoBoard::matchImagePoints and cv::solvePnP
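A corresponding ChArUco sketch (board geometry, image file and intrinsics are again placeholders; the Java wrapper for matchImagePoints takes a List<Mat>, so the single Mat of ChArUco corners is wrapped in a one-element list here, which is an assumption about the binding):

    import java.util.Arrays;
    import org.opencv.calib3d.Calib3d;
    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.objdetect.CharucoBoard;
    import org.opencv.objdetect.CharucoDetector;
    import org.opencv.objdetect.Objdetect;

    public class CharucoPoseSketch {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

            Mat image = Imgcodecs.imread("charuco.png");
            Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);   // placeholder intrinsics
            MatOfDouble distCoeffs = new MatOfDouble(0, 0, 0, 0, 0);

            CharucoBoard board = new CharucoBoard(new Size(5, 7), 0.04f, 0.02f,
                    Objdetect.getPredefinedDictionary(Objdetect.DICT_6X6_250));
            CharucoDetector detector = new CharucoDetector(board);

            // Interpolated chessboard corners of the ChArUco board.
            Mat charucoCorners = new Mat();
            Mat charucoIds = new Mat();
            detector.detectBoard(image, charucoCorners, charucoIds);

            // Match them against the board model and solve a plain PnP problem.
            Mat objPoints = new Mat();
            Mat imgPoints = new Mat();
            board.matchImagePoints(Arrays.asList(charucoCorners), charucoIds, objPoints, imgPoints);

            Mat rvec = new Mat();
            Mat tvec = new Mat();
            Calib3d.solvePnP(new MatOfPoint3f(objPoints), new MatOfPoint2f(imgPoints),
                    cameraMatrix, distCoeffs, rvec, tvec);

            // Optionally draw the world coordinate system axes for the estimated pose.
            Calib3d.drawFrameAxes(image, cameraMatrix, distCoeffs, rvec, tvec, 0.1f);
        }
    }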
Use CharucoDetector::detectDiamonds
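A brief sketch of the replacement call (the board geometry, dictionary and file name are placeholders):

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.objdetect.CharucoBoard;
    import org.opencv.objdetect.CharucoDetector;
    import org.opencv.objdetect.Objdetect;

    public class DiamondSketch {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
            Mat image = Imgcodecs.imread("diamonds.png");

            // A small ChArUco board describes the diamond geometry (square/marker lengths are placeholders).
            CharucoBoard board = new CharucoBoard(new Size(3, 3), 0.04f, 0.02f,
                    Objdetect.getPredefinedDictionary(Objdetect.DICT_4X4_50));
            CharucoDetector detector = new CharucoDetector(board);

            List<Mat> diamondCorners = new ArrayList<>();
            Mat diamondIds = new Mat();
            detector.detectDiamonds(image, diamondCorners, diamondIds);
            System.out.println("diamonds found: " + diamondCorners.size());
        }
    }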
Use ArucoDetector::detectMarkers
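A minimal detection sketch in which the detector object owns the detection parameters (dictionary, parameters and file name are placeholder assumptions):

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.objdetect.ArucoDetector;
    import org.opencv.objdetect.DetectorParameters;
    import org.opencv.objdetect.Objdetect;

    public class DetectMarkersSketch {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
            Mat image = Imgcodecs.imread("markers.png");

            // Detection parameters travel with the detector instead of a function argument.
            DetectorParameters params = new DetectorParameters();
            ArucoDetector detector = new ArucoDetector(
                    Objdetect.getPredefinedDictionary(Objdetect.DICT_6X6_250), params);

            List<Mat> corners = new ArrayList<>();
            List<Mat> rejected = new ArrayList<>();
            Mat ids = new Mat();
            detector.detectMarkers(image, corners, ids, rejected);
            System.out.println("markers: " + corners.size() + ", rejected candidates: " + rejected.size());
        }
    }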
Use Board::matchImagePoints and cv::solvePnP
Use CharucoBoard::matchImagePoints and cv::solvePnP
SEE: use cv::drawFrameAxes to draw the world coordinate system axes for the object points
Use Board::matchImagePoints
Use CharucoDetector::detectBoard
org.opencv.aruco.Aruco.refineDetectedMarkers(Mat, Board, List<Mat>, Mat, List<Mat>, Mat, Mat, float, float, boolean, Mat)
Use ArucoDetector::refineDetectedMarkers
org.opencv.aruco.Aruco.refineDetectedMarkers(Mat, Board, List<Mat>, Mat, List<Mat>, Mat, Mat, float, float, boolean, Mat, DetectorParameters)
Use ArucoDetector::refineDetectedMarkers
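A sketch of the refinement step with the detector class (board layout, dictionary and file name are placeholder assumptions):

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.objdetect.ArucoDetector;
    import org.opencv.objdetect.Dictionary;
    import org.opencv.objdetect.GridBoard;
    import org.opencv.objdetect.Objdetect;

    public class RefineSketch {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
            Mat image = Imgcodecs.imread("board.png");

            Dictionary dictionary = Objdetect.getPredefinedDictionary(Objdetect.DICT_6X6_250);
            GridBoard board = new GridBoard(new Size(5, 7), 0.04f, 0.01f, dictionary);
            ArucoDetector detector = new ArucoDetector(dictionary);

            List<Mat> corners = new ArrayList<>();
            List<Mat> rejected = new ArrayList<>();
            Mat ids = new Mat();
            detector.detectMarkers(image, corners, ids, rejected);

            // Recover board markers that were rejected in the first pass, using the board layout.
            detector.refineDetectedMarkers(image, board, corners, ids, rejected);
            System.out.println("markers after refinement: " + corners.size());
        }
    }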
Use CharucoBoard::checkCharucoCornersCollinear
Use Mat::convertTo with CV_16F instead.
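A minimal sketch of the replacement, assuming a single-precision input matrix:

    import org.opencv.core.Core;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;

    public class Fp16Sketch {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
            Mat src = Mat.eye(4, 4, CvType.CV_32F);
            Mat dst = new Mat();
            src.convertTo(dst, CvType.CV_16F);   // float32 -> float16, replaces convertFp16
        }
    }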
The current implementation does not correspond to this documentation.
The exact meaning of the return value depends on the threading framework used by the OpenCV library:
- TBB - Unsupported with the current 4.1 TBB release. May be supported in a future release.
- OpenMP - The thread number, within the current team, of the calling thread.
- Concurrency - An ID for the virtual processor that the current context is executing on (0 for the master thread and a unique number for the others, but not necessarily 1, 2, 3, ...).
- GCD - The system ID of the calling thread. Never returns 0 inside a parallel region.
- C= - The index of the current parallel task.
SEE: setNumThreads, getNumThreads
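For illustration, a small sketch that sets the pool size and queries the calling thread's index (the printed value follows the backend-specific rules listed above):

    import org.opencv.core.Core;

    public class ThreadInfoSketch {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
            Core.setNumThreads(4);   // size of OpenCV's thread pool
            System.out.println("num threads = " + Core.getNumThreads());
            // Outside of an OpenCV parallel region the value is backend-dependent (see above).
            System.out.println("thread num  = " + Core.getThreadNum());
        }
    }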
This method will be removed in a future release.
Use int getLayerId(const String &layer) instead.
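A short sketch of the id-based lookup (the ONNX file and layer name are placeholder assumptions):

    import org.opencv.core.Core;
    import org.opencv.dnn.Dnn;
    import org.opencv.dnn.Net;

    public class LayerIdSketch {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
            Net net = Dnn.readNetFromONNX("model.onnx");
            int id = net.getLayerId("conv1");   // numeric id replaces the name-based lookup
            System.out.println("layer id = " + id);
        }
    }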
This function produces the same result as cv::warpPolar(src, dst, src.size(), center, maxRadius, flags).
Transform the source image using the following transformation (See REF: polar_remaps_reference_image "Polar remaps reference image c)"):
\(\begin{array}{l}
dst( \rho , \phi ) = src(x,y) \\
dst.size() \leftarrow src.size()
\end{array}\)
where
\(\begin{array}{l}
I = (dx,dy) = (x - center.x,y - center.y) \\
\rho = Kmag \cdot \texttt{magnitude} (I) ,\\
\phi = Kangle \cdot \texttt{angle} (I)
\end{array}\)
and
\(\begin{array}{l}
Kmag = src.cols / maxRadius \\
Kangle = src.rows / 2\pi
\end{array}\)
This function produces the same result as cv::warpPolar(src, dst, src.size(), center, maxRadius, flags+WARP_POLAR_LOG);
Transform the source image using the following transformation (See REF: polar_remaps_reference_image "Polar remaps reference image d)"):
\(\begin{array}{l}
dst( \rho , \phi ) = src(x,y) \\
dst.size() \leftarrow src.size()
\end{array}\)
where
\(\begin{array}{l}
I = (dx,dy) = (x - center.x,y - center.y) \\
\rho = M \cdot log_e(\texttt{magnitude} (I)) ,\\
\phi = Kangle \cdot \texttt{angle} (I) \\
\end{array}\)
and
\(\begin{array}{l}
M = src.cols / log_e(maxRadius) \\
Kangle = src.rows / 2\pi
\end{array}\)
The function emulates the human "foveal" vision and can be used for fast scale and
rotation-invariant template matching, for object tracking and so forth.
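Both deprecated remaps reduce to cv::warpPolar calls; a sketch with a placeholder input image:

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.Point;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    public class PolarRemapSketch {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
            Mat src = Imgcodecs.imread("input.png");
            Point center = new Point(src.cols() / 2.0, src.rows() / 2.0);
            double maxRadius = Math.min(center.x, center.y);
            int flags = Imgproc.INTER_LINEAR + Imgproc.WARP_FILL_OUTLIERS;

            // Linear polar mapping (replaces linearPolar).
            Mat linear = new Mat();
            Imgproc.warpPolar(src, linear, src.size(), center, maxRadius, flags);

            // Semilog polar mapping (replaces logPolar).
            Mat semilog = new Mat();
            Imgproc.warpPolar(src, semilog, src.size(), center, maxRadius,
                    flags + Imgproc.WARP_POLAR_LOG);
        }
    }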
Use loadOCRHMMClassifier instead.