OpenCvSharp OpenCV Functions of C++ I/F (cv::xxx) The ratio of a circle's circumference to its diameter set up P/Invoke settings only for .NET 2.0/3.0/3.5 Converts the argument to IntPtr.Zero when it is null converts rotation vector to rotation matrix or vice versa using Rodrigues transformation Input rotation vector (3x1 or 1x3) or rotation matrix (3x3). Output rotation matrix (3x3) or rotation vector (3x1 or 1x3), respectively. Optional output Jacobian matrix, 3x9 or 9x3, which is a matrix of partial derivatives of the output array components with respect to the input array components. converts rotation vector to rotation matrix using Rodrigues transformation Input rotation vector (3x1). Output rotation matrix (3x3). Optional output Jacobian matrix, 3x9, which is a matrix of partial derivatives of the output array components with respect to the input array components. converts rotation vector to rotation matrix using Rodrigues transformation Input rotation vector (3x1). Output rotation matrix (3x3). converts rotation matrix to rotation vector using Rodrigues transformation Input rotation matrix (3x3). Output rotation vector (3x1). Optional output Jacobian matrix, 3x9, which is a matrix of partial derivatives of the output array components with respect to the input array components. converts rotation matrix to rotation vector using Rodrigues transformation Input rotation matrix (3x3). Output rotation vector (3x1). computes the best-fit perspective transformation mapping srcPoints to dstPoints. Coordinates of the points in the original plane, a matrix of the type CV_32FC2 Coordinates of the points in the target plane, a matrix of the type CV_32FC2 Method used to compute a homography matrix. Maximum allowed reprojection error to treat a point pair as an inlier (used in the RANSAC method only) Optional output mask set by a robust method ( CV_RANSAC or CV_LMEDS ). Note that the input mask values are ignored. computes the best-fit perspective transformation mapping srcPoints to dstPoints. Coordinates of the points in the original plane Coordinates of the points in the target plane Method used to compute a homography matrix. Maximum allowed reprojection error to treat a point pair as an inlier (used in the RANSAC method only) Optional output mask set by a robust method ( CV_RANSAC or CV_LMEDS ). Note that the input mask values are ignored. Computes RQ decomposition of 3x3 matrix 3x3 input matrix. Output 3x3 upper-triangular matrix. Output 3x3 orthogonal matrix. Optional output 3x3 rotation matrix around x-axis. Optional output 3x3 rotation matrix around y-axis. Optional output 3x3 rotation matrix around z-axis. Computes RQ decomposition of 3x3 matrix 3x3 input matrix. Output 3x3 upper-triangular matrix. Output 3x3 orthogonal matrix. Computes RQ decomposition of 3x3 matrix 3x3 input matrix. Output 3x3 upper-triangular matrix. Output 3x3 orthogonal matrix. Optional output 3x3 rotation matrix around x-axis. Optional output 3x3 rotation matrix around y-axis. Optional output 3x3 rotation matrix around z-axis. Decomposes the projection matrix into camera matrix and the rotation matrix and the translation vector 3x4 input projection matrix P. Output 3x3 camera matrix K. Output 3x3 external rotation matrix R. Output 4x1 translation vector T. Optional 3x3 rotation matrix around x-axis. Optional 3x3 rotation matrix around y-axis. Optional 3x3 rotation matrix around z-axis. Optional three-element vector containing three Euler angles of rotation in degrees.
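As a quick usage sketch of the Rodrigues and FindHomography wrappers described above (a minimal, hedged example: the double[]/Point2d overloads and the HomographyMethods enum member are assumptions and may differ slightly between OpenCvSharp releases):

    using System;
    using System.Collections.Generic;
    using OpenCvSharp;

    // Rodrigues: 3x1 rotation vector -> 3x3 rotation matrix, and back again.
    var rvec = new double[] { 0.1, -0.2, 0.3 };
    Cv2.Rodrigues(rvec, out double[,] rmat);      // rmat is 3x3
    Cv2.Rodrigues(rmat, out double[] rvecBack);   // rvecBack is 3x1

    // FindHomography: best-fit perspective transform, made robust with RANSAC.
    var src = new List<Point2d> { new Point2d(0, 0), new Point2d(1, 0), new Point2d(1, 1), new Point2d(0, 1), new Point2d(0.5, 0.5) };
    var dst = new List<Point2d> { new Point2d(10, 10), new Point2d(110, 12), new Point2d(108, 112), new Point2d(8, 110), new Point2d(60, 61) };
    using var mask = new Mat();                   // optional inlier/outlier mask filled by the robust method
    using Mat h = Cv2.FindHomography(src, dst, HomographyMethods.Ransac, 3.0, mask);
    Console.WriteLine(h.Dump());                  // 3x3 homography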
Decomposes the projection matrix into camera matrix and the rotation matrix and the translation vector 3x4 input projection matrix P. Output 3x3 camera matrix K. Output 3x3 external rotation matrix R. Output 4x1 translation vector T. Optional 3x3 rotation matrix around x-axis. Optional 3x3 rotation matrix around y-axis. Optional 3x3 rotation matrix around z-axis. Optional three-element vector containing three Euler angles of rotation in degrees. Decomposes the projection matrix into camera matrix and the rotation matrix and the translation vector 3x4 input projection matrix P. Output 3x3 camera matrix K. Output 3x3 external rotation matrix R. Output 4x1 translation vector T. computes derivatives of the matrix product w.r.t. each of the multiplied matrix coefficients First multiplied matrix. Second multiplied matrix. First output derivative matrix d(A*B)/dA of size A.rows*B.cols X A.rows*A.cols . Second output derivative matrix d(A*B)/dB of size A.rows*B.cols X B.rows*B.cols . composes 2 [R|t] transformations together. Also computes the derivatives of the result w.r.t. the arguments First rotation vector. First translation vector. Second rotation vector. Second translation vector. Output rotation vector of the superposition. Output translation vector of the superposition. Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively (the same description applies to each of the eight optional Jacobian output parameters). composes 2 [R|t] transformations together. Also computes the derivatives of the result w.r.t. the arguments First rotation vector. First translation vector. Second rotation vector. Second translation vector. Output rotation vector of the superposition. Output translation vector of the superposition. Optional output derivatives of rvec3 or tvec3 with regard to rvec1, rvec2, tvec1 and tvec2, respectively (the same description applies to each of the eight optional Jacobian output parameters). composes 2 [R|t] transformations together. Also computes the derivatives of the result w.r.t. the arguments First rotation vector. First translation vector. Second rotation vector. Second translation vector. Output rotation vector of the superposition.
Output translation vector of the superposition. projects points from the model coordinate space to the image coordinates. Also computes derivatives of the image coordinates w.r.t the intrinsic and extrinsic camera parameters Array of object points, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points in the view. Rotation vector (3x1). Translation vector (3x1). Camera matrix (3x3) Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed. Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel Optional output 2Nx(10 + numDistCoeffs) jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the jacobian are returned via different output parameters. Optional “fixed aspect ratio” parameter. If the parameter is not 0, the function assumes that the aspect ratio (fx/fy) is fixed and correspondingly adjusts the jacobian matrix. projects points from the model coordinate space to the image coordinates. Also computes derivatives of the image coordinates w.r.t the intrinsic and extrinsic camera parameters Array of object points, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points in the view. Rotation vector (3x1). Translation vector (3x1). Camera matrix (3x3) Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed. Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel Optional output 2Nx(10 + numDistCoeffs) jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the jacobian are returned via different output parameters. Optional “fixed aspect ratio” parameter. If the parameter is not 0, the function assumes that the aspect ratio (fx/fy) is fixed and correspondingly adjusts the jacobian matrix. Finds an object pose from 3D-2D point correspondences. Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3f> can be also passed here. Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f> can be also passed here. Input camera matrix Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed. Output rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system. Output translation vector. If true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them. Method for solving a PnP problem: Finds an object pose from 3D-2D point correspondences. Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3f> can be also passed here. 
Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f> can be also passed here. Input camera matrix Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed. Output rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system. Output translation vector. If true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them. Method for solving a PnP problem: computes the camera pose from a few 3D points and the corresponding projections. The outliers are possible. Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. List<Point3f> can be also passed here. Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. List<Point2f> can be also passed here. Input 3x3 camera matrix Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed. Output rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system. Output translation vector. If true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them. Number of iterations. Inlier threshold value used by the RANSAC procedure. The parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier. The probability that the algorithm produces a useful result. Output vector that contains indices of inliers in objectPoints and imagePoints . Method for solving a PnP problem computes the camera pose from a few 3D points and the corresponding projections. The outliers are possible. Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. List<Point3f> can be also passed here. Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. List<Point2f> can be also passed here. Input 3x3 camera matrix Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed. Output rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system. Output translation vector. computes the camera pose from a few 3D points and the corresponding projections. The outliers are possible. Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. List<Point3f> can be also passed here. Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. List<Point2f> can be also passed here. Input 3x3 camera matrix Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed. 
Output rotation vector that, together with tvec , brings points from the model coordinate system to the camera coordinate system. Output translation vector. If true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them. Number of iterations. Inlier threshold value used by the RANSAC procedure. The parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier. The probability that the algorithm produces a useful result. Output vector that contains indices of inliers in objectPoints and imagePoints . Method for solving a PnP problem initializes camera matrix from a few 3D points and the corresponding projections. Vector of vectors (vector<vector<Point3d>>) of the calibration pattern points in the calibration pattern coordinate space. In the old interface all the per-view vectors are concatenated. Vector of vectors (vector<vector<Point2d>>) of the projections of the calibration pattern points. In the old interface all the per-view vectors are concatenated. Image size in pixels used to initialize the principal point. If it is zero or negative, both f_x and f_y are estimated independently. Otherwise, f_x = f_y * aspectRatio . initializes camera matrix from a few 3D points and the corresponding projections. Vector of vectors of the calibration pattern points in the calibration pattern coordinate space. In the old interface all the per-view vectors are concatenated. Vector of vectors of the projections of the calibration pattern points. In the old interface all the per-view vectors are concatenated. Image size in pixels used to initialize the principal point. If it is zero or negative, both f_x and f_y are estimated independently. Otherwise, f_x = f_y * aspectRatio . Finds the positions of internal corners of the chessboard. Source chessboard view. It must be an 8-bit grayscale or color image. Number of inner corners per a chessboard row and column ( patternSize = Size(points_per_row,points_per_column) = Size(columns, rows) ). Output array of detected corners. Various operation flags that can be zero or a combination of the ChessboardFlag values The function returns true if all of the corners are found and they are placed in a certain order (row by row, left to right in every row). Otherwise, if the function fails to find all the corners or reorder them, it returns false. Finds the positions of internal corners of the chessboard. Source chessboard view. It must be an 8-bit grayscale or color image. Number of inner corners per a chessboard row and column ( patternSize = Size(points_per_row,points_per_column) = Size(columns, rows) ). Output array of detected corners. Various operation flags that can be zero or a combination of the ChessboardFlag values The function returns true if all of the corners are found and they are placed in a certain order (row by row, left to right in every row). Otherwise, if the function fails to find all the corners or reorder them, it returns false. finds subpixel-accurate positions of the chessboard corners finds subpixel-accurate positions of the chessboard corners Renders the detected chessboard corners. Destination image. It must be an 8-bit color image. Number of inner corners per a chessboard row and column (patternSize = cv::Size(points_per_row,points_per_column)). Array of detected corners, the output of findChessboardCorners. Parameter indicating whether the complete board was found or not.
The return value of findChessboardCorners() should be passed here. Renders the detected chessboard corners. Destination image. It must be an 8-bit color image. Number of inner corners per a chessboard row and column (patternSize = cv::Size(points_per_row,points_per_column)). Array of detected corners, the output of findChessboardCorners. Parameter indicating whether the complete board was found or not. The return value of findChessboardCorners() should be passed here. Finds centers in the grid of circles. grid view of input circles; it must be an 8-bit grayscale or color image. number of circles per row and column ( patternSize = Size(points_per_row, points_per_column) ). output array of detected centers. various operation flags that can be one of the FindCirclesGridFlag values feature detector that finds blobs like dark circles on light background. Finds centers in the grid of circles. grid view of input circles; it must be an 8-bit grayscale or color image. number of circles per row and column ( patternSize = Size(points_per_row, points_per_column) ). output array of detected centers. various operation flags that can be one of the FindCirclesGridFlag values feature detector that finds blobs like dark circles on light background. finds intrinsic and extrinsic camera parameters from several views of a known calibration pattern. In the new interface it is a vector of vectors of calibration pattern points in the calibration pattern coordinate space. The outer vector contains as many elements as the number of the pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. Although, it is possible to use partially occluded patterns, or even different patterns in different views. Then, the vectors will be different. The points are 3D, but since they are in a pattern coordinate system, then, if the rig is planar, it may make sense to put the model to a XY coordinate plane so that Z-coordinate of each input object point is 0. In the old interface all the vectors of object points from different views are concatenated together. In the new interface it is a vector of vectors of the projections of calibration pattern points. imagePoints.Count() and objectPoints.Count() and imagePoints[i].Count() must be equal to objectPoints[i].Count() for each i. Size of the image used only to initialize the intrinsic camera matrix. Output 3x3 floating-point camera matrix. If CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized before calling the function. Output vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. Output vector of rotation vectors (see Rodrigues() ) estimated for each pattern view. That is, each k-th rotation vector together with the corresponding k-th translation vector (see the next output parameter description) brings the calibration pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, a real position of the calibration pattern in the k-th pattern view (k=0.. M -1) Output vector of translation vectors estimated for each pattern view. Different flags that may be zero or a combination of the CalibrationFlag values Termination criteria for the iterative optimization algorithm. finds intrinsic and extrinsic camera parameters from several views of a known calibration pattern (its parameters are described below, after the following sketch).
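A short sketch of the chessboard workflow documented above (detect, optionally refine, then draw); the input file name is a placeholder, and the ChessboardFlags/CriteriaTypes member names are assumptions that may vary slightly by OpenCvSharp version:

    using OpenCvSharp;

    // Detect the inner corners of a 9x6 chessboard in a grayscale view.
    using var gray = Cv2.ImRead("board.png", ImreadModes.Grayscale);    // placeholder input image
    var patternSize = new Size(9, 6);                                   // inner corners per row and per column
    bool found = Cv2.FindChessboardCorners(
        gray, patternSize, out Point2f[] corners,
        ChessboardFlags.AdaptiveThresh | ChessboardFlags.NormalizeImage);

    if (found)
    {
        // Optional sub-pixel refinement before drawing or calibration.
        corners = Cv2.CornerSubPix(gray, corners, new Size(11, 11), new Size(-1, -1),
            new TermCriteria(CriteriaTypes.Eps | CriteriaTypes.MaxIter, 30, 0.01));

        using var color = gray.CvtColor(ColorConversionCodes.GRAY2BGR);
        Cv2.DrawChessboardCorners(color, patternSize, corners, found);  // corners rendered row by row
        Cv2.ImWrite("board_corners.png", color);
    }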
In the new interface it is a vector of vectors of calibration pattern points in the calibration pattern coordinate space. The outer vector contains as many elements as the number of the pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. Although, it is possible to use partially occluded patterns, or even different patterns in different views. Then, the vectors will be different. The points are 3D, but since they are in a pattern coordinate system, then, if the rig is planar, it may make sense to put the model to a XY coordinate plane so that Z-coordinate of each input object point is 0. In the old interface all the vectors of object points from different views are concatenated together. In the new interface it is a vector of vectors of the projections of calibration pattern points. imagePoints.Count() and objectPoints.Count() and imagePoints[i].Count() must be equal to objectPoints[i].Count() for each i. Size of the image used only to initialize the intrinsic camera matrix. Output 3x3 floating-point camera matrix. If CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized before calling the function. Output vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. Output vector of rotation vectors (see Rodrigues() ) estimated for each pattern view. That is, each k-th rotation vector together with the corresponding k-th translation vector (see the next output parameter description) brings the calibration pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, a real position of the calibration pattern in the k-th pattern view (k=0.. M -1) Output vector of translation vectors estimated for each pattern view. Different flags that may be zero or a combination of the CalibrationFlag values Termination criteria for the iterative optimization algorithm. computes several useful camera characteristics from the camera matrix, camera frame resolution and the physical sensor size. Input camera matrix that can be estimated by calibrateCamera() or stereoCalibrate() . Input image size in pixels. Physical width of the sensor. Physical height of the sensor. Output field of view in degrees along the horizontal sensor axis. Output field of view in degrees along the vertical sensor axis. Focal length of the lens in mm. Principal point in pixels. fy / fx computes several useful camera characteristics from the camera matrix, camera frame resolution and the physical sensor size. Input camera matrix that can be estimated by calibrateCamera() or stereoCalibrate() . Input image size in pixels. Physical width of the sensor. Physical height of the sensor. Output field of view in degrees along the horizontal sensor axis. Output field of view in degrees along the vertical sensor axis. Focal length of the lens in mm. Principal point in pixels. fy / fx finds intrinsic and extrinsic parameters of a stereo camera Vector of vectors of the calibration pattern points. Vector of vectors of the projections of the calibration pattern points, observed by the first camera. Vector of vectors of the projections of the calibration pattern points, observed by the second camera. Input/output first camera matrix Input/output vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. The output vector length depends on the flags. 
Input/output second camera matrix. The parameter is similar to cameraMatrix1 . Input/output lens distortion coefficients for the second camera. The parameter is similar to distCoeffs1 . Size of the image used only to initialize intrinsic camera matrix. Output rotation matrix between the 1st and the 2nd camera coordinate systems. Output translation vector between the coordinate systems of the cameras. Output essential matrix. Output fundamental matrix. Termination criteria for the iterative optimization algorithm. Different flags that may be zero or a combination of the CalibrationFlag values finds intrinsic and extrinsic parameters of a stereo camera Vector of vectors of the calibration pattern points. Vector of vectors of the projections of the calibration pattern points, observed by the first camera. Vector of vectors of the projections of the calibration pattern points, observed by the second camera. Input/output first camera matrix Input/output vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. The output vector length depends on the flags. Input/output second camera matrix. The parameter is similar to cameraMatrix1 . Input/output lens distortion coefficients for the second camera. The parameter is similar to distCoeffs1 . Size of the image used only to initialize intrinsic camera matrix. Output rotation matrix between the 1st and the 2nd camera coordinate systems. Output translation vector between the coordinate systems of the cameras. Output essential matrix. Output fundamental matrix. Termination criteria for the iterative optimization algorithm. Different flags that may be zero or a combination of the CalibrationFlag values computes the rectification transformation for a stereo camera from its intrinsic and extrinsic parameters First camera matrix. First camera distortion parameters. Second camera matrix. Second camera distortion parameters. Size of the image used for stereo calibration. Rotation matrix between the coordinate systems of the first and the second cameras. Translation vector between coordinate systems of the cameras. Output 3x3 rectification transform (rotation matrix) for the first camera. Output 3x3 rectification transform (rotation matrix) for the second camera. Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera. Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera. Output 4x4 disparity-to-depth mapping matrix (see reprojectImageTo3D() ). Operation flags that may be zero or CV_CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area. Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Obviously, any intermediate value yields an intermediate result between those two extreme cases. New image resolution after rectification. 
The same size should be passed to initUndistortRectifyMap(). When (0,0) is passed (default), it is set to the original imageSize . Setting it to larger value can help you preserve details in the original image, especially when there is a big radial distortion. computes the rectification transformation for a stereo camera from its intrinsic and extrinsic parameters First camera matrix. First camera distortion parameters. Second camera matrix. Second camera distortion parameters. Size of the image used for stereo calibration. Rotation matrix between the coordinate systems of the first and the second cameras. Translation vector between coordinate systems of the cameras. Output 3x3 rectification transform (rotation matrix) for the first camera. Output 3x3 rectification transform (rotation matrix) for the second camera. Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera. Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera. Output 4x4 disparity-to-depth mapping matrix (see reprojectImageTo3D() ). Operation flags that may be zero or CV_CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area. Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Obviously, any intermediate value yields an intermediate result between those two extreme cases. New image resolution after rectification. The same size should be passed to initUndistortRectifyMap(). When (0,0) is passed (default), it is set to the original imageSize . Setting it to larger value can help you preserve details in the original image, especially when there is a big radial distortion. Optional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller. Optional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller. computes the rectification transformation for a stereo camera from its intrinsic and extrinsic parameters First camera matrix. First camera distortion parameters. Second camera matrix. Second camera distortion parameters. Size of the image used for stereo calibration. Rotation matrix between the coordinate systems of the first and the second cameras. Translation vector between coordinate systems of the cameras. Output 3x3 rectification transform (rotation matrix) for the first camera. Output 3x3 rectification transform (rotation matrix) for the second camera. Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera. Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera. Output 4x4 disparity-to-depth mapping matrix (see reprojectImageTo3D() ). 
Operation flags that may be zero or CV_CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area. Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Obviously, any intermediate value yields an intermediate result between those two extreme cases. New image resolution after rectification. The same size should be passed to initUndistortRectifyMap(). When (0,0) is passed (default), it is set to the original imageSize . Setting it to larger value can help you preserve details in the original image, especially when there is a big radial distortion. computes the rectification transformation for a stereo camera from its intrinsic and extrinsic parameters First camera matrix. First camera distortion parameters. Second camera matrix. Second camera distortion parameters. Size of the image used for stereo calibration. Rotation matrix between the coordinate systems of the first and the second cameras. Translation vector between coordinate systems of the cameras. Output 3x3 rectification transform (rotation matrix) for the first camera. Output 3x3 rectification transform (rotation matrix) for the second camera. Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera. Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera. Output 4x4 disparity-to-depth mapping matrix (see reprojectImageTo3D() ). Operation flags that may be zero or CV_CALIB_ZERO_DISPARITY. If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area. Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Obviously, any intermediate value yields an intermediate result between those two extreme cases. New image resolution after rectification. The same size should be passed to initUndistortRectifyMap(). When (0,0) is passed (default), it is set to the original imageSize . Setting it to larger value can help you preserve details in the original image, especially when there is a big radial distortion. Optional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller. 
Optional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller. computes the rectification transformation for an uncalibrated stereo camera (zero distortion is assumed) Array of feature points in the first image. The corresponding points in the second image. The same formats as in findFundamentalMat() are supported. Input fundamental matrix. It can be computed from the same set of point pairs using findFundamentalMat() . Size of the image. Output rectification homography matrix for the first image. Output rectification homography matrix for the second image. Optional threshold used to filter out the outliers. If the parameter is greater than zero, all the point pairs that do not comply with the epipolar geometry (that is, the points for which |points2[i]^T * F * points1[i]| > threshold ) are rejected prior to computing the homographies. Otherwise, all the points are considered inliers. computes the rectification transformation for an uncalibrated stereo camera (zero distortion is assumed) Array of feature points in the first image. The corresponding points in the second image. The same formats as in findFundamentalMat() are supported. Input fundamental matrix. It can be computed from the same set of point pairs using findFundamentalMat() . Size of the image. Output rectification homography matrix for the first image. Output rectification homography matrix for the second image. Optional threshold used to filter out the outliers. If the parameter is greater than zero, all the point pairs that do not comply with the epipolar geometry (that is, the points for which |points2[i]^T * F * points1[i]| > threshold ) are rejected prior to computing the homographies. Otherwise, all the points are considered inliers. computes the rectification transformations for 3-head camera, where all the heads are on the same line. Returns the new camera matrix based on the free scaling parameter. Input camera matrix. Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the array is null, the zero distortion coefficients are assumed. Original image size. Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). Image size after rectification. By default,it is set to imageSize . Optional output rectangle that outlines all-good-pixels region in the undistorted image. See roi1, roi2 description in stereoRectify() . Optional flag that indicates whether in the new camera matrix the principal point should be at the image center or not. By default, the principal point is chosen to best fit a subset of the source image (determined by alpha) to the corrected image. optimal new camera matrix Returns the new camera matrix based on the free scaling parameter. Input camera matrix. Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the array is null, the zero distortion coefficients are assumed. Original image size. Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). Image size after rectification. By default,it is set to imageSize . Optional output rectangle that outlines all-good-pixels region in the undistorted image. See roi1, roi2 description in stereoRectify() . 
Optional flag that indicates whether in the new camera matrix the principal point should be at the image center or not. By default, the principal point is chosen to best fit a subset of the source image (determined by alpha) to the corrected image. optimal new camera matrix converts point coordinates from normal pixel coordinates to homogeneous coordinates ((x,y)->(x,y,1)) Input vector of N-dimensional points. Output vector of N+1-dimensional points. converts point coordinates from normal pixel coordinates to homogeneous coordinates ((x,y)->(x,y,1)) Input vector of N-dimensional points. Output vector of N+1-dimensional points. converts point coordinates from normal pixel coordinates to homogeneous coordinates ((x,y)->(x,y,1)) Input vector of N-dimensional points. Output vector of N+1-dimensional points. converts point coordinates from homogeneous to normal pixel coordinates ((x,y,z)->(x/z, y/z)) Input vector of N-dimensional points. Output vector of N-1-dimensional points. converts point coordinates from homogeneous to normal pixel coordinates ((x,y,z)->(x/z, y/z)) Input vector of N-dimensional points. Output vector of N-1-dimensional points. converts point coordinates from homogeneous to normal pixel coordinates ((x,y,z)->(x/z, y/z)) Input vector of N-dimensional points. Output vector of N-1-dimensional points. Converts points to/from homogeneous coordinates. Input array or vector of 2D, 3D, or 4D points. Output vector of 2D, 3D, or 4D points. Calculates a fundamental matrix from the corresponding points in two images. Array of N points from the first image. The point coordinates should be floating-point (single or double precision). Array of the second image points of the same size and format as points1 . Method for computing a fundamental matrix. Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise. Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct. Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods. For other methods, it is set to all 1’s. fundamental matrix Calculates a fundamental matrix from the corresponding points in two images. Array of N points from the first image. The point coordinates should be floating-point (single or double precision). Array of the second image points of the same size and format as points1 . Method for computing a fundamental matrix. Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise. Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct. Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods. For other methods, it is set to all 1’s. 
fundamental matrix For points in an image of a stereo pair, computes the corresponding epilines in the other image. Input points. Nx1 or 1xN matrix of type CV_32FC2 or CV_64FC2. Index of the image (1 or 2) that contains the points. Fundamental matrix that can be estimated using findFundamentalMat() or stereoRectify() . Output vector of the epipolar lines corresponding to the points in the other image. Each line ax + by + c=0 is encoded by 3 numbers (a, b, c) . For points in an image of a stereo pair, computes the corresponding epilines in the other image. Input points. Nx1 or 1xN matrix of type CV_32FC2 or CV_64FC2. Index of the image (1 or 2) that contains the points. Fundamental matrix that can be estimated using findFundamentalMat() or stereoRectify() . Output vector of the epipolar lines corresponding to the points in the other image. Each line ax + by + c=0 is encoded by 3 numbers (a, b, c) . For points in an image of a stereo pair, computes the corresponding epilines in the other image. Input points. Nx1 or 1xN matrix of type CV_32FC2 or CV_64FC2. Index of the image (1 or 2) that contains the points. Fundamental matrix that can be estimated using findFundamentalMat() or stereoRectify() . Output vector of the epipolar lines corresponding to the points in the other image. Each line ax + by + c=0 is encoded by 3 numbers (a, b, c) . Reconstructs points by triangulation. 3x4 projection matrix of the first camera. 3x4 projection matrix of the second camera. 2xN array of feature points in the first image. In the case of the C++ version, it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1. 2xN array of corresponding points in the second image. In the case of the C++ version, it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1. 4xN array of reconstructed points in homogeneous coordinates. Reconstructs points by triangulation. 3x4 projection matrix of the first camera. 3x4 projection matrix of the second camera. 2xN array of feature points in the first image. In the case of the C++ version, it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1. 2xN array of corresponding points in the second image. In the case of the C++ version, it can also be a vector of feature points or a two-channel matrix of size 1xN or Nx1. 4xN array of reconstructed points in homogeneous coordinates. Refines coordinates of corresponding points. 3x3 fundamental matrix. 1xN array containing the first set of points. 1xN array containing the second set of points. The optimized points1. The optimized points2. Refines coordinates of corresponding points. 3x3 fundamental matrix. 1xN array containing the first set of points. 1xN array containing the second set of points. The optimized points1. The optimized points2. filters off speckles (small regions of incorrectly computed disparity) The input 16-bit signed disparity image The disparity value used to paint-off the speckles The maximum speckle size to consider it a speckle. Larger blobs are not affected by the algorithm Maximum difference between neighbor disparity pixels to put them into the same blob. Note that since StereoBM, StereoSGBM and possibly other algorithms return a fixed-point disparity map, where disparity values are multiplied by 16, this scale factor should be taken into account when specifying this parameter value. The optional temporary buffer to avoid memory allocation within the function.
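An illustrative sketch of the triangulation wrapper documented above; the projection matrices and point pairs are made-up placeholders, and the Mat-based TriangulatePoints overload is an assumption about the binding in use:

    using System;
    using OpenCvSharp;

    // Two 3x4 projection matrices P = K[R|t] for a rectified stereo pair (placeholder values).
    using var p1 = new Mat(3, 4, MatType.CV_64FC1, new double[] { 500, 0, 320, 0, 0, 500, 240, 0, 0, 0, 1, 0 });
    using var p2 = new Mat(3, 4, MatType.CV_64FC1, new double[] { 500, 0, 320, -50, 0, 500, 240, 0, 0, 0, 1, 0 });

    // 2xN arrays of corresponding image points (here N = 2): row 0 holds x, row 1 holds y.
    using var pts1 = new Mat(2, 2, MatType.CV_64FC1, new double[] { 300, 350, 200, 220 });
    using var pts2 = new Mat(2, 2, MatType.CV_64FC1, new double[] { 280, 330, 200, 220 });

    using var points4D = new Mat();
    Cv2.TriangulatePoints(p1, p2, pts1, pts2, points4D);   // 4xN homogeneous coordinates

    // Divide each column by its fourth (W) component to get Euclidean X, Y, Z.
    using var h = new Mat();
    points4D.ConvertTo(h, MatType.CV_64FC1);
    for (int i = 0; i < h.Cols; i++)
    {
        double w = h.At<double>(3, i);
        Console.WriteLine($"P{i}: ({h.At<double>(0, i) / w:F2}, {h.At<double>(1, i) / w:F2}, {h.At<double>(2, i) / w:F2})");
    }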
computes valid disparity ROI from the valid ROIs of the rectified images (that are returned by cv::stereoRectify()) validates disparity using the left-right check. The matrix "cost" should be computed by the stereo correspondence algorithm reprojects disparity image to 3D: (x,y,d)->(X,Y,Z) using the matrix Q returned by cv::stereoRectify Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit floating-point disparity image. Output 3-channel floating-point image of the same size as disparity. Each element of _3dImage(x,y) contains 3D coordinates of the point (x,y) computed from the disparity map. 4 x 4 perspective transformation matrix that can be obtained with stereoRectify(). Indicates whether the function should handle missing values (i.e. points where the disparity was not computed). If handleMissingValues=true, then pixels with the minimal disparity that corresponds to the outliers (see StereoBM::operator() ) are transformed to 3D points with a very large Z value (currently set to 10000). The optional output array depth. If it is -1, the output image will have CV_32F depth. ddepth can also be set to CV_16S, CV_32S or CV_32F. Computes an optimal affine transformation between two 3D point sets. First input 3D point set. Second input 3D point set. Output 3D affine transformation matrix 3 x 4 . Output vector indicating which points are inliers. Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation. Calculates the Sampson Distance between two points. first homogeneous 2d point second homogeneous 2d point F fundamental matrix The computed Sampson distance. https://github.com/opencv/opencv/blob/master/modules/calib3d/src/fundam.cpp#L1109 Calculates the Sampson Distance between two points. first homogeneous 2d point second homogeneous 2d point F fundamental matrix The computed Sampson distance. https://github.com/opencv/opencv/blob/master/modules/calib3d/src/fundam.cpp#L1109 Computes an optimal affine transformation between two 2D point sets. First input 2D point set containing (X,Y). Second input 2D point set containing (x,y). Output vector indicating which points are inliers (1-inlier, 0-outlier). Robust method used to compute transformation. Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC. The maximum number of robust method iterations. Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation. Maximum number of iterations of refining algorithm (Levenberg-Marquardt). Passing 0 will disable refining, so the output matrix will be the output of the robust method. Output 2D affine transformation matrix 2x3 or empty matrix if transformation could not be estimated. Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets. First input 2D point set. Second input 2D point set. Output vector indicating which points are inliers. Robust method used to compute transformation. (The remaining parameters of this function are described after the following sketch.)
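A minimal sketch of the robust 2D affine estimation described just above (point values are placeholders; the InputArray.Create helper and the default RANSAC settings of this overload are assumptions):

    using System;
    using OpenCvSharp;

    // Two corresponding 2D point sets related by roughly a translation plus a slight rotation.
    var from = new[] { new Point2f(0, 0), new Point2f(100, 0), new Point2f(100, 100), new Point2f(0, 100) };
    var to   = new[] { new Point2f(10, 20), new Point2f(110, 22), new Point2f(108, 122), new Point2f(8, 120) };

    using var inliers = new Mat();   // per-point flags: 1 = inlier, 0 = outlier
    using Mat affine = Cv2.EstimateAffine2D(InputArray.Create(from), InputArray.Create(to), inliers);

    if (!affine.Empty())
        Console.WriteLine(affine.Dump());   // 2x3 affine matrix [A | b]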
Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC. The maximum number of robust method iterations. Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation. Output 2D affine transformation (4 degrees of freedom) matrix 2x3 or empty matrix if transformation could not be estimated. Decompose a homography matrix to rotation(s), translation(s) and plane normal(s). The input homography matrix between two images. The input intrinsic camera calibration matrix. Array of rotation matrices. Array of translation matrices. Array of plane normal matrices. Filters homography decompositions based on additional information. Vector of rotation matrices. Vector of plane normal matrices. Vector of (rectified) visible reference points before the homography is applied Vector of (rectified) visible reference points after the homography is applied Vector of int indices representing the viable solution set after filtering optional Mat/Vector of 8u type representing the mask for the inliers as given by the findHomography function corrects lens distortion for the given camera matrix and distortion coefficients Input (distorted) image. Output (corrected) image that has the same size and type as src . Input camera matrix Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed. Camera matrix of the distorted image. By default, it is the same as cameraMatrix but you may additionally scale and shift the result by using a different matrix. initializes maps for cv::remap() to correct lens distortion and optionally rectify the image initializes maps for cv::remap() for wide-angle returns the default new camera matrix (by default it is the same as cameraMatrix unless centerPrincipalPoint=true) Input camera matrix. Camera view image size in pixels. Location of the principal point in the new camera matrix. The parameter indicates whether this location should be at the image center or not. the camera matrix that is either an exact copy of the input cameraMatrix (when centerPrincipalPoint=false), or the modified one (when centerPrincipalPoint=true). Computes the ideal point coordinates from the observed point coordinates. Observed point coordinates, 1xN or Nx1 2-channel (CV_32FC2 or CV_64FC2). Output ideal point coordinates after undistortion and reverse perspective transformation. If matrix P is identity or omitted, dst will contain normalized point coordinates. Camera matrix Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed. Rectification transformation in the object space (3x3 matrix). R1 or R2 computed by stereoRectify() can be passed here. If the matrix is empty, the identity transformation is used. New camera matrix (3x3) or new projection matrix (3x4). P1 or P2 computed by stereoRectify() can be passed here. If the matrix is empty, the identity new camera matrix is used. The methods in this class use a so-called fisheye camera model. Projects points using fisheye model.
The function computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters. Optionally, the function computes Jacobians - matrices of partial derivatives of image points coordinates (as functions of all the input parameters) with respect to the particular parameters, intrinsic and/or extrinsic. Array of object points, 1xN/Nx1 3-channel (or vector<Point3f> ), where N is the number of points in the view. Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, or vector<Point2f>. Camera matrix Input vector of distortion coefficients The skew coefficient. Optional output 2Nx15 jacobian matrix of derivatives of image points with respect to components of the focal lengths, coordinates of the principal point, distortion coefficients, rotation vector, translation vector, and the skew. In the old interface different components of the jacobian are returned via different output parameters. Distorts 2D points using fisheye model. Array of object points, 1xN/Nx1 2-channel (or vector<Point2f> ), where N is the number of points in the view. Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f> . Camera matrix Input vector of distortion coefficients The skew coefficient. Undistorts 2D points using fisheye model Array of object points, 1xN/Nx1 2-channel (or vector<Point2f> ), where N is the number of points in the view. Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f> . Camera matrix Input vector of distortion coefficients (k_1, k_2, k_3, k_4). Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel New camera matrix (3x3) or new projection matrix (3x4) Computes undistortion and rectification maps for image transform by cv::remap(). If D is empty zero distortion is used, if R or P is empty identity matrices are used. Camera matrix Input vector of distortion coefficients (k_1, k_2, k_3, k_4). Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel New camera matrix (3x3) or new projection matrix (3x4) Undistorted image size. Type of the first output map that can be CV_32FC1 or CV_16SC2 . See convertMaps() for details. The first output map. The second output map. Transforms an image to compensate for fisheye lens distortion. image with fisheye lens distortion. Output image with compensated fisheye lens distortion. Camera matrix Input vector of distortion coefficients (k_1, k_2, k_3, k_4). Camera matrix of the distorted image. By default, it is the identity matrix but you may additionally scale and shift the result by using a different matrix. Estimates new camera matrix for undistortion or rectification. Camera matrix Input vector of distortion coefficients (k_1, k_2, k_3, k_4). Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel New camera matrix (3x3) or new projection matrix (3x4) Sets the new focal length in range between the min focal length and the max focal length. Balance is in the range [0, 1]. Divisor for new focal length. Performs camera calibration vector of vectors of calibration pattern points in the calibration pattern coordinate space. vector of vectors of the projections of calibration pattern points. imagePoints.size() and objectPoints.size() and imagePoints[i].size() must be equal to objectPoints[i].size() for each i. Size of the image used only to initialize the intrinsic camera matrix.
Output 3x3 floating-point camera matrix Output vector of distortion coefficients (k_1, k_2, k_3, k_4). Output vector of rotation vectors (see Rodrigues ) estimated for each pattern view. That is, each k-th rotation vector together with the corresponding k-th translation vector (see the next output parameter description) brings the calibration pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, a real position of the calibration pattern in the k-th pattern view (k = 0..M-1). Output vector of translation vectors estimated for each pattern view. Different flags that may be zero or a combination of flag values Termination criteria for the iterative optimization algorithm. Stereo rectification for fisheye camera model First camera matrix. First camera distortion parameters. Second camera matrix. Second camera distortion parameters. Size of the image used for stereo calibration. Rotation matrix between the coordinate systems of the first and the second cameras. Translation vector between coordinate systems of the cameras. Output 3x3 rectification transform (rotation matrix) for the first camera. Output 3x3 rectification transform (rotation matrix) for the second camera. Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera. Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera. Output 4x4 disparity-to-depth mapping matrix (see reprojectImageTo3D ). Operation flags that may be zero or CALIB_ZERO_DISPARITY . If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area. New image resolution after rectification. The same size should be passed to initUndistortRectifyMap (see the stereo_calib.cpp sample in OpenCV samples directory). When (0,0) is passed (default), it is set to the original imageSize. Setting it to a larger value can help you preserve details in the original image, especially when there is a big radial distortion. Sets the new focal length in range between the min focal length and the max focal length. Balance is in the range [0, 1]. Divisor for new focal length. Performs stereo calibration Vector of vectors of the calibration pattern points. Vector of vectors of the projections of the calibration pattern points, observed by the first camera. Vector of vectors of the projections of the calibration pattern points, observed by the second camera. Input/output first camera matrix Input/output vector of distortion coefficients (k_1, k_2, k_3, k_4) of 4 elements. Input/output second camera matrix. The parameter is similar to K1 . Input/output lens distortion coefficients for the second camera. The parameter is similar to D1. Size of the image used only to initialize intrinsic camera matrix. Output rotation matrix between the 1st and the 2nd camera coordinate systems. Output translation vector between the coordinate systems of the cameras. Different flags that may be zero or a combination of the FishEyeCalibrationFlags values Termination criteria for the iterative optimization algorithm. OpenCV will try to set the number of threads for the next parallel region.
If threads == 0, OpenCV will disable threading optimizations and run all its functions sequentially. Passing threads < 0 will reset the number of threads to the system default. This function must be called outside of a parallel region. OpenCV will try to run its functions with the specified number of threads, but behaviour differs between frameworks: - `TBB` - User-defined parallel constructions will run with the same number of threads, if another is not specified. If the user later creates their own scheduler, OpenCV will use it. - `OpenMP` - No special defined behaviour. - `Concurrency` - If threads == 1, OpenCV will disable threading optimizations and run its functions sequentially. - `GCD` - Supports only values <= 0. - `C=` - No special defined behaviour. Number of threads used by OpenCV. Returns the number of threads used by OpenCV for parallel regions. Always returns 1 if OpenCV is built without threading support. The exact meaning of the return value depends on the threading framework used by the OpenCV library: - `TBB` - The number of threads that OpenCV will try to use for parallel regions. If there is any tbb::thread_scheduler_init in user code conflicting with OpenCV, then the function returns the default number of threads used by the TBB library. - `OpenMP` - An upper bound on the number of threads that could be used to form a new team. - `Concurrency` - The number of threads that OpenCV will try to use for parallel regions. - `GCD` - Unsupported; returns the GCD thread pool limit (512) for compatibility. - `C=` - The number of threads that OpenCV will try to use for parallel regions, if setNumThreads was called before with threads > 0, otherwise returns the number of logical CPUs available for the process. Returns the index of the currently executed thread within the current parallel region. Always returns 0 if called outside of a parallel region. @deprecated The current implementation does not correspond to this documentation. The exact meaning of the return value depends on the threading framework used by the OpenCV library: - `TBB` - Unsupported with the current 4.1 TBB release. May be supported in the future. - `OpenMP` - The thread number, within the current team, of the calling thread. - `Concurrency` - An ID for the virtual processor that the current context is executing on (0 for the master thread and a unique number for others, but not necessarily 1, 2, 3, ...). - `GCD` - The system calling thread's ID. Never returns 0 inside a parallel region. - `C=` - The index of the current parallel task. Returns the full configuration-time cmake output. The returned value is raw cmake output including version control system revision, compiler version, compiler flags, enabled modules and third party libraries, etc. Output format depends on target architecture. Returns the library version string. For example "3.4.1-dev". Returns major library version Returns minor library version Returns revision field of the library version Returns the number of ticks. The function returns the number of ticks after a certain event (for example, when the machine was turned on). It can be used to initialize RNG or to measure a function execution time by reading the tick count before and after the function call. Returns the number of ticks per second. The function returns the number of ticks per second. That is, the following code computes the execution time in seconds: Returns the number of CPU ticks. The function returns the current number of CPU ticks on some architectures (such as x86, x64, PowerPC).
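As an aside, the tick-count timing idiom referenced above ("the following code computes the execution time in seconds") can be written in OpenCvSharp roughly as follows. This is a minimal sketch assuming the Cv2.GetTickCount/Cv2.GetTickFrequency wrappers; the measured workload (a Gaussian blur of a blank image) is a placeholder.

```csharp
using OpenCvSharp;

class TickTimingExample
{
    static void Main()
    {
        using var src = new Mat(480, 640, MatType.CV_8UC3, Scalar.All(0));
        using var dst = new Mat();

        // Read the tick count before and after the measured call,
        // then divide the difference by the tick frequency to get seconds.
        long t0 = Cv2.GetTickCount();
        Cv2.GaussianBlur(src, dst, new Size(9, 9), 2.0);
        long t1 = Cv2.GetTickCount();

        double seconds = (t1 - t0) / Cv2.GetTickFrequency();
        System.Console.WriteLine($"GaussianBlur took {seconds * 1000.0:F3} ms");
    }
}
```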
On other platforms the function is equivalent to getTickCount. It can also be used for very accurate time measurements, as well as for RNG initialization. Note that on multi-CPU systems a thread from which getCPUTickCount is called can be suspended and resumed on another CPU with its own counter. So, theoretically (and practically) subsequent calls to the function do not necessarily return monotonically increasing values. Also, since a modern CPU varies the CPU frequency depending on the load, the number of CPU clocks spent in some code cannot be directly converted to time units. Therefore, getTickCount is generally a preferable solution for measuring execution time. Returns true if the specified feature is supported by the host hardware. The function returns true if the host hardware supports the specified feature. When the user calls setUseOptimized(false), the subsequent calls to checkHardwareSupport() will return false until setUseOptimized(true) is called. This way the user can dynamically switch the optimized code in OpenCV on and off. The feature of interest, one of cv::CpuFeatures Returns feature name by ID. Returns an empty string if the feature is not defined Returns list of CPU features enabled during compilation. The returned value is a string containing a space-separated list of CPU features with the following markers: - no markers - baseline features - prefix `*` - features enabled in dispatcher - suffix `?` - features enabled but not available in HW `SSE SSE2 SSE3 *SSE4.1 *SSE4.2 *FP16 *AVX *AVX2 *AVX512-SKX?` Returns the number of logical CPUs available for the process. Turns on/off available optimization. The function turns on or off the optimized code in OpenCV. Some optimizations cannot be enabled or disabled, but, for example, most of the SSE code in OpenCV can be temporarily turned on or off this way. Returns the current optimization status. The function returns the current optimization status, which is controlled by cv::setUseOptimized(). Aligns buffer size by the certain number of bytes This small inline function aligns a buffer size by the certain number of bytes by enlarging it. Sets/resets the break-on-error mode. When the break-on-error mode is set, the default error handler issues a hardware exception, which can make debugging more convenient. the previous state Computes absolute value of each matrix element matrix Computes absolute value of each matrix element matrix expression Computes the per-element sum of two arrays or an array and a scalar. The first source array The second source array. It must have the same size and same type as src1 The destination array; it will have the same size and same type as src1 The optional operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. [By default this is null] Calculates per-element difference between two arrays or array and a scalar The first source array The second source array. It must have the same size and same type as src1 The destination array; it will have the same size and same type as src1 The optional operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. [By default this is null] Calculates per-element difference between two arrays or array and a scalar The first source array The second source array.
It must have the same size and same type as src1 The destination array; it will have the same size and same type as src1 The optional operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. [By default this is null] Calculates per-element difference between two arrays or array and a scalar The first source array The second source array. It must have the same size and same type as src1 The destination array; it will have the same size and same type as src1 The optional operation mask, 8-bit single channel array; specifies elements of the destination array to be changed. [By default this is null] Calculates the per-element scaled product of two arrays The first source array The second source array of the same size and the same type as src1 The destination array; will have the same size and the same type as src1 The optional scale factor. [By default this is 1] Performs per-element division of two arrays or a scalar by an array. The first source array The second source array; should have the same size and same type as src1 The destination array; will have the same size and same type as src2 Scale factor [By default this is 1] Performs per-element division of two arrays or a scalar by an array. Scale factor The first source array The destination array; will have the same size and same type as src2 adds scaled array to another one (dst = alpha*src1 + src2) computes weighted sum of two arrays (dst = alpha*src1 + beta*src2 + gamma) Computes the source location of an extrapolated pixel. 0-based coordinate of the extrapolated pixel along one of the axes, likely <0 or >= len Length of the array along the corresponding axis. Border type, one of the BorderTypes, except for BORDER_TRANSPARENT and BORDER_ISOLATED. When borderType==BORDER_CONSTANT, the function always returns -1, regardless Forms a border around the image The source image The destination image; will have the same type as src and the size Size(src.cols+left+right, src.rows+top+bottom) Specifies how many pixels in each direction from the source image rectangle to extrapolate Specifies how many pixels in each direction from the source image rectangle to extrapolate Specifies how many pixels in each direction from the source image rectangle to extrapolate Specifies how many pixels in each direction from the source image rectangle to extrapolate The border type The border value if borderType == Constant Scales, computes absolute values and converts the result to 8-bit. The source array The destination array The optional scale factor. [By default this is 1] The optional delta added to the scaled values. [By default this is 0] transforms array of numbers using a lookup table: dst(i)=lut(src(i)) Source array of 8-bit elements Look-up table of 256 elements. In the case of a multi-channel source array, the table should either have a single channel (in this case the same table is used for all channels) or the same number of channels as in the source array Destination array; will have the same size and the same number of channels as src, and the same depth as lut transforms array of numbers using a lookup table: dst(i)=lut(src(i)) Source array of 8-bit elements Look-up table of 256 elements.
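A short, hedged sketch of the arithmetic helpers described above (addWeighted and convertScaleAbs), assuming two same-sized 8-bit colour inputs; the file names are placeholders.

```csharp
using OpenCvSharp;

class BlendExample
{
    static void Main()
    {
        // Two same-size, same-type inputs (placeholder file names).
        using var a = Cv2.ImRead("a.png", ImreadModes.Color);
        using var b = Cv2.ImRead("b.png", ImreadModes.Color);

        // blended = 0.7*a + 0.3*b + 0
        using var blended = new Mat();
        Cv2.AddWeighted(a, 0.7, b, 0.3, 0, blended);

        // Scale, take absolute values and convert to 8-bit:
        // scaled = saturate_cast<uchar>(|blended * 1.5 + 10|)
        using var scaled = new Mat();
        Cv2.ConvertScaleAbs(blended, scaled, 1.5, 10);

        Cv2.ImWrite("blended.png", scaled);
    }
}
```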
In the case of multi-channel source array, the table should either have a single channel (in this case the same table is used for all channels) or the same number of channels as in the source array Destination array; will have the same size and the same number of channels as src, and the same depth as lut computes sum of array elements The source array; must have 1 to 4 channels computes the number of nonzero array elements Single-channel array number of non-zero elements in mtx returns the list of locations of non-zero pixels computes mean value of selected array elements The source array; it should have 1 to 4 channels (so that the result can be stored in Scalar) The optional operation mask computes mean value and standard deviation of all or selected array elements The source array; it should have 1 to 4 channels (so that the results can be stored in Scalar's) The output parameter: computed mean value The output parameter: computed standard deviation The optional operation mask computes mean value and standard deviation of all or selected array elements The source array; it should have 1 to 4 channels (so that the results can be stored in Scalar's) The output parameter: computed mean value The output parameter: computed standard deviation The optional operation mask Calculates absolute array norm, absolute difference norm, or relative difference norm. The first source array Type of the norm The optional operation mask computes norm of selected part of the difference between two arrays The first source array The second source array of the same size and the same type as src1 Type of the norm The optional operation mask naive nearest neighbor finder scales and shifts array elements so that either the specified norm (alpha) or the minimum (alpha) and maximum (beta) array values get the specified values The source array The destination array; will have the same size as src The norm value to normalize to or the lower range boundary in the case of range normalization The upper range boundary in the case of range normalization; not used for norm normalization The normalization type When the parameter is negative, the destination array will have the same type as src, otherwise it will have the same number of channels as src and the depth =CV_MAT_DEPTH(rtype) The optional operation mask finds global minimum and maximum array elements and returns their values and their locations The source single-channel array Pointer to returned minimum value Pointer to returned maximum value finds global minimum and maximum array elements and returns their values and their locations The source single-channel array Pointer to returned minimum location Pointer to returned maximum location finds global minimum and maximum array elements and returns their values and their locations The source single-channel array Pointer to returned minimum value Pointer to returned maximum value Pointer to returned minimum location Pointer to returned maximum location The optional mask used to select a sub-array finds global minimum and maximum array elements and returns their values and their locations The source single-channel array Pointer to returned minimum value Pointer to returned maximum value finds global minimum and maximum array elements and returns their values and their locations The source single-channel array finds global minimum and maximum array elements and returns their values and their locations The source single-channel array Pointer to returned minimum value Pointer to returned maximum value transforms 2D 
matrix to 1D row or column vector by taking sum, minimum, maximum or mean value over all the rows The source 2D matrix The destination vector. Its size and type is defined by dim and dtype parameters The dimension index along which the matrix is reduced. 0 means that the matrix is reduced to a single row and 1 means that the matrix is reduced to a single column When it is negative, the destination vector will have the same type as the source matrix, otherwise, its type will be CV_MAKE_TYPE(CV_MAT_DEPTH(dtype), mtx.channels()) makes multi-channel array out of several single-channel arrays Copies each plane of a multi-channel array to a dedicated array The source multi-channel array The destination array or vector of arrays; The number of arrays must match mtx.channels() . The arrays themselves will be reallocated if needed Copies each plane of a multi-channel array to a dedicated array The source multi-channel array The number of arrays must match mtx.channels() . The arrays themselves will be reallocated if needed copies selected channels from the input arrays to the selected channels of the output arrays extracts a single channel from src (coi is 0-based index) inserts a single channel to dst (coi is 0-based index) reverses the order of the rows, columns or both in a matrix The source array The destination array; will have the same size and same type as src Specifies how to flip the array: 0 means flipping around the x-axis, positive (e.g., 1) means flipping around y-axis, and negative (e.g., -1) means flipping around both axes. See also the discussion below for the formulas. replicates the input matrix the specified number of times in the horizontal and/or vertical direction The source array to replicate How many times the src is repeated along the vertical axis How many times the src is repeated along the horizontal axis The destination array; will have the same type as src replicates the input matrix the specified number of times in the horizontal and/or vertical direction The source array to replicate How many times the src is repeated along the vertical axis How many times the src is repeated along the horizontal axis computes bitwise conjunction of the two arrays (dst = src1 & src2) computes bitwise disjunction of the two arrays (dst = src1 | src2) computes bitwise exclusive-or of the two arrays (dst = src1 ^ src2) inverts each bit of array (dst = ~src) computes element-wise absolute difference of two arrays (dst = abs(src1 - src2)) set mask elements for those array elements which are within the element-specific bounding box (dst = lowerb <= src && src < upperb) The first source array The inclusive lower boundary array of the same size and type as src The exclusive upper boundary array of the same size and type as src The destination array, will have the same size as src and CV_8U type set mask elements for those array elements which are within the element-specific bounding box (dst = lowerb <= src && src < upperb) The first source array The inclusive lower boundary array of the same size and type as src The exclusive upper boundary array of the same size and type as src The destination array, will have the same size as src and CV_8U type Performs the per-element comparison of two arrays or an array and scalar value. first input array or a scalar; when it is an array, it must have a single channel. second input array or a scalar; when it is an array, it must have a single channel. 
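The channel and reduction utilities above (split, merge, flip, minMaxLoc) can be combined as in the following sketch; the input file name is a placeholder and an 8-bit 3-channel image is assumed.

```csharp
using OpenCvSharp;

class ChannelStatsExample
{
    static void Main()
    {
        using var src = Cv2.ImRead("input.png", ImreadModes.Color);

        // Split the 3-channel image into separate planes.
        Mat[] planes = Cv2.Split(src);

        // Locate the global minimum and maximum of the first channel.
        Cv2.MinMaxLoc(planes[0], out double minVal, out double maxVal,
                      out Point minLoc, out Point maxLoc);
        System.Console.WriteLine($"min={minVal} at {minLoc}, max={maxVal} at {maxLoc}");

        // Reassemble the planes and mirror the result around the y-axis.
        using var merged = new Mat();
        Cv2.Merge(planes, merged);

        using var flipped = new Mat();
        Cv2.Flip(merged, flipped, FlipMode.Y);

        foreach (var p in planes) p.Dispose();
    }
}
```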
output array of type ref CV_8U that has the same size and the same number of channels as the input arrays. a flag, that specifies correspondence between the arrays (cv::CmpTypes) computes per-element minimum of two arrays (dst = min(src1, src2)) computes per-element minimum of two arrays (dst = min(src1, src2)) computes per-element minimum of array and scalar (dst = min(src1, src2)) computes per-element maximum of two arrays (dst = max(src1, src2)) computes per-element maximum of two arrays (dst = max(src1, src2)) computes per-element maximum of array and scalar (dst = max(src1, src2)) computes square root of each matrix element (dst = src**0.5) The source floating-point array The destination array; will have the same size and the same type as src raises the input matrix elements to the specified power (b = a**power) The source array The exponent of power The destination array; will have the same size and the same type as src computes exponent of each matrix element (dst = e**src) The source array The destination array; will have the same size and same type as src computes natural logarithm of absolute value of each matrix element: dst = log(abs(src)) The source array The destination array; will have the same size and same type as src computes cube root of the argument computes the angle in degrees (0..360) of the vector (x,y) converts polar coordinates to Cartesian converts Cartesian coordinates to polar computes angle (angle(i)) of each (x(i), y(i)) vector computes magnitude (magnitude(i)) of each (x(i), y(i)) vector checks that each matrix element is within the specified range. The array to check The flag indicating whether the functions quietly return false when the array elements are out of range, or they throw an exception. checks that each matrix element is within the specified range. The array to check The flag indicating whether the functions quietly return false when the array elements are out of range, or they throw an exception. The optional output parameter, where the position of the first outlier is stored. The inclusive lower boundary of valid values range The exclusive upper boundary of valid values range converts NaN's to the given number implements generalized matrix product algorithm GEMM from BLAS multiplies matrix by its transposition from the left or from the right The source matrix The destination square matrix Specifies the multiplication ordering; see the description below The optional delta matrix, subtracted from src before the multiplication. When the matrix is empty ( delta=Mat() ), it’s assumed to be zero, i.e. nothing is subtracted, otherwise if it has the same size as src, then it’s simply subtracted, otherwise it is "repeated" to cover the full src and then subtracted. Type of the delta matrix, when it's not empty, must be the same as the type of created destination matrix, see the rtype description The optional scale factor for the matrix product When it’s negative, the destination matrix will have the same type as src . 
Otherwise, it will have type=CV_MAT_DEPTH(rtype), which should be either CV_32F or CV_64F transposes the matrix The source array The destination array of the same type as src performs affine transformation of each element of multi-channel input matrix The source array; must have as many channels (1 to 4) as mtx.cols or mtx.cols-1 The destination array; will have the same size and depth as src and as many channels as mtx.rows The transformation matrix performs perspective transformation of each element of multi-channel input matrix The source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed The destination array; it will have the same size and same type as src 3x3 or 4x4 transformation matrix performs perspective transformation of each element of multi-channel input matrix The source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed 3x3 or 4x4 transformation matrix The destination array; it will have the same size and same type as src performs perspective transformation of each element of multi-channel input matrix The source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed 3x3 or 4x4 transformation matrix The destination array; it will have the same size and same type as src performs perspective transformation of each element of multi-channel input matrix The source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed 3x3 or 4x4 transformation matrix The destination array; it will have the same size and same type as src performs perspective transformation of each element of multi-channel input matrix The source two-channel or three-channel floating-point array; each element is 2D/3D vector to be transformed 3x3 or 4x4 transformation matrix The destination array; it will have the same size and same type as src extends the symmetrical matrix from the lower half or from the upper half Input-output floating-point square matrix If true, the lower half is copied to the upper half, otherwise the upper half is copied to the lower half initializes scaled identity matrix The matrix to initialize (not necessarily square) The value to assign to the diagonal elements computes determinant of a square matrix The input matrix; must have CV_32FC1 or CV_64FC1 type and square size determinant of the specified matrix. computes trace of a matrix The source matrix computes inverse or pseudo-inverse matrix The source floating-point MxN matrix The destination matrix; will have NxM size and the same type as src The inversion method solves linear system or a least-square problem Solve given (non-integer) linear programming problem using the Simplex Algorithm (Simplex Method). This row-vector corresponds to \f$c\f$ in the LP problem formulation (see above). It should contain 32- or 64-bit floating point numbers.As a convenience, column-vector may be also submitted, in the latter case it is understood to correspond to \f$c^T\f$. `m`-by-`n+1` matrix, whose rightmost column corresponds to \f$b\f$ in formulation above and the remaining to \f$A\f$. It should containt 32- or 64-bit floating point numbers. The solution will be returned here as a column-vector - it corresponds to \f$c\f$ in the formulation above.It will contain 64-bit floating point numbers. 
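As a concrete illustration of solve and invert described above, here is a minimal sketch with a made-up 2x2 system; it assumes the Cv2.Solve/Cv2.Invert wrappers and the DecompTypes.LU method.

```csharp
using OpenCvSharp;

class SolveExample
{
    static void Main()
    {
        // A * x = b with a small, well-conditioned 2x2 system (illustrative values).
        using var A = new Mat(2, 2, MatType.CV_64FC1);
        A.Set(0, 0, 3.0); A.Set(0, 1, 1.0);
        A.Set(1, 0, 1.0); A.Set(1, 1, 2.0);

        using var b = new Mat(2, 1, MatType.CV_64FC1);
        b.Set(0, 0, 9.0);
        b.Set(1, 0, 8.0);

        // Solve the system directly ...
        using var x = new Mat();
        bool ok = Cv2.Solve(A, b, x, DecompTypes.LU);

        // ... and compute the explicit inverse as well.
        using var invA = new Mat();
        double invertible = Cv2.Invert(A, invA, DecompTypes.LU);

        System.Console.WriteLine(
            $"solved={ok}, non-singular={invertible != 0}, x=({x.At<double>(0)}, {x.At<double>(1)})");
    }
}
```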
sorts independently each matrix row or each matrix column The source single-channel array The destination array of the same size and the same type as src The operation flags, a combination of the SortFlag values sorts independently each matrix row or each matrix column The source single-channel array The destination integer array of the same size as src The operation flags, a combination of SortFlag values finds real roots of a cubic polynomial The equation coefficients, an array of 3 or 4 elements The destination array of real roots which will have 1 or 3 elements finds real and complex roots of a polynomial The array of polynomial coefficients The destination (complex) array of roots The maximum number of iterations the algorithm does Computes eigenvalues and eigenvectors of a symmetric matrix. The input matrix; must have CV_32FC1 or CV_64FC1 type, square size and be symmetric: src^T == src The output vector of eigenvalues of the same type as src; The eigenvalues are stored in the descending order. The output matrix of eigenvectors; It will have the same size and the same type as src; The eigenvectors are stored as subsequent matrix rows, in the same order as the corresponding eigenvalues computes covariation matrix of a set of samples computes covariation matrix of a set of samples computes covariation matrix of a set of samples computes covariation matrix of a set of samples PCA of the supplied dataset. input samples stored as the matrix rows or as the matrix columns. optional mean value; if the matrix is empty (noArray()), the mean is computed from the data. maximum number of components that PCA should retain; by default, all the components are retained. computes SVD of src performs back substitution for the previously computed SVD computes Mahalanobis distance between two vectors: sqrt((v1-v2)'*icovar*(v1-v2)), where icovar is the inverse covariation matrix Performs a forward Discrete Fourier transform of 1D or 2D floating-point array. The source array, real or complex The destination array, which size and type depends on the flags Transformation flags, a combination of the DftFlag2 values When the parameter != 0, the function assumes that only the first nonzeroRows rows of the input array ( DFT_INVERSE is not set) or only the first nonzeroRows of the output array ( DFT_INVERSE is set) contain non-zeros, thus the function can handle the rest of the rows more efficiently and thus save some time. This technique is very useful for computing array cross-correlation or convolution using DFT Performs an inverse Discrete Fourier transform of 1D or 2D floating-point array. The source array, real or complex The destination array, which size and type depends on the flags Transformation flags, a combination of the DftFlag2 values When the parameter != 0, the function assumes that only the first nonzeroRows rows of the input array ( DFT_INVERSE is not set) or only the first nonzeroRows of the output array ( DFT_INVERSE is set) contain non-zeros, thus the function can handle the rest of the rows more efficiently and thus save some time. 
This technique is very useful for computing array cross-correlation or convolution using DFT Performs forward or inverse 1D or 2D Discrete Cosine Transformation The source floating-point array The destination array; will have the same size and same type as src Transformation flags, a combination of DctFlag2 values Performs inverse 1D or 2D Discrete Cosine Transformation The source floating-point array The destination array; will have the same size and same type as src Transformation flags, a combination of DctFlag2 values computes element-wise product of the two Fourier spectrums. The second spectrum can optionally be conjugated before the multiplication computes the minimal vector size vecsize1 >= vecsize so that the dft() of the vector of length vecsize1 can be computed efficiently clusters the input data using k-Means algorithm returns the thread-local Random number generator fills array with uniformly-distributed random numbers from the range [low, high) The output array of random numbers. The array must be pre-allocated and have 1 to 4 channels The inclusive lower boundary of the generated random numbers The exclusive upper boundary of the generated random numbers fills array with uniformly-distributed random numbers from the range [low, high) The output array of random numbers. The array must be pre-allocated and have 1 to 4 channels The inclusive lower boundary of the generated random numbers The exclusive upper boundary of the generated random numbers fills array with normally-distributed random numbers with the specified mean and the standard deviation The output array of random numbers. The array must be pre-allocated and have 1 to 4 channels The mean value (expectation) of the generated random numbers The standard deviation of the generated random numbers fills array with normally-distributed random numbers with the specified mean and the standard deviation The output array of random numbers. The array must be pre-allocated and have 1 to 4 channels The mean value (expectation) of the generated random numbers The standard deviation of the generated random numbers shuffles the input array elements The input/output numerical 1D array The scale factor that determines the number of random swap operations. The optional random number generator used for shuffling. If it is null, theRng() is used instead. Equivalence predicate (a boolean function of two arguments). The predicate returns true when the elements are certainly in the same class, and returns false if they may or may not be in the same class. Splits an element set into equivalency classes. Consider using GroupBy of Linq instead. Set of elements stored as a vector. Output vector of labels. It contains as many elements as vec. Each label labels[i] is a 0-based cluster index of vec[i] . Equivalence predicate (a boolean function of two arguments). The predicate returns true when the elements are certainly in the same class, and returns false if they may or may not be in the same class. Returns the number of installed CUDA-enabled devices. Use this function before any other GPU functions calls. If OpenCV is compiled without GPU support, this function returns 0. Returns the current device index set by SetDevice() or initialized by default. Sets a device and initializes it for the current thread. System index of a GPU device starting with 0. Explicitly destroys and cleans up all resources associated with the current device in the current process. Any subsequent API call to this device will reinitialize the device. 
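A hedged sketch of the DFT-related entries above (getOptimalDFTSize, copyMakeBorder padding, dft); the signal is filled with random values purely for illustration, and the Cv2.Randu/Cv2.Dft wrapper names are assumptions about the binding.

```csharp
using OpenCvSharp;

class DftExample
{
    static void Main()
    {
        // A single-row, single-channel float "signal" with arbitrary values.
        using var signal = new Mat(1, 100, MatType.CV_32FC1);
        Cv2.Randu(signal, Scalar.All(0), Scalar.All(1));

        // Pad to a length the FFT implementation handles efficiently.
        int optimal = Cv2.GetOptimalDFTSize(signal.Cols);
        using var padded = new Mat();
        Cv2.CopyMakeBorder(signal, padded, 0, 0, 0, optimal - signal.Cols,
                           BorderTypes.Constant, Scalar.All(0));

        // Forward DFT producing a complex (2-channel) spectrum.
        using var spectrum = new Mat();
        Cv2.Dft(padded, spectrum, DftFlags.ComplexOutput);

        System.Console.WriteLine(
            $"padded length: {padded.Cols}, spectrum channels: {spectrum.Channels()}");
    }
}
```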
Page-locks the matrix m memory and maps it for the device(s) Unmaps the memory of matrix m, and makes it pageable again. Creates continuous GPU matrix Number of rows in a 2D array. Number of columns in a 2D array. Array type. Creates continuous GPU matrix Number of rows in a 2D array. Number of columns in a 2D array. Array type. Creates continuous GPU matrix Number of rows and columns in a 2D array. Array type. Creates continuous GPU matrix Number of rows and columns in a 2D array. Array type. Ensures that the size of the given matrix is not less than (rows, cols) and that the matrix type matches the specified one Number of rows in a 2D array. Number of columns in a 2D array. Array type. Ensures that the size of the given matrix is not less than (rows, cols) and that the matrix type matches the specified one Number of rows and columns in a 2D array. Array type. Detects corners using the FAST algorithm grayscale image where keypoints (corners) are detected. threshold on difference between intensity of the central pixel and pixels of a circle around this pixel. if true, non-maximum suppression is applied to detected corners (keypoints). keypoints detected on the image. Detects corners using the FAST algorithm grayscale image where keypoints (corners) are detected. threshold on difference between intensity of the central pixel and pixels of a circle around this pixel. if true, non-maximum suppression is applied to detected corners (keypoints). one of the three neighborhoods as defined in the paper keypoints detected on the image. Detects corners using the AGAST algorithm grayscale image where keypoints (corners) are detected. threshold on difference between intensity of the central pixel and pixels of a circle around this pixel. if true, non-maximum suppression is applied to detected corners (keypoints). one of the four neighborhoods as defined in the paper keypoints detected on the image. Draw keypoints. Draws matches of keypoints from two images on output image. Draws matches of keypoints from two images on output image. recallPrecisionCurve Creates a window. Name of the window in the window caption that may be used as a window identifier. Creates a window. Name of the window in the window caption that may be used as a window identifier. Flags of the window. Currently the only supported flag is CV_WINDOW_AUTOSIZE. If this is set, the window size is automatically adjusted to fit the displayed image (see imshow ), and the user cannot change the window size manually. Destroys the specified window. Destroys all of the HighGUI windows. Waits for a pressed key. Delay in milliseconds. 0 is the special value that means ”forever” Returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed. Waits for a pressed key. Similar to waitKey, but returns the full key code. The key code is implementation specific and depends on the used backend: QT/GTK/Win32/etc Delay in milliseconds. 0 is the special value that means ”forever” Returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed. Resizes window to the specified size Window name The new window width The new window height Moves window to the specified position Window name The new x-coordinate of the window The new y-coordinate of the window Changes parameters of a window dynamically. Name of the window. Window property to set. New value of the window property. Updates window title Provides parameters of a window. Name of the window. Window property to retrieve.
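A minimal HighGUI round trip using the window functions described above; the file name is a placeholder, and WindowFlags.AutoSize is assumed to be the wrapper's equivalent of CV_WINDOW_AUTOSIZE.

```csharp
using OpenCvSharp;

class WindowExample
{
    static void Main()
    {
        using var img = Cv2.ImRead("input.png", ImreadModes.Color);

        // Create a named window, show the image and wait for any key press.
        Cv2.NamedWindow("preview", WindowFlags.AutoSize);
        Cv2.ImShow("preview", img);
        Cv2.WaitKey(0);          // 0 = wait forever

        Cv2.DestroyAllWindows();
    }
}
```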
Sets the callback function for mouse events occurring within the specified window. Name of the window. Reference to the function to be called every time a mouse event occurs in the specified window. Gets the mouse-wheel motion delta, when handling mouse-wheel events cv::EVENT_MOUSEWHEEL and cv::EVENT_MOUSEHWHEEL. For regular mice with a scroll-wheel, delta will be a multiple of 120. The value 120 corresponds to a one notch rotation of the wheel or the threshold for action to be taken and one such action should occur for each delta. Some high-precision mice with higher-resolution freely-rotating wheels may generate smaller values. For cv::EVENT_MOUSEWHEEL positive and negative values mean forward and backward scrolling, respectively. For cv::EVENT_MOUSEHWHEEL, where available, positive and negative values mean right and left scrolling, respectively. The mouse callback flags parameter. Creates a trackbar and attaches it to the specified window. The function createTrackbar creates a trackbar (a slider or range control) with the specified name and range, assigns a variable value to be a position synchronized with the trackbar and specifies the callback function onChange to be called on the trackbar position change. The created trackbar is displayed in the specified window winname. Name of the created trackbar. Name of the window that will be used as a parent of the created trackbar. Optional pointer to an integer variable whose value reflects the position of the slider. Upon creation, the slider position is defined by this variable. Maximal position of the slider. The minimal position is always 0. Pointer to the function to be called every time the slider changes position. This function should be prototyped as void Foo(int, void*); , where the first parameter is the trackbar position and the second parameter is the user data (see the next parameter). If the callback is the NULL pointer, no callbacks are called, but only value is updated. User data that is passed as is to the callback. It can be used to handle trackbar events without using global variables. Returns the trackbar position. Name of the trackbar. Name of the window that is the parent of the trackbar. trackbar position Sets the trackbar position. Name of the trackbar. Name of the window that is the parent of trackbar. New position. Sets the trackbar maximum position. The function sets the maximum position of the specified trackbar in the specified window. Name of the trackbar. Name of the window that is the parent of trackbar. New maximum position. Sets the trackbar minimum position. The function sets the minimum position of the specified trackbar in the specified window. Name of the trackbar. Name of the window that is the parent of trackbar. New minimum position. Displays the image in the specified window Name of the window. Image to be shown. Loads an image from a file. Name of file to be loaded. Specifies color type of the loaded image Loads a multi-page image from a file. Name of file to be loaded. A vector of Mat objects holding each page, if more than one. Flag that can take values of cv::ImreadModes, default with IMREAD_ANYCOLOR. Saves an image to a specified file. Name of the file. Image to be saved. Format-specific save parameters encoded as pairs Saves an image to a specified file. Name of the file. Image to be saved. Format-specific save parameters encoded as pairs Reads image from the specified buffer in memory.
The input array of vector of bytes. The same flags as in imread Reads image from the specified buffer in memory. The input array of vector of bytes. The same flags as in imread Compresses the image and stores it in the memory buffer The file extension that defines the output format The image to be written Output buffer resized to fit the compressed image. Format-specific parameters. Compresses the image and stores it in the memory buffer The file extension that defines the output format The image to be written Output buffer resized to fit the compressed image. Format-specific parameters. Returns Gaussian filter coefficients. Aperture size. It should be odd and positive. Gaussian standard deviation. If it is non-positive, it is computed from ksize as `sigma = 0.3*((ksize-1)*0.5 - 1) + 0.8`. Type of filter coefficients. It can be CV_32F or CV_64F. Returns filter coefficients for computing spatial image derivatives. Output matrix of row filter coefficients. It has the type ktype. Output matrix of column filter coefficients. It has the type ktype. Derivative order in respect of x. Derivative order in respect of y. Aperture size. It can be CV_SCHARR, 1, 3, 5, or 7. Flag indicating whether to normalize (scale down) the filter coefficients or not. Theoretically, the coefficients should have the denominator \f$=2^{ksize*2-dx-dy-2}\f$. If you are going to filter floating-point images, you are likely to use the normalized kernels. But if you compute derivatives of an 8-bit image, store the results in a 16-bit image, and wish to preserve all the fractional bits, you may want to set normalize = false. Type of filter coefficients. It can be CV_32f or CV_64F. Smoothes image using median filter The source 1-, 3- or 4-channel image. When ksize is 3 or 5, the image depth should be CV_8U , CV_16U or CV_32F. For larger aperture sizes it can only be CV_8U The destination array; will have the same size and the same type as src The aperture linear size. It must be odd and more than 1, i.e. 3, 5, 7 ... Blurs an image using a Gaussian filter. input image; the image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F. output image of the same size and type as src. Gaussian kernel size. ksize.width and ksize.height can differ but they both must be positive and odd. Or, they can be zero’s and then they are computed from sigma* . Gaussian kernel standard deviation in X direction. Gaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to sigmaX, if both sigmas are zeros, they are computed from ksize.width and ksize.height, respectively (see getGaussianKernel() for details); to fully control the result regardless of possible future modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY. pixel extrapolation method Applies bilateral filter to the image The source 8-bit or floating-point, 1-channel or 3-channel image The destination image; will have the same size and the same type as src The diameter of each pixel neighborhood, that is used during filtering. If it is non-positive, it's computed from sigmaSpace Filter sigma in the color space. Larger value of the parameter means that farther colors within the pixel neighborhood will be mixed together, resulting in larger areas of semi-equal color Filter sigma in the coordinate space. 
Larger value of the parameter means that farther pixels will influence each other (as long as their colors are close enough; see sigmaColor). When d>0, it specifies the neighborhood size regardless of sigmaSpace, otherwise d is proportional to sigmaSpace Smoothes image using box filter The source image The destination image; will have the same size and the same type as src The smoothing kernel size The anchor point. The default value Point(-1,-1) means that the anchor is at the kernel center Indicates whether the kernel is normalized by its area or not The border mode used to extrapolate pixels outside of the image Smoothes image using normalized box filter The source image The destination image; will have the same size and the same type as src The smoothing kernel size The anchor point. The default value Point(-1,-1) means that the anchor is at the kernel center The border mode used to extrapolate pixels outside of the image Convolves an image with the kernel The source image The destination image. It will have the same size and the same number of channels as src The desired depth of the destination image. If it is negative, it will be the same as src.depth() Convolution kernel (or rather a correlation kernel), a single-channel floating point matrix. If you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually The anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that the anchor is at the kernel center The optional value added to the filtered pixels before storing them in dst The pixel extrapolation method Applies separable linear filter to an image The source image The destination image; will have the same size and the same number of channels as src The destination image depth The coefficients for filtering each row The coefficients for filtering each column The anchor position within the kernel; The default value (-1,-1) means that the anchor is at the kernel center The value added to the filtered results before storing them The pixel extrapolation method Calculates the first, second, third or mixed image derivatives using an extended Sobel operator The source image The destination image; will have the same size and the same number of channels as src The destination image depth Order of the derivative x Order of the derivative y Size of the extended Sobel kernel, must be 1, 3, 5 or 7 The optional scale factor for the computed derivative values (by default, no scaling is applied) The optional delta value, added to the results prior to storing them in dst The pixel extrapolation method Calculates the first x- or y- image derivative using Scharr operator The source image The destination image; will have the same size and the same number of channels as src The destination image depth Order of the derivative x Order of the derivative y The optional scale factor for the computed derivative values (by default, no scaling is applied) The optional delta value, added to the results prior to storing them in dst The pixel extrapolation method Calculates the Laplacian of an image Source image Destination image; will have the same size and the same number of channels as src The desired depth of the destination image The aperture size used to compute the second-derivative filters The optional scale factor for the computed Laplacian values (by default, no scaling is applied) The optional
delta value, added to the results prior to storing them in dst The pixel extrapolation method Finds edges in an image using Canny algorithm. Single-channel 8-bit input image The output edge map. It will have the same size and the same type as image The first threshold for the hysteresis procedure The second threshold for the hysteresis procedure Aperture size for the Sobel operator [By default this is ApertureSize.Size3] Indicates, whether the more accurate L2 norm should be used to compute the image gradient magnitude (true), or a faster default L1 norm is enough (false). [By default this is false] computes both eigenvalues and the eigenvectors of 2x2 derivative covariation matrix at each pixel. The output is stored as 6-channel matrix. computes another complex cornerness criteria at each pixel adjusts the corner locations with sub-pixel accuracy to maximize the certain cornerness criteria Input image. Initial coordinates of the input corners and refined coordinates provided for output. Half of the side length of the search window. Half of the size of the dead region in the middle of the search zone over which the summation in the formula below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such a size. Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after criteria.maxCount iterations or when the corner position moves by less than criteria.epsilon on some iteration. finds the strong enough corners where the cornerMinEigenVal() or cornerHarris() report the local maxima Input 8-bit or floating-point 32-bit, single-channel image. Maximum number of corners to return. If there are more corners than are found, the strongest of them is returned. Parameter characterizing the minimal accepted quality of image corners. The parameter value is multiplied by the best corner quality measure, which is the minimal eigenvalue or the Harris function response (see cornerHarris() ). The corners with the quality measure less than the product are rejected. For example, if the best corner has the quality measure = 1500, and the qualityLevel=0.01, then all the corners with the quality measure less than 15 are rejected. Minimum possible Euclidean distance between the returned corners. Optional region of interest. If the image is not empty (it needs to have the type CV_8UC1 and the same size as image ), it specifies the region in which the corners are detected. Size of an average block for computing a derivative covariation matrix over each pixel neighborhood. Parameter indicating whether to use a Harris detector Free parameter of the Harris detector. Output vector of detected corners. Finds lines in a binary image using standard Hough transform. The 8-bit, single-channel, binary source image. The image may be modified by the function Distance resolution of the accumulator in pixels Angle resolution of the accumulator in radians The accumulator threshold parameter. Only those lines are returned that get enough votes ( > threshold ) For the multi-scale Hough transform it is the divisor for the distance resolution rho. [By default this is 0] For the multi-scale Hough transform it is the divisor for the distance resolution theta. [By default this is 0] The output vector of lines. Each line is represented by a two-element vector (rho, theta) . 
rho is the distance from the coordinate origin (0,0) (top-left corner of the image) and theta is the line rotation angle in radians Finds lines segments in a binary image using probabilistic Hough transform. Distance resolution of the accumulator in pixels Angle resolution of the accumulator in radians The accumulator threshold parameter. Only those lines are returned that get enough votes ( > threshold ) The minimum line length. Line segments shorter than that will be rejected. [By default this is 0] The maximum allowed gap between points on the same line to link them. [By default this is 0] The output lines. Each line is represented by a 4-element vector (x1, y1, x2, y2) Finds circles in a grayscale image using a Hough transform. The 8-bit, single-channel, grayscale input image Currently, the only implemented method is HoughCirclesMethod.Gradient The inverse ratio of the accumulator resolution to the image resolution. Minimum distance between the centers of the detected circles. The first method-specific parameter. [By default this is 100] The second method-specific parameter. [By default this is 100] Minimum circle radius. [By default this is 0] Maximum circle radius. [By default this is 0] The output vector found circles. Each vector is encoded as 3-element floating-point vector (x, y, radius) Default borderValue for Dilate/Erode Dilates an image by using a specific structuring element. The source image The destination image. It will have the same size and the same type as src The structuring element used for dilation. If element=new Mat() , a 3x3 rectangular structuring element is used Position of the anchor within the element. The default value (-1, -1) means that the anchor is at the element center The number of times dilation is applied. [By default this is 1] The pixel extrapolation method. [By default this is BorderType.Constant] The border value in case of a constant border. The default value has a special meaning. [By default this is CvCpp.MorphologyDefaultBorderValue()] Erodes an image by using a specific structuring element. The source image The destination image. It will have the same size and the same type as src The structuring element used for dilation. If element=new Mat(), a 3x3 rectangular structuring element is used Position of the anchor within the element. The default value (-1, -1) means that the anchor is at the element center The number of times erosion is applied The pixel extrapolation method The border value in case of a constant border. The default value has a special meaning. [By default this is CvCpp.MorphologyDefaultBorderValue()] Performs advanced morphological transformations Source image Destination image. It will have the same size and the same type as src Type of morphological operation Structuring element Position of the anchor within the element. The default value (-1, -1) means that the anchor is at the element center Number of times erosion and dilation are applied. [By default this is 1] The pixel extrapolation method. [By default this is BorderType.Constant] The border value in case of a constant border. The default value has a special meaning. [By default this is CvCpp.MorphologyDefaultBorderValue()] Resizes an image. input image. output image; it has the size dsize (when it is non-zero) or the size computed from src.size(), fx, and fy; the type of dst is the same as of src. output image size; if it equals zero, it is computed as: dsize = Size(round(fx*src.cols), round(fy*src.rows)) Either dsize or both fx and fy must be non-zero. 
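Putting the edge and line detectors above together, here is a hedged sketch of Canny followed by the probabilistic Hough transform; the thresholds are illustrative, the LineSegmentPoint return type is an assumption about the wrapper, and the file names are placeholders.

```csharp
using System;
using OpenCvSharp;

class HoughExample
{
    static void Main()
    {
        using var src = Cv2.ImRead("input.png", ImreadModes.Grayscale);

        // Edge map first: hysteresis thresholds 50/150, default Sobel aperture.
        using var edges = new Mat();
        Cv2.Canny(src, edges, 50, 150);

        // Probabilistic Hough transform: 1 px / 1 degree resolution,
        // at least 80 votes, segments >= 30 px, gaps <= 10 px.
        LineSegmentPoint[] segments = Cv2.HoughLinesP(edges, 1, Math.PI / 180, 80, 30, 10);

        // Draw the detected segments on a colour copy of the input.
        using var vis = new Mat();
        Cv2.CvtColor(src, vis, ColorConversionCodes.GRAY2BGR);
        foreach (var s in segments)
            Cv2.Line(vis, s.P1, s.P2, Scalar.Red, 2);

        Cv2.ImWrite("lines.png", vis);
    }
}
```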
scale factor along the horizontal axis; when it equals 0, it is computed as: (double)dsize.width/src.cols scale factor along the vertical axis; when it equals 0, it is computed as: (double)dsize.height/src.rows interpolation method Applies an affine transformation to an image. input image. output image that has the size dsize and the same type as src. 2x3 transformation matrix. size of the output image. combination of interpolation methods and the optional flag WARP_INVERSE_MAP that means that M is the inverse transformation (dst -> src) . pixel extrapolation method; when borderMode=BORDER_TRANSPARENT, it means that the pixels in the destination image corresponding to the "outliers" in the source image are not modified by the function. value used in case of a constant border; by default, it is 0. Applies a perspective transformation to an image. input image. output image that has the size dsize and the same type as src. 3x3 transformation matrix. size of the output image. combination of interpolation methods (INTER_LINEAR or INTER_NEAREST) and the optional flag WARP_INVERSE_MAP, that sets M as the inverse transformation (dst -> src). pixel extrapolation method (BORDER_CONSTANT or BORDER_REPLICATE). value used in case of a constant border; by default, it equals 0. Applies a perspective transformation to an image. input image. output image that has the size dsize and the same type as src. 3x3 transformation matrix. size of the output image. combination of interpolation methods (INTER_LINEAR or INTER_NEAREST) and the optional flag WARP_INVERSE_MAP, that sets M as the inverse transformation (dst -> src). pixel extrapolation method (BORDER_CONSTANT or BORDER_REPLICATE). value used in case of a constant border; by default, it equals 0. Applies a generic geometrical transformation to an image. Source image. Destination image. It has the same size as map1 and the same type as src The first map of either (x,y) points or just x values having the type CV_16SC2, CV_32FC1, or CV_32FC2. The second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively. Interpolation method. The method INTER_AREA is not supported by this function. Pixel extrapolation method. When borderMode=BORDER_TRANSPARENT, it means that the pixels in the destination image that corresponds to the "outliers" in the source image are not modified by the function. Value used in case of a constant border. By default, it is 0. Inverts an affine transformation. Original affine transformation. Output reverse affine transformation. Retrieves a pixel rectangle from an image with sub-pixel accuracy. Source image. Size of the extracted patch. Floating point coordinates of the center of the extracted rectangle within the source image. The center must be inside the image. Extracted patch that has the size patchSize and the same number of channels as src . Depth of the extracted pixels. By default, they have the same depth as src. Remaps an image to log-polar space. Source image Destination image The transformation center; where the output precision is maximal Magnitude scale parameter. A combination of interpolation methods, see cv::InterpolationFlags Remaps an image to polar space. Source image Destination image The transformation center Inverse magnitude scale parameter A combination of interpolation methods, see cv::InterpolationFlags Adds an image to the accumulator. Input image as 1- or 3-channel, 8-bit or 32-bit floating point. 
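A short sketch of the geometric transforms above (resize plus an affine rotation built with getRotationMatrix2D); the rotation angle and file names are placeholders.

```csharp
using OpenCvSharp;

class WarpExample
{
    static void Main()
    {
        using var src = Cv2.ImRead("input.png", ImreadModes.Color);

        // Downscale by half in each direction (dsize left empty, fx/fy given).
        using var half = new Mat();
        Cv2.Resize(src, half, new Size(), 0.5, 0.5, InterpolationFlags.Area);

        // Rotate 30 degrees around the image center using a 2x3 affine matrix.
        var center = new Point2f(src.Cols / 2f, src.Rows / 2f);
        using Mat rotation = Cv2.GetRotationMatrix2D(center, 30, 1.0);

        using var rotated = new Mat();
        Cv2.WarpAffine(src, rotated, rotation, src.Size(),
                       InterpolationFlags.Linear, BorderTypes.Constant, Scalar.All(0));

        Cv2.ImWrite("rotated.png", rotated);
    }
}
```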
Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point. Optional operation mask. Adds the square of a source image to the accumulator. Input image as 1- or 3-channel, 8-bit or 32-bit floating point. Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point. Optional operation mask. Adds the per-element product of two input images to the accumulator. First input image, 1- or 3-channel, 8-bit or 32-bit floating point. Second input image of the same type and the same size as src1 Accumulator with the same number of channels as input images, 32-bit or 64-bit floating-point. Optional operation mask. Updates a running average. Input image as 1- or 3-channel, 8-bit or 32-bit floating point. Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point. Weight of the input image. Optional operation mask. Computes Hanning window coefficients in two dimensions. Destination array to place Hann coefficients in The window size specifications Created array type Applies a fixed-level threshold to each array element. input array (single-channel, 8-bit or 32-bit floating point). output array of the same size and type as src. threshold value. maximum value to use with the THRESH_BINARY and THRESH_BINARY_INV thresholding types. thresholding type (see the details below). the computed threshold value when type == OTSU Applies an adaptive threshold to an array. Source 8-bit single-channel image. Destination image of the same size and the same type as src . Non-zero value assigned to the pixels for which the condition is satisfied. See the details below. Adaptive thresholding algorithm to use, ADAPTIVE_THRESH_MEAN_C or ADAPTIVE_THRESH_GAUSSIAN_C . Thresholding type that must be either THRESH_BINARY or THRESH_BINARY_INV . Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on. Constant subtracted from the mean or weighted mean (see the details below). Normally, it is positive but may be zero or negative as well. Blurs an image and downsamples it. input image. output image; it has the specified size and the same type as src. size of the output image; by default, it is computed as Size((src.cols+1)/2, (src.rows+1)/2) Upsamples an image and then blurs it. input image. output image. It has the specified size and the same type as src. size of the output image; by default, it is computed as Size(src.cols*2, src.rows*2) computes the joint dense histogram for a set of images. computes the joint dense histogram for a set of images. computes the joint dense histogram for a set of images. compares two histograms stored in dense arrays The first compared histogram The second compared histogram of the same size as h1 The comparison method normalizes the grayscale image brightness and contrast by normalizing its histogram The source 8-bit single channel image The destination image; will have the same size and the same type as src Creates a predefined CLAHE object Performs a marker-based image segmentation using the watershed algorithm. Input 8-bit 3-channel image. Input/output 32-bit single-channel image (map) of markers. It should have the same size as image. Performs initial step of meanshift segmentation of an image. The source 8-bit, 3-channel image. The destination image of the same format and the same size as the source. The spatial window radius. The color window radius. Maximum level of the pyramid for the segmentation.
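To illustrate the thresholding entries above, here is a hedged sketch combining a global Otsu threshold with a locally adaptive one; the block size, offset and file name are illustrative only.

```csharp
using OpenCvSharp;

class ThresholdExample
{
    static void Main()
    {
        using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);

        // Global Otsu threshold; the chosen level is returned by the call.
        using var otsu = new Mat();
        double level = Cv2.Threshold(gray, otsu, 0, 255,
                                     ThresholdTypes.Binary | ThresholdTypes.Otsu);

        // Locally adaptive: Gaussian-weighted mean over an 11x11 neighborhood, offset 5.
        using var adaptive = new Mat();
        Cv2.AdaptiveThreshold(gray, adaptive, 255,
                              AdaptiveThresholdTypes.GaussianC,
                              ThresholdTypes.Binary, 11, 5);

        System.Console.WriteLine($"Otsu picked threshold {level}");
    }
}
```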
Termination criteria: when to stop meanshift iterations. Segments the image using GrabCut algorithm Input 8-bit 3-channel image. Input/output 8-bit single-channel mask. The mask is initialized by the function when mode is set to GC_INIT_WITH_RECT. Its elements may have Cv2.GC_BGD / Cv2.GC_FGD / Cv2.GC_PR_BGD / Cv2.GC_PR_FGD ROI containing a segmented object. The pixels outside of the ROI are marked as "obvious background". The parameter is only used when mode==GC_INIT_WITH_RECT. Temporary array for the background model. Do not modify it while you are processing the same image. Temporary arrays for the foreground model. Do not modify it while you are processing the same image. Number of iterations the algorithm should make before returning the result. Note that the result can be refined with further calls with mode==GC_INIT_WITH_MASK or mode==GC_EVAL . Operation mode that could be one of GrabCutFlag value. builds the discrete Voronoi diagram computes the distance transform map Fills a connected component with the given color. Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below. Starting point. New value of the repainted domain pixels. Fills a connected component with the given color. Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below. Starting point. New value of the repainted domain pixels. Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain. Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Operation flags. Lower bits contain a connectivity value, 4 (default) or 8, used within the function. Connectivity determines which neighbors of a pixel are considered. Fills a connected component with the given color. Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below. Starting point. New value of the repainted domain pixels. Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain. Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Operation flags. Lower bits contain a connectivity value, 4 (default) or 8, used within the function. Connectivity determines which neighbors of a pixel are considered. Fills a connected component with the given color. Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below. (For the second function only) Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller. 
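A minimal GrabCut sketch following the parameter descriptions above. The ROI is a placeholder, and the GrabCutFlag enum and Cv2.GC_PR_FGD constant are used as named in the text (newer OpenCvSharp builds may expose them under different names such as GrabCutModes):

```csharp
using OpenCvSharp;

using var img = Cv2.ImRead("input.png", ImreadModes.Color);

// Pre-allocated 8-bit mask; with InitWithRect the function fills it in.
using var mask = new Mat(img.Size(), MatType.CV_8UC1, Scalar.All(0));
using var bgdModel = new Mat();   // temporary models: do not modify between calls
using var fgdModel = new Mat();

var roi = new Rect(50, 50, 200, 200);   // assumed location of the object
Cv2.GrabCut(img, mask, roi, bgdModel, fgdModel, 5, GrabCutFlag.InitWithRect);

// Keep the pixels labelled "probably foreground" (Cv2.GC_PR_FGD).
using var fgMask = new Mat();
Cv2.InRange(mask, new Scalar(Cv2.GC_PR_FGD), new Scalar(Cv2.GC_PR_FGD), fgMask);

using var cutout = new Mat();
img.CopyTo(cutout, fgMask);
Cv2.ImWrite("grabcut.png", cutout);
```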
The function uses and updates the mask, so you take responsibility of initializing the mask content. Flood-filling cannot go across non-zero pixels in the mask. For example, an edge detector output can be used as a mask to stop filling at edges. It is possible to use the same mask in multiple calls to the function to make sure the filled area does not overlap. Starting point. New value of the repainted domain pixels. Fills a connected component with the given color. Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below. (For the second function only) Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller. The function uses and updates the mask, so you take responsibility of initializing the mask content. Flood-filling cannot go across non-zero pixels in the mask. For example, an edge detector output can be used as a mask to stop filling at edges. It is possible to use the same mask in multiple calls to the function to make sure the filled area does not overlap. Starting point. New value of the repainted domain pixels. Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain. Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Operation flags. Lower bits contain a connectivity value, 4 (default) or 8, used within the function. Connectivity determines which neighbors of a pixel are considered. Fills a connected component with the given color. Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below. (For the second function only) Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller. The function uses and updates the mask, so you take responsibility of initializing the mask content. Flood-filling cannot go across non-zero pixels in the mask. For example, an edge detector output can be used as a mask to stop filling at edges. It is possible to use the same mask in multiple calls to the function to make sure the filled area does not overlap. Starting point. New value of the repainted domain pixels. Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain. Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Operation flags. Lower bits contain a connectivity value, 4 (default) or 8, used within the function. Connectivity determines which neighbors of a pixel are considered. 
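A minimal sketch of the mask variant of FloodFill just described; note that the caller allocates and zeroes the (rows+2) x (cols+2) mask (the seed point, new color and tolerances are placeholders):

```csharp
using System;
using OpenCvSharp;

using var img = Cv2.ImRead("input.png", ImreadModes.Color);

// Mask must be single-channel, 2 pixels wider and taller than the image, and
// zero-initialized; filling cannot cross non-zero mask pixels.
using var mask = new Mat(img.Rows + 2, img.Cols + 2, MatType.CV_8UC1, Scalar.All(0));

var seed = new Point(10, 10);
var newVal = new Scalar(0, 0, 255);   // repaint the region red (BGR)
var diff = new Scalar(20, 20, 20);    // lower/upper color tolerance

Cv2.FloodFill(img, mask, seed, newVal, out Rect filledRect, diff, diff,
    FloodFillFlags.Link4);

Console.WriteLine($"repainted region bounding box: {filledRect}");
Cv2.ImWrite("filled.png", img);
```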
Converts image from one color space to another The source image, 8-bit unsigned, 16-bit unsigned or single-precision floating-point The destination image; will have the same size and the same depth as src The color space conversion code The number of channels in the destination image; if the parameter is 0, the number of the channels will be derived automatically from src and the code Calculates all of the moments up to the third order of a polygon or rasterized shape. A raster image (single-channel, 8-bit or floating-point 2D array) or an array ( 1xN or Nx1 ) of 2D points ( Point or Point2f ) If it is true, then all the non-zero image pixels are treated as 1’s Calculates all of the moments up to the third order of a polygon or rasterized shape. A raster image (8-bit) 2D array If it is true, then all the non-zero image pixels are treated as 1’s Calculates all of the moments up to the third order of a polygon or rasterized shape. A raster image (floating-point) 2D array If it is true, then all the non-zero image pixels are treated as 1’s Calculates all of the moments up to the third order of a polygon or rasterized shape. Array of 2D points If it is true, then all the non-zero image pixels are treated as 1’s Calculates all of the moments up to the third order of a polygon or rasterized shape. Array of 2D points If it is true, then all the non-zero image pixels are treated as 1’s Computes the proximity map for the raster template and the image where the template is searched for Image where the search is running; should be 8-bit or 32-bit floating-point Searched template; must be not greater than the source image and have the same data type A map of comparison results; will be single-channel 32-bit floating-point. If image is WxH and templ is wxh then result will be (W-w+1) x (H-h+1). Specifies the comparison method Mask of searched template. It must have the same datatype and size with templ. It is not set by default. computes the connected components labeled image of boolean image. image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. the image to be labeled destination labeled image 8 or 4 for 8-way or 4-way connectivity respectively The number of labels computes the connected components labeled image of boolean image. image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. the image to be labeled destination labeled image 8 or 4 for 8-way or 4-way connectivity respectively output image label type. Currently CV_32S and CV_16U are supported. The number of labels computes the connected components labeled image of boolean image. image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. 
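A minimal template-matching sketch along the lines of the description above: compute the (W-w+1) x (H-h+1) score map and take its best location (both file names are placeholders):

```csharp
using System;
using OpenCvSharp;

using var image = Cv2.ImRead("scene.png", ImreadModes.Grayscale);
using var templ = Cv2.ImRead("template.png", ImreadModes.Grayscale);
using var result = new Mat();   // 32-bit float score map

Cv2.MatchTemplate(image, templ, result, TemplateMatchModes.CCoeffNormed);

// For CCoeffNormed the best match is the global maximum of the result map.
Cv2.MinMaxLoc(result, out _, out double maxVal, out _, out Point maxLoc);
Console.WriteLine($"best score {maxVal:F3} at {maxLoc}");

// Draw the matched region on a colour copy of the scene.
using var vis = new Mat();
Cv2.CvtColor(image, vis, ColorConversionCodes.GRAY2BGR);
Cv2.Rectangle(vis, new Rect(maxLoc.X, maxLoc.Y, templ.Width, templ.Height), Scalar.Red, 2);
Cv2.ImWrite("match.png", vis);
```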
the image to be labeled destination labeled rectangular array 8 or 4 for 8-way or 4-way connectivity respectively The number of labels computes the connected components labeled image of boolean image. image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. the image to be labeled destination labeled image statistics output for each label, including the background label, see below for available statistics. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of cv::ConnectedComponentsTypes floating point centroid (x,y) output for each label, including the background label 8 or 4 for 8-way or 4-way connectivity respectively computes the connected components labeled image of boolean image. image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. the image to be labeled destination labeled image statistics output for each label, including the background label, see below for available statistics. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of cv::ConnectedComponentsTypes floating point centroid (x,y) output for each label, including the background label 8 or 4 for 8-way or 4-way connectivity respectively output image label type. Currently CV_32S and CV_16U are supported. computes the connected components labeled image of boolean image. image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. the image to be labeled 8 or 4 for 8-way or 4-way connectivity respectively Finds contours in a binary image. Source, an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies the image while extracting the contours. Detected contours. Each contour is stored as a vector of points. Optional output vector, containing information about the image topology. It has as many elements as the number of contours. For each i-th contour contours[i], the members of the elements hierarchy[i] are set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative. Contour retrieval mode Contour approximation method Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context. Finds contours in a binary image. Source, an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies the image while extracting the contours. Detected contours. Each contour is stored as a vector of points. Optional output vector, containing information about the image topology. 
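A minimal connected-components sketch on a binary image, using the statistics output described above. The OpenCvSharp overload and enum names used here (ConnectedComponentsWithStats, PixelConnectivity) are assumptions about the wrapper; the stats column index 4 corresponds to CC_STAT_AREA:

```csharp
using System;
using OpenCvSharp;

using var bin = Cv2.ImRead("binary.png", ImreadModes.Grayscale);
using var labels = new Mat();      // CV_32S label image
using var stats = new Mat();       // one row per label: left, top, width, height, area
using var centroids = new Mat();   // one row per label: x, y (double)

int n = Cv2.ConnectedComponentsWithStats(bin, labels, stats, centroids,
    PixelConnectivity.Connectivity8);

// Label 0 is the background; print the area of every foreground blob.
for (int label = 1; label < n; label++)
{
    int area = stats.At<int>(label, 4);   // column 4 = CC_STAT_AREA (pixels)
    Console.WriteLine($"component {label}: area = {area}");
}
```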
It has as many elements as the number of contours. For each i-th contour contours[i], the members of the elements hierarchy[i] are set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative. Contour retrieval mode Contour approximation method Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context. Finds contours in a binary image. Source, an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies the image while extracting the contours. Contour retrieval mode Contour approximation method Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context. Detected contours. Each contour is stored as a vector of points. Finds contours in a binary image. Source, an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies the image while extracting the contours. Contour retrieval mode Contour approximation method Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context. Detected contours. Each contour is stored as a vector of points. Approximates contour or a curve using Douglas-Peucker algorithm The polygon or curve to approximate. Must be 1 x N or N x 1 matrix of type CV_32SC2 or CV_32FC2. The result of the approximation; The type should match the type of the input curve Specifies the approximation accuracy. This is the maximum distance between the original curve and its approximation. The result of the approximation; The type should match the type of the input curve Approximates contour or a curve using Douglas-Peucker algorithm The polygon or curve to approximate. Specifies the approximation accuracy. This is the maximum distance between the original curve and its approximation. The result of the approximation; The type should match the type of the input curve The result of the approximation; The type should match the type of the input curve Approximates contour or a curve using Douglas-Peucker algorithm The polygon or curve to approximate. Specifies the approximation accuracy. This is the maximum distance between the original curve and its approximation. If true, the approximated curve is closed (i.e. its first and last vertices are connected), otherwise it’s not The result of the approximation; The type should match the type of the input curve Calculates a contour perimeter or a curve length. The input vector of 2D points, represented by CV_32SC2 or CV_32FC2 matrix. Indicates, whether the curve is closed or not. Calculates a contour perimeter or a curve length. The input vector of 2D points. Indicates, whether the curve is closed or not. Calculates a contour perimeter or a curve length. The input vector of 2D points. Indicates, whether the curve is closed or not. Calculates the up-right bounding rectangle of a point set. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix. 
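A minimal contour-extraction sketch: binarize, find external contours, then simplify each one with Douglas-Peucker as described above (the file name and the 2% epsilon are placeholders):

```csharp
using System;
using OpenCvSharp;

using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
using var bin = new Mat();
Cv2.Threshold(gray, bin, 128, 255, ThresholdTypes.Binary);

// Note: FindContours may modify the input image, so work on a copy if needed.
Cv2.FindContours(bin, out Point[][] contours, out HierarchyIndex[] hierarchy,
    RetrievalModes.External, ContourApproximationModes.ApproxSimple);

foreach (var contour in contours)
{
    // epsilon = 2% of the perimeter is a common starting point.
    double epsilon = 0.02 * Cv2.ArcLength(contour, true);
    Point[] approx = Cv2.ApproxPolyDP(contour, epsilon, true);
    Console.WriteLine($"{contour.Length} points -> {approx.Length} vertices");
}
```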
Minimal up-right bounding rectangle for the specified point set. Calculates the up-right bounding rectangle of a point set. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix. Minimal up-right bounding rectangle for the specified point set. Calculates the up-right bounding rectangle of a point set. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix. Minimal up-right bounding rectangle for the specified point set. Calculates the contour area The contour vertices, represented by CV_32SC2 or CV_32FC2 matrix Calculates the contour area The contour vertices, represented by CV_32SC2 or CV_32FC2 matrix Calculates the contour area The contour vertices, represented by CV_32SC2 or CV_32FC2 matrix Finds the minimum area rotated rectangle enclosing a 2D point set. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix. Finds the minimum area rotated rectangle enclosing a 2D point set. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix. Finds the minimum area rotated rectangle enclosing a 2D point set. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix. Finds the minimum area circle enclosing a 2D point set. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix. The output center of the circle The output radius of the circle Finds the minimum area circle enclosing a 2D point set. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix. The output center of the circle The output radius of the circle Finds the minimum area circle enclosing a 2D point set. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix. The output center of the circle The output radius of the circle matches two contours using one of the available algorithms matches two contours using one of the available algorithms Computes convex hull for a set of 2D points. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix The output convex hull. It is either a vector of points that form the hull (must have the same type as the input points), or a vector of 0-based point indices of the hull points in the original array (since the set of convex hull points is a subset of the original point set). If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards. Computes convex hull for a set of 2D points. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards. The output convex hull. It is a vector of points that form the hull (must have the same type as the input points). Computes convex hull for a set of 2D points. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards. The output convex hull. It is a vector of points that form the hull (must have the same type as the input points). Computes convex hull for a set of 2D points. 
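A minimal sketch of the shape descriptors listed above, applied to a single hard-coded point set for illustration:

```csharp
using System;
using OpenCvSharp;

Point[] contour =
{
    new Point(10, 10), new Point(120, 20), new Point(130, 110),
    new Point(60, 140), new Point(15, 90),
};

double area = Cv2.ContourArea(contour);
double perimeter = Cv2.ArcLength(contour, true);
Rect box = Cv2.BoundingRect(contour);              // axis-aligned bounding box
RotatedRect rbox = Cv2.MinAreaRect(contour);       // rotated minimum-area box
Cv2.MinEnclosingCircle(contour, out Point2f center, out float radius);
Point[] hull = Cv2.ConvexHull(contour);            // hull returned as points

Console.WriteLine(
    $"area={area}, perimeter={perimeter:F1}, box={box}, " +
    $"rotated box angle={rbox.Angle:F1}, circle r={radius:F1}, hull={hull.Length} pts");
```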
The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards. The output convex hull. It is a vector of 0-based point indices of the hull points in the original array (since the set of convex hull points is a subset of the original point set). Computes convex hull for a set of 2D points. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards. The output convex hull. It is a vector of 0-based point indices of the hull points in the original array (since the set of convex hull points is a subset of the original point set). Computes the contour convexity defects Input contour. Convex hull obtained using convexHull() that should contain indices of the contour points that make the hull. The output vector of convexity defects. Each convexity defect is represented as 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, to get the floating-point value of the depth will be fixpt_depth/256.0. Computes the contour convexity defects Input contour. Convex hull obtained using convexHull() that should contain indices of the contour points that make the hull. The output vector of convexity defects. Each convexity defect is represented as 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, to get the floating-point value of the depth will be fixpt_depth/256.0. Computes the contour convexity defects Input contour. Convex hull obtained using convexHull() that should contain indices of the contour points that make the hull. The output vector of convexity defects. Each convexity defect is represented as 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, to get the floating-point value of the depth will be fixpt_depth/256.0. returns true if the contour is convex. Does not support contours with self-intersection Input vector of 2D points returns true if the contour is convex. Does not support contours with self-intersection Input vector of 2D points returns true if the contour is convex. 
Does not support contours with self-intersection Input vector of 2D points finds intersection of two convex polygons finds intersection of two convex polygons finds intersection of two convex polygons Fits ellipse to the set of 2D points. Input 2D point set Fits ellipse to the set of 2D points. Input 2D point set Fits ellipse to the set of 2D points. Input 2D point set Fits line to the set of 2D points using M-estimator algorithm Input vector of 2D or 3D points Output line parameters. In case of 2D fitting, it should be a vector of 4 elements (like Vec4f) - (vx, vy, x0, y0), where (vx, vy) is a normalized vector collinear to the line and (x0, y0) is a point on the line. In case of 3D fitting, it should be a vector of 6 elements (like Vec6f) - (vx, vy, vz, x0, y0, z0), where (vx, vy, vz) is a normalized vector collinear to the line and (x0, y0, z0) is a point on the line. Distance used by the M-estimator Numerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen. Sufficient accuracy for the radius (distance between the coordinate origin and the line). Sufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps. Fits line to the set of 2D points using M-estimator algorithm Input vector of 2D or 3D points Distance used by the M-estimator Numerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen. Sufficient accuracy for the radius (distance between the coordinate origin and the line). Sufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps. Output line parameters. Fits line to the set of 2D points using M-estimator algorithm Input vector of 2D or 3D points Distance used by the M-estimator Numerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen. Sufficient accuracy for the radius (distance between the coordinate origin and the line). Sufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps. Output line parameters. Fits line to the set of 3D points using M-estimator algorithm Input vector of 2D or 3D points Distance used by the M-estimator Numerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen. Sufficient accuracy for the radius (distance between the coordinate origin and the line). Sufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps. Output line parameters. Fits line to the set of 3D points using M-estimator algorithm Input vector of 2D or 3D points Distance used by the M-estimator Numerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen. Sufficient accuracy for the radius (distance between the coordinate origin and the line). Sufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps. Output line parameters. Checks if the point is inside the contour. Optionally computes the signed distance from the point to the contour boundary Checks if the point is inside the contour. Optionally computes the signed distance from the point to the contour boundary Checks if the point is inside the contour. Optionally computes the signed distance from the point to the contour boundary. Input contour. Point tested against the contour. If true, the function estimates the signed distance from the point to the nearest contour edge. Otherwise, the function only checks if the point is inside a contour or not. Positive (inside), negative (outside), or zero (on an edge) value.
Finds out if there is any intersection between two rotated rectangles. If there is, then the vertices of the intersecting region are returned as well. Below are some examples of intersection configurations. The hatched pattern indicates the intersecting region and the red vertices are returned by the function. First rectangle Second rectangle The output array of the vertices of the intersecting region. It returns at most 8 vertices. Stored as std::vector<cv::Point2f> or cv::Mat as Mx1 of type CV_32FC2. Finds out if there is any intersection between two rotated rectangles. If there is, then the vertices of the intersecting region are returned as well. Below are some examples of intersection configurations. The hatched pattern indicates the intersecting region and the red vertices are returned by the function. First rectangle Second rectangle The output array of the vertices of the intersecting region. It returns at most 8 vertices. Applies a GNU Octave/MATLAB equivalent colormap on a given image. Draws a line segment connecting two points The image. First point's x-coordinate of the line segment. First point's y-coordinate of the line segment. Second point's x-coordinate of the line segment. Second point's y-coordinate of the line segment. Line color. Line thickness. [By default this is 1] Type of the line. [By default this is LineType.Link8] Number of fractional bits in the point coordinates. [By default this is 0] Draws a line segment connecting two points The image. First point of the line segment. Second point of the line segment. Line color. Line thickness. [By default this is 1] Type of the line. [By default this is LineType.Link8] Number of fractional bits in the point coordinates. [By default this is 0] Draws an arrow segment pointing from the first point to the second one. The function arrowedLine draws an arrow between pt1 and pt2 points in the image. See also cv::line. Image. The point the arrow starts from. The point the arrow points to. Line color. Line thickness. Type of the line, see cv::LineTypes Number of fractional bits in the point coordinates. The length of the arrow tip in relation to the arrow length Draws simple, thick or filled rectangle Image. One of the rectangle vertices. Opposite rectangle vertex. Line color (RGB) or brightness (grayscale image). Thickness of lines that make up the rectangle. Negative values make the function draw a filled rectangle. [By default this is 1] Type of the line, see cvLine description. [By default this is LineType.Link8] Number of fractional bits in the point coordinates. [By default this is 0] Draws simple, thick or filled rectangle Image. Rectangle. Line color (RGB) or brightness (grayscale image). Thickness of lines that make up the rectangle. Negative values make the function draw a filled rectangle. [By default this is 1] Type of the line, see cvLine description. [By default this is LineType.Link8] Number of fractional bits in the point coordinates. [By default this is 0] Draws simple, thick or filled rectangle Image. One of the rectangle vertices. Opposite rectangle vertex. Line color (RGB) or brightness (grayscale image). Thickness of lines that make up the rectangle. Negative values make the function draw a filled rectangle. [By default this is 1] Type of the line, see cvLine description. [By default this is LineType.Link8] Number of fractional bits in the point coordinates. [By default this is 0] Draws simple, thick or filled rectangle Image. Rectangle. Line color (RGB) or brightness (grayscale image).
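A minimal sketch of the drawing primitives described above, rendered onto a blank canvas (sizes, coordinates and colors are placeholders):

```csharp
using OpenCvSharp;

using var canvas = new Mat(300, 400, MatType.CV_8UC3, Scalar.All(255));

Cv2.Line(canvas, new Point(20, 20), new Point(380, 20), Scalar.Black, 2);
Cv2.ArrowedLine(canvas, new Point(20, 60), new Point(200, 60), Scalar.Blue, 2);
Cv2.Rectangle(canvas, new Rect(20, 100, 120, 80), Scalar.Red, 2);
Cv2.Rectangle(canvas, new Rect(160, 100, 120, 80), Scalar.Green, -1);   // negative thickness = filled
Cv2.Circle(canvas, new Point(80, 240), 40, Scalar.Black, 3, LineTypes.AntiAlias);
Cv2.Ellipse(canvas, new Point(250, 240), new Size(80, 40), 30, 0, 360, Scalar.Yellow, 2);
Cv2.PutText(canvas, "OpenCvSharp", new Point(240, 40),
    HersheyFonts.HersheySimplex, 0.7, Scalar.Black);

Cv2.ImWrite("drawing.png", canvas);
```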
Thickness of lines that make up the rectangle. Negative values make the function to draw a filled rectangle. [By default this is 1] Type of the line, see cvLine description. [By default this is LineType.Link8] Number of fractional bits in the point coordinates. [By default this is 0] Draws a circle Image where the circle is drawn. X-coordinate of the center of the circle. Y-coordinate of the center of the circle. Radius of the circle. Circle color. Thickness of the circle outline if positive, otherwise indicates that a filled circle has to be drawn. [By default this is 1] Type of the circle boundary. [By default this is LineType.Link8] Number of fractional bits in the center coordinates and radius value. [By default this is 0] Draws a circle Image where the circle is drawn. Center of the circle. Radius of the circle. Circle color. Thickness of the circle outline if positive, otherwise indicates that a filled circle has to be drawn. [By default this is 1] Type of the circle boundary. [By default this is LineType.Link8] Number of fractional bits in the center coordinates and radius value. [By default this is 0] Draws simple or thick elliptic arc or fills ellipse sector Image. Center of the ellipse. Length of the ellipse axes. Rotation angle. Starting angle of the elliptic arc. Ending angle of the elliptic arc. Ellipse color. Thickness of the ellipse arc. [By default this is 1] Type of the ellipse boundary. [By default this is LineType.Link8] Number of fractional bits in the center coordinates and axes' values. [By default this is 0] Draws simple or thick elliptic arc or fills ellipse sector Image. The enclosing box of the ellipse drawn Ellipse color. Thickness of the ellipse boundary. [By default this is 1] Type of the ellipse boundary. [By default this is LineType.Link8] Fills a convex polygon. Image The polygon vertices Polygon color Type of the polygon boundaries The number of fractional bits in the vertex coordinates Fills a convex polygon. Image The polygon vertices Polygon color Type of the polygon boundaries The number of fractional bits in the vertex coordinates Fills the area bounded by one or more polygons Image Array of polygons, each represented as an array of points Polygon color Type of the polygon boundaries The number of fractional bits in the vertex coordinates Fills the area bounded by one or more polygons Image Array of polygons, each represented as an array of points Polygon color Type of the polygon boundaries The number of fractional bits in the vertex coordinates draws one or more polygonal curves draws one or more polygonal curves draws contours in the image Destination image. All the input contours. Each contour is stored as a point vector. Parameter indicating a contour to draw. If it is negative, all the contours are drawn. Color of the contours. Thickness of lines the contours are drawn with. If it is negative (for example, thickness=CV_FILLED ), the contour interiors are drawn. Line connectivity. Optional information about hierarchy. It is only needed if you want to draw only some of the contours Maximal level for drawn contours. If it is 0, only the specified contour is drawn. If it is 1, the function draws the contour(s) and all the nested contours. If it is 2, the function draws the contours, all the nested contours, all the nested-to-nested contours, and so on. This parameter is only taken into account when there is hierarchy available. Optional contour shift parameter. 
Shift all the drawn contours by the specified offset = (dx, dy) draws contours in the image Destination image. All the input contours. Each contour is stored as a point vector. Parameter indicating a contour to draw. If it is negative, all the contours are drawn. Color of the contours. Thickness of lines the contours are drawn with. If it is negative (for example, thickness=CV_FILLED ), the contour interiors are drawn. Line connectivity. Optional information about hierarchy. It is only needed if you want to draw only some of the contours Maximal level for drawn contours. If it is 0, only the specified contour is drawn. If it is 1, the function draws the contour(s) and all the nested contours. If it is 2, the function draws the contours, all the nested contours, all the nested-to-nested contours, and so on. This parameter is only taken into account when there is hierarchy available. Optional contour shift parameter. Shift all the drawn contours by the specified offset = (dx, dy) Clips the line against the image rectangle The image size The first line point The second line point Clips the line against the image rectangle The image rectangle The first line point The second line point Approximates an elliptic arc with a polyline. The function ellipse2Poly computes the vertices of a polyline that approximates the specified elliptic arc. It is used by cv::ellipse. Center of the arc. Half of the size of the ellipse main axes. See the ellipse for details. Rotation angle of the ellipse in degrees. See the ellipse for details. Starting angle of the elliptic arc in degrees. Ending angle of the elliptic arc in degrees. Angle between the subsequent polyline vertices. It defines the approximation accuracy. Output vector of polyline vertices. renders text string in the image returns bounding box of the text string Groups the object candidate rectangles. Input/output vector of rectangles. Output vector includes retained and grouped rectangles. Minimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it. Groups the object candidate rectangles. Input/output vector of rectangles. Output vector includes retained and grouped rectangles. Minimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it. Relative difference between sides of the rectangles to merge them into a group. Groups the object candidate rectangles. Groups the object candidate rectangles. Sets the specified values to the elements of the IList. restores the damaged image areas using one of the available inpainting algorithms Performs image denoising using the Non-local Means Denoising algorithm with several computational optimizations. Noise is expected to be Gaussian white noise. Input 8-bit 1-channel, 2-channel or 3-channel image. Output image with the same size and type as src. Parameter regulating filter strength. A big h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise Size in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels Size in pixels of the window that is used to compute weighted average for given pixel. Should be odd. Affects performance linearly: greater searchWindowsSize - greater denoising time. Recommended value 21 pixels Modification of fastNlMeansDenoising function for colored images Input 8-bit 3-channel image. Output image with the same size and type as src. Parameter regulating filter strength for luminance component.
A bigger h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise The same as h but for color components. For most images a value of 10 will be enough to remove colored noise and not distort colors Size in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels Size in pixels of the window that is used to compute weighted average for given pixel. Should be odd. Affects performance linearly: greater searchWindowsSize - greater denoising time. Recommended value 21 pixels Modification of fastNlMeansDenoising function for image sequences where consecutive images have been captured in a small period of time, for example video. This version of the function is for grayscale images or for manual manipulation with colorspaces. Input 8-bit 1-channel, 2-channel or 3-channel image sequence. All images should have the same type and size. Output image with the same size and type as srcImgs images. Index (in the srcImgs sequence) of the target image to denoise Number of surrounding images to use for target image denoising. Should be odd. Images from imgToDenoiseIndex - temporalWindowSize / 2 to imgToDenoiseIndex + temporalWindowSize / 2 from srcImgs will be used to denoise srcImgs[imgToDenoiseIndex] image. Parameter regulating filter strength for luminance component. A bigger h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise Size in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels Size in pixels of the window that is used to compute weighted average for given pixel. Should be odd. Affects performance linearly: greater searchWindowsSize - greater denoising time. Recommended value 21 pixels Modification of fastNlMeansDenoising function for image sequences where consecutive images have been captured in a small period of time, for example video. This version of the function is for grayscale images or for manual manipulation with colorspaces. Input 8-bit 1-channel, 2-channel or 3-channel image sequence. All images should have the same type and size. Output image with the same size and type as srcImgs images. Index (in the srcImgs sequence) of the target image to denoise Number of surrounding images to use for target image denoising. Should be odd. Images from imgToDenoiseIndex - temporalWindowSize / 2 to imgToDenoiseIndex + temporalWindowSize / 2 from srcImgs will be used to denoise srcImgs[imgToDenoiseIndex] image. Parameter regulating filter strength for luminance component. A bigger h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise Size in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels Size in pixels of the window that is used to compute weighted average for given pixel. Should be odd. Affects performance linearly: greater searchWindowsSize - greater denoising time. Recommended value 21 pixels Modification of fastNlMeansDenoisingMulti function for colored image sequences Input 8-bit 3-channel image sequence. All images should have the same type and size. Output image with the same size and type as srcImgs images. Index (in the srcImgs sequence) of the target image to denoise Number of surrounding images to use for target image denoising. Should be odd.
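A minimal denoising sketch using the colored Non-local Means variant with the parameter values recommended above (h = 10, hColor = 10, templateWindowSize = 7, searchWindowSize = 21; the file names are placeholders):

```csharp
using OpenCvSharp;

using var noisy = Cv2.ImRead("noisy.png", ImreadModes.Color);
using var denoised = new Mat();

// Arguments: src, dst, h (luminance strength), hColor, templateWindowSize, searchWindowSize.
Cv2.FastNlMeansDenoisingColored(noisy, denoised, 10, 10, 7, 21);

Cv2.ImWrite("denoised.png", denoised);
```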
Images from imgToDenoiseIndex - temporalWindowSize / 2 to imgToDenoiseIndex + temporalWindowSize / 2 from srcImgs will be used to denoise srcImgs[imgToDenoiseIndex] image. Parameter regulating filter strength for luminance component. A bigger h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise. The same as h but for color components. Size in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels Size in pixels of the window that is used to compute weighted average for given pixel. Should be odd. Affects performance linearly: greater searchWindowsSize - greater denoising time. Recommended value 21 pixels Modification of fastNlMeansDenoisingMulti function for colored image sequences Input 8-bit 3-channel image sequence. All images should have the same type and size. Output image with the same size and type as srcImgs images. Index (in the srcImgs sequence) of the target image to denoise Number of surrounding images to use for target image denoising. Should be odd. Images from imgToDenoiseIndex - temporalWindowSize / 2 to imgToDenoiseIndex + temporalWindowSize / 2 from srcImgs will be used to denoise srcImgs[imgToDenoiseIndex] image. Parameter regulating filter strength for luminance component. A bigger h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise. The same as h but for color components. Size in pixels of the template patch that is used to compute weights. Should be odd. Recommended value 7 pixels Size in pixels of the window that is used to compute weighted average for given pixel. Should be odd. Affects performance linearly: greater searchWindowsSize - greater denoising time. Recommended value 21 pixels The primal-dual algorithm is an algorithm for solving special types of variational problems (that is, finding a function to minimize some functional). Since image denoising, in particular, may be seen as a variational problem, the primal-dual algorithm can be used to perform denoising, and this is exactly what is implemented. This array should contain one or more noised versions of the image that is to be restored. Here the denoised image will be stored. There is no need to do pre-allocation of storage space, as it will be automatically allocated, if necessary. Corresponds to lambda in the formulas above. As it is enlarged, the smooth (blurred) images are treated more favorably than detailed (but maybe more noised) ones. Roughly speaking, as it becomes smaller, the result will be more blurred, but more severe outliers will be removed. Number of iterations that the algorithm will run. Of course, the more iterations the better, but it is hard to quantitatively refine this statement, so just use the default and increase it if the results are poor. Transforms a color image to a grayscale image. It is a basic tool in digital printing, stylized black-and-white photograph rendering, and in many single channel image processing applications @cite CL12. Input 8-bit 3-channel image. Output 8-bit 1-channel image. Output 8-bit 3-channel image. Image editing tasks concern either global changes (color/intensity corrections, filters, deformations) or local changes confined to a selection. Here we are interested in achieving local changes, ones that are restricted to a region manually selected (ROI), in a seamless and effortless manner.
The extent of the changes ranges from slight distortions to complete replacement by novel content @cite PM03 . Input 8-bit 3-channel image. Input 8-bit 3-channel image. Input 8-bit 1 or 3-channel image. Point in dst image where object is placed. Output image with the same size and type as dst. Cloning method Given an original color image, two differently colored versions of this image can be mixed seamlessly. Multiplication factor is between 0.5 to 2.5. Input 8-bit 3-channel image. Input 8-bit 1 or 3-channel image. Output image with the same size and type as src. R-channel multiply factor. G-channel multiply factor. B-channel multiply factor. Applying an appropriate non-linear transformation to the gradient field inside the selection and then integrating back with a Poisson solver, modifies locally the apparent illumination of an image. Input 8-bit 3-channel image. Input 8-bit 1 or 3-channel image. Output image with the same size and type as src. Value ranges between 0-2. Value ranges between 0-2. This is useful to highlight under-exposed foreground objects or to reduce specular reflections. By retaining only the gradients at edge locations, before integrating with the Poisson solver, one washes out the texture of the selected region, giving its contents a flat aspect. Here Canny Edge Detector is used. Input 8-bit 3-channel image. Input 8-bit 1 or 3-channel image. Output image with the same size and type as src. Range from 0 to 100. Value > 100. The size of the Sobel kernel to be used. Filtering is the fundamental operation in image and video processing. Edge-preserving smoothing filters are used in many different applications @cite EM11 . Input 8-bit 3-channel image. Output 8-bit 3-channel image. Edge preserving filters Range between 0 to 200. Range between 0 to 1. This filter enhances the details of a particular image. Input 8-bit 3-channel image. Output image with the same size and type as src. Range between 0 to 200. Range between 0 to 1. Pencil-like non-photorealistic line drawing Input 8-bit 3-channel image. Output 8-bit 1-channel image. Output image with the same size and type as src. Range between 0 to 200. Range between 0 to 1. Range between 0 to 0.1. Stylization aims to produce digital imagery with a wide variety of effects not focused on photorealism. Edge-aware filters are ideal for stylization, as they can abstract regions of low contrast while preserving, or enhancing, high-contrast features. Input 8-bit 3-channel image. Output image with the same size and type as src. Range between 0 to 200. Range between 0 to 1. Create Bilateral TV-L1 Super Resolution. Create Bilateral TV-L1 Super Resolution. Create Bilateral TV-L1 Super Resolution. Finds an object center, size, and orientation. Back projection of the object histogram. Initial search window. Stop criteria for the underlying MeanShift() . Finds an object on a back projection image. Back projection of the object histogram. Initial search window. Stop criteria for the iterative search algorithm. Number of iterations CAMSHIFT took to converge. Constructs a pyramid which can be used as input for calcOpticalFlowPyrLK 8-bit input image. output pyramid. window size of optical flow algorithm. Must be not less than winSize argument of calcOpticalFlowPyrLK(). It is needed to calculate required padding for pyramid levels. 0-based maximal pyramid level number. set to precompute gradients for the every pyramid level. If pyramid is constructed without the gradients then calcOpticalFlowPyrLK() will calculate them internally. 
the border mode for pyramid layers. the border mode for gradients. put ROI of input image into the pyramid if possible. You can pass false to force data copying. number of levels in constructed pyramid. Can be less than maxLevel. Constructs a pyramid which can be used as input for calcOpticalFlowPyrLK 8-bit input image. output pyramid. window size of optical flow algorithm. Must be not less than winSize argument of calcOpticalFlowPyrLK(). It is needed to calculate required padding for pyramid levels. 0-based maximal pyramid level number. set to precompute gradients for every pyramid level. If pyramid is constructed without the gradients then calcOpticalFlowPyrLK() will calculate them internally. the border mode for pyramid layers. the border mode for gradients. put ROI of input image into the pyramid if possible. You can pass false to force data copying. number of levels in constructed pyramid. Can be less than maxLevel. computes sparse optical flow using multi-scale Lucas-Kanade algorithm computes sparse optical flow using multi-scale Lucas-Kanade algorithm Computes a dense optical flow using Gunnar Farneback's algorithm. first 8-bit single-channel input image. second input image of the same size and the same type as prev. computed flow image that has the same size as prev and type CV_32FC2. parameter, specifying the image scale (<1) to build pyramids for each image; pyrScale=0.5 means a classical pyramid, where each next layer is twice smaller than the previous one. number of pyramid layers including the initial image; levels=1 means that no extra layers are created and only the original images are used. averaging window size; larger values increase the algorithm robustness to image noise and give more chances for fast motion detection, but yield more blurred motion field. number of iterations the algorithm does at each pyramid level. size of the pixel neighborhood used to find polynomial expansion in each pixel; larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and more blurred motion field, typically polyN = 5 or 7. standard deviation of the Gaussian that is used to smooth derivatives used as a basis for the polynomial expansion; for polyN=5, you can set polySigma=1.1, for polyN=7, a good value would be polySigma=1.5. operation flags that can be a combination of OPTFLOW_USE_INITIAL_FLOW and/or OPTFLOW_FARNEBACK_GAUSSIAN Estimates the best-fit Euclidean, similarity, affine or perspective transformation that maps one 2D point set to another or one image to another. First input 2D point set stored in std::vector or Mat, or an image stored in Mat. Second input 2D point set of the same size and the same type as A, or another image. If true, the function finds an optimal affine transformation with no additional restrictions (6 degrees of freedom). Otherwise, the class of transformations to choose from is limited to combinations of translation, rotation, and uniform scaling (5 degrees of freedom). A class which has a pointer to an OpenCV structure Data pointer Default constructor Native pointer of OpenCV structure DisposableObject + ICvPtrHolder Data pointer Default constructor releases unmanaged resources Native pointer of OpenCV structure Represents a class which manages its own memory. Gets or sets a handle which allocates using cvSetData. Gets a value indicating whether this instance has been disposed. Gets or sets a value indicating whether you permit disposing this instance.
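A minimal dense optical-flow sketch (Farneback) between two consecutive frames, using the parameter values discussed above; the frame file names are placeholders:

```csharp
using System;
using OpenCvSharp;

using var prev = Cv2.ImRead("frame0.png", ImreadModes.Grayscale);
using var next = Cv2.ImRead("frame1.png", ImreadModes.Grayscale);
using var flow = new Mat();   // CV_32FC2: per-pixel (dx, dy)

// pyrScale=0.5, levels=3, winsize=15, iterations=3, polyN=5, polySigma=1.1, no flags.
Cv2.CalcOpticalFlowFarneback(prev, next, flow,
    0.5, 3, 15, 3, 5, 1.1, (OpticalFlowFlags)0);

// Flow vector at the image centre.
var v = flow.At<Vec2f>(prev.Rows / 2, prev.Cols / 2);
Console.WriteLine($"flow at centre: dx={v.Item0:F2}, dy={v.Item1:F2}");
```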
Gets or sets a memory address allocated by AllocMemory. Gets or sets the byte length of the allocated memory Default constructor Constructor true if you permit disposing this class by GC Releases the resources Releases the resources If disposing equals true, the method has been called directly or indirectly by a user's code. Managed and unmanaged resources can be disposed. If false, the method has been called by the runtime from inside the finalizer and you should not reference other objects. Only unmanaged resources can be disposed. Destructor Releases managed resources Releases unmanaged resources Pins the object to be allocated by cvSetData. Allocates the specified size of memory. Notifies the allocated size of memory. If this object is disposed, then ObjectDisposedException is thrown. Represents an OpenCV-based class which has a native pointer. Unmanaged OpenCV data pointer The default exception to be thrown by OpenCV The numeric code for error status The source file name where error is encountered A description of the error The source file name where error is encountered The line number in the source where error is encountered Constructor The numeric code for error status The source file name where error is encountered A description of the error The source file name where error is encountered The line number in the source where error is encountered The exception that is thrown by OpenCvSharp. Template class for smart reference-counting pointers Constructor Returns Ptr<T>.get() pointer aruco module Basic marker detection input image indicates the type of markers that will be searched vector of detected marker corners. For each marker, its four corners are provided. For N detected markers, the dimensions of this array are Nx4. The order of the corners is clockwise. vector of identifiers of the detected markers. The identifier is of type int. For N detected markers, the size of ids is also N. The identifiers have the same order as the markers in the imgPoints array. marker detection parameters contains the imgPoints of those squares whose inner code does not have a correct codification. Useful for debugging purposes. Pose estimation for single markers corners vector of already detected markers corners. For each marker, its four corners are provided (e.g. std::vector<std::vector<cv::Point2f>>). For N detected markers, the dimensions of this array should be Nx4. The order of the corners should be clockwise. the length of the markers' side. The returned translation vectors will be in the same unit. Normally, the unit is meters. input 3x3 floating-point camera matrix A = [fx 0 cx; 0 fy cy; 0 0 1] vector of distortion coefficients (k1, k2, p1, p2[, k3[, k4, k5, k6], [s1, s2, s3, s4]]) of 4, 5, 8 or 12 elements array of output rotation vectors (see Rodrigues) (e.g. std::vector<cv::Vec3d>). Each element in rvecs corresponds to the specific marker in imgPoints. array of output translation vectors (e.g. std::vector<cv::Vec3d>). Each element in tvecs corresponds to the specific marker in imgPoints. array of object points of all the marker corners Draw detected markers in image input/output image. It must have 1 or 3 channels. The number of channels is not altered. positions of marker corners on input image. For N detected markers, the dimensions of this array should be Nx4. The order of the corners should be clockwise. vector of identifiers for markers in markersCorners. Optional, if not provided, ids are not painted.
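A minimal ArUco detection sketch following the parameter descriptions above. The OpenCvSharp.Aruco wrapper types used here (CvAruco, DetectorParameters, PredefinedDictionaryName) and the input file name are assumptions:

```csharp
using System;
using OpenCvSharp;
using OpenCvSharp.Aruco;

using var image = Cv2.ImRead("markers.png", ImreadModes.Color);
var dictionary = CvAruco.GetPredefinedDictionary(PredefinedDictionaryName.Dict4X4_50);
var parameters = DetectorParameters.Create();

CvAruco.DetectMarkers(image, dictionary, out Point2f[][] corners, out int[] ids,
    parameters, out Point2f[][] rejected);

Console.WriteLine($"detected {ids.Length} markers, rejected {rejected.Length} candidates");

// Overlay the detections for inspection.
CvAruco.DrawDetectedMarkers(image, corners, ids);
Cv2.ImWrite("markers_detected.png", image);
```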
Draw detected markers in image input/output image. It must have 1 or 3 channels. The number of channels is not altered. positions of marker corners on input image. For N detected markers, the dimensions of this array should be Nx4. The order of the corners should be clockwise. vector of identifiers for markers in markersCorners. Optional, if not provided, ids are not painted. color of marker borders. Rest of colors (text color and first corner color) are calculated based on this one to improve visualization. Draw a canonical marker image dictionary of markers indicating the type of markers identifier of the marker that will be returned. It has to be a valid id in the specified dictionary. size of the image in pixels output image with the marker width of the marker border. Returns one of the predefined dictionaries defined in PREDEFINED_DICTIONARY_NAME Parameters for the detectMarker process cv::Ptr<T> Releases managed resources minimum window size for adaptive thresholding before finding contours (default 3). adaptiveThreshWinSizeMax: maximum window size for adaptive thresholding before finding contours (default 23). increments from adaptiveThreshWinSizeMin to adaptiveThreshWinSizeMax during the thresholding (default 10). constant for adaptive thresholding before finding contours (default 7) determine minimum perimeter for marker contour to be detected. This is defined as a rate with respect to the maximum dimension of the input image (default 0.03). determine maximum perimeter for marker contour to be detected. This is defined as a rate with respect to the maximum dimension of the input image (default 4.0). minimum accuracy during the polygonal approximation process to determine which contours are squares. minimum distance between corners for detected markers relative to its perimeter (default 0.05) minimum distance of any corner to the image border for detected markers (in pixels) (default 3) minimum mean distance between two marker corners to be considered similar, so that the smaller one is removed. The rate is relative to the smaller perimeter of the two markers (default 0.05). corner refinement method. (CORNER_REFINE_NONE, no refinement. CORNER_REFINE_SUBPIX, do subpixel refinement. CORNER_REFINE_CONTOUR use contour-Points) window size for the corner refinement process (in pixels) (default 5). maximum number of iterations for stop criteria of the corner refinement process (default 30). minimum error for the stop criteria of the corner refinement process (default: 0.1) number of bits of the marker border, i.e. marker border width (default 1). number of bits (per dimension) for each cell of the marker when removing the perspective (default 8). width of the margin of pixels on each cell not considered for the determination of the cell bit. Represents the rate with respect to the total size of the cell, i.e. perspectiveRemovePixelPerCell (default 0.13) maximum number of accepted erroneous bits in the border (i.e. number of allowed white bits in the border). Represented as a rate with respect to the total number of bits per marker (default 0.35). minimum standard deviation in pixel values during the decoding step to apply Otsu thresholding (otherwise, all the bits are set to 0 or 1 depending on whether the mean is higher than 128 or not) (default 5.0) errorCorrectionRate: error correction rate with respect to the maximum error correction capability for each dictionary (default 0.6). Dictionary/Set of markers. It contains the inner codification cv::Ptr<T> Releases managed resources Marker code information Number of bits per dimension.
Maximum number of bits that can be corrected. corner refinement method default corners refine the corners using subpix refine the corners using the contour-points PredefinedDictionaryName Background Subtractor module. Takes a series of images and returns a sequence of mask (8UC1) images of the same size, where 255 indicates Foreground and 0 represents Background. cv::Ptr<T> Releases managed resources Gaussian Mixture-based Background/Foreground Segmentation Algorithm cv::Ptr<T> Releases managed resources Different flags for cvCalibrateCamera2 and cvStereoCalibrate The flag allows the function to optimize some or all of the intrinsic parameters, depending on the other flags, but the initial values are provided by the user fyk is optimized, but the ratio fxk/fyk is fixed. The principal points are fixed during the optimization. Tangential distortion coefficients are set to zeros and do not change during the optimization. fxk and fyk are fixed. The 0-th distortion coefficient (k1) is fixed The 1-st distortion coefficient (k2) is fixed The 4-th distortion coefficient (k3) is fixed Do not change the corresponding radial distortion coefficient during the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used, otherwise it is set to 0. Do not change the corresponding radial distortion coefficient during the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used, otherwise it is set to 0. Do not change the corresponding radial distortion coefficient during the optimization. If CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used, otherwise it is set to 0. Enable coefficients k4, k5 and k6. To provide backward compatibility, this extra flag should be explicitly specified to make the calibration function use the rational model and return 8 coefficients. If the flag is not set, the function will compute only 5 distortion coefficients. If it is set, camera_matrix1,2, as well as dist_coeffs1,2 are fixed, so that only extrinsic parameters are optimized. Enforces fx0=fx1 and fy0=fy1. CV_CALIB_ZERO_TANGENT_DIST - Tangential distortion coefficients for each camera are set to zeros and fixed there. for stereo rectification Various operation flags for cvFindChessboardCorners Use adaptive thresholding to convert the image to black-and-white, rather than a fixed threshold level (computed from the average image brightness). Normalize the image using cvNormalizeHist before applying fixed or adaptive thresholding. Use additional criteria (like contour area, perimeter, square-like shape) to filter out false quads that are extracted at the contour retrieval stage. Various operation flags for cvFindCirclesGrid uses symmetric pattern of circles. uses asymmetric pattern of circles. uses a special algorithm for grid detection. It is more robust to perspective distortions but much more sensitive to background clutter. Method for computing the fundamental matrix for 7-point algorithm. N == 7 for 8-point algorithm. N >= 8 [CV_FM_8POINT] for LMedS algorithm. N > 8 for RANSAC algorithm. N > 8 The method used to compute a homography matrix Regular method using all the point pairs Least-Median robust method RANSAC-based robust method RHO algorithm type of the robust estimation algorithm least-median of squares algorithm RANSAC algorithm RHO algorithm Method for solving a PnP problem: Iterative method is based on Levenberg-Marquardt optimization.
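A short background-subtraction loop for the MOG2 algorithm mentioned above; the camera index, window name, and key handling are illustrative only.

    using OpenCvSharp;

    // Gaussian Mixture-based background/foreground segmentation with default parameters
    // (history = 500, varThreshold = 16, shadow detection enabled).
    using var mog2 = BackgroundSubtractorMOG2.Create();
    using var capture = new VideoCapture(0);               // any video source works here
    using var frame = new Mat();
    using var foregroundMask = new Mat();                   // 8UC1: 255 = foreground, 0 = background

    while (capture.Read(frame) && !frame.Empty())
    {
        mog2.Apply(frame, foregroundMask);                  // default learning rate (-1)
        Cv2.ImShow("foreground mask", foregroundMask);
        if (Cv2.WaitKey(30) == 27) break;                   // Esc quits
    }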
In this case the function finds such a pose that minimizes reprojection error, that is the sum of squared distances between the observed projections imagePoints and the projected (using projectPoints()) objectPoints. The method has been introduced by F. Moreno-Noguer, V. Lepetit and P. Fua in the paper “EPnP: Efficient Perspective-n-Point Camera Pose Estimation”. The method is based on the paper by X.S. Gao, X.-R. Hou, J. Tang, H.-F. Chang, “Complete Solution Classification for the Perspective-Three-Point Problem”. In this case the function requires exactly four object and image points. Joel A. Hesch and Stergios I. Roumeliotis. "A Direct Least-Squares (DLS) Method for PnP" A. Penate-Sanchez, J. Andrade-Cetto, F. Moreno-Noguer. "Exhaustive Linearization for Robust Camera Pose and Focal Length Estimation" The operation flags for cvStereoRectify Default value (=0). the function can shift one of the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) in order to maximise the useful image area. the function makes the principal points of each camera have the same pixel coordinates in the rectified views. Semi-Global Stereo Matching constructor Releases managed resources The base class for stereo correspondence algorithms. constructor Computes disparity map for the specified stereo pair Left 8-bit single-channel image. Right image of the same size and the same type as the left one. Output disparity map. It has the same size as the input images. Some algorithms, like StereoBM or StereoSGBM, compute a 16-bit fixed-point disparity map (where each disparity value has 4 fractional bits), whereas other algorithms output a 32-bit floating-point disparity map. Semi-Global Stereo Matching constructor Releases managed resources Base class for high-level OpenCV algorithms Stores algorithm parameters in a file storage Reads algorithm parameters from a file storage Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read) Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage fs). Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Error Handler The numeric code for error status The source file name where error is encountered A description of the error The source file name where error is encountered The line number in the source where error is encountered Pointer to the user data. Ignored by the standard handlers cv::Algorithm parameter type The flag specifying the relation between the elements to be checked src1(I) "equal to" src2(I) src1(I) "greater than" src2(I) src1(I) "greater or equal" src2(I) src1(I) "less than" src2(I) src1(I) "less or equal" src2(I) src1(I) "not equal to" src2(I) Operation flags for Covariation scale * [vects[0]-avg,vects[1]-avg,...]^T * [vects[0]-avg,vects[1]-avg,...] that is, the covariation matrix is count×count. Such an unusual covariation matrix is used for fast PCA of a set of very large vectors (see, for example, Eigen Faces technique for face recognition). Eigenvalues of this "scrambled" matrix will match the eigenvalues of the true covariation matrix and the "true" eigenvectors can be easily calculated from the eigenvectors of the "scrambled" covariation matrix.
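A disparity-computation sketch for the stereo matchers described above; the SGBM parameter values and file names are illustrative, and the final division by 16 reflects the 4 fractional bits of the fixed-point disparity map.

    using OpenCvSharp;

    using var left  = Cv2.ImRead("left.png",  ImreadModes.Grayscale);
    using var right = Cv2.ImRead("right.png", ImreadModes.Grayscale);

    // minDisparity = 0, numDisparities = 64 (must be a multiple of 16), blockSize = 9 (odd).
    using var sgbm = StereoSGBM.Create(0, 64, 9);

    using var disparity16 = new Mat();
    sgbm.Compute(left, right, disparity16);                 // 16-bit fixed point, 4 fractional bits

    // Convert to real-valued disparities for further processing or visualization.
    using var disparity = new Mat();
    disparity16.ConvertTo(disparity, MatType.CV_32F, 1.0 / 16);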
scale * [vects[0]-avg,vects[1]-avg,...]*[vects[0]-avg,vects[1]-avg,...]^T that is, cov_mat will be a usual covariation matrix with the same linear size as the total number of elements in every input vector. One and only one of CV_COVAR_SCRAMBLED and CV_COVAR_NORMAL must be specified. If the flag is specified, the function does not calculate avg from the input vectors, but, instead, uses the passed avg vector. This is useful if avg has been already calculated somehow, or if the covariation matrix is calculated by parts - in this case, avg is not a mean vector of the input sub-set of vectors, but rather the mean vector of the whole set. If the flag is specified, the covariation matrix is scaled by the number of input vectors. Means that all the input vectors are stored as rows of a single matrix, vects[0].count is ignored in this case, and avg should be a single-row vector of an appropriate size. Means that all the input vectors are stored as columns of a single matrix, vects[0].count is ignored in this case, and avg should be a single-column vector of an appropriate size. Type of termination criteria the maximum number of iterations or elements to compute the maximum number of iterations or elements to compute the desired accuracy or change in parameters at which the iterative algorithm stops Transformation flags for cv::dct Do inverse 1D or 2D transform. (Forward and Inverse are mutually exclusive, of course.) Do forward or inverse transform of every individual row of the input matrix. This flag allows the user to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself), to do 3D and higher-dimensional transforms etc. [CV_DXT_ROWS] Inversion methods Gaussian elimination with the optimal pivot element chosen. singular value decomposition (SVD) method; the system can be over-defined and/or the matrix src1 can be singular eigenvalue decomposition; the matrix src1 must be symmetrical Cholesky LL^T factorization; the matrix src1 must be symmetrical and positive definite QR factorization; the system can be over-defined and/or the matrix src1 can be singular while all the previous flags are mutually exclusive, this flag can be used together with any of the previous Transformation flags for cvDFT Do inverse 1D or 2D transform. The result is not scaled. (Forward and Inverse are mutually exclusive, of course.) Scale the result: divide it by the number of array elements. Usually, it is combined with Inverse. Do forward or inverse transform of every individual row of the input matrix. This flag allows the user to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself), to do 3D and higher-dimensional transforms etc. performs a forward transformation of 1D or 2D real array; the result, though being a complex array, has complex-conjugate symmetry (*CCS*, see the function description below for details), and such an array can be packed into a real array of the same size as input, which is the fastest option and which is what the function does by default; however, you may wish to get a full complex array (for simpler spectrum analysis, and so on) - pass the flag to enable the function to produce a full-size complex output array.
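A small sketch of the inversion-method flags in use with a linear solve; DecompTypes is the OpenCvSharp spelling of these flags, and the 2x2 system is purely illustrative.

    using OpenCvSharp;

    // Solve A * x = b with the SVD-based method, which also tolerates
    // over-determined and/or singular systems.
    using var a = new Mat(2, 2, MatType.CV_64FC1);
    a.Set<double>(0, 0, 2); a.Set<double>(0, 1, 1);
    a.Set<double>(1, 0, 1); a.Set<double>(1, 1, 3);

    using var b = new Mat(2, 1, MatType.CV_64FC1);
    b.Set<double>(0, 0, 3); b.Set<double>(1, 0, 5);

    using var x = new Mat();
    Cv2.Solve(a, b, x, DecompTypes.SVD);                    // expected solution: x ≈ (0.8, 1.4)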
performs an inverse transformation of a 1D or 2D complex array; the result is normally a complex array of the same size, however, if the input array has conjugate-complex symmetry (for example, it is a result of forward transformation with DFT_COMPLEX_OUTPUT flag), the output is a real array; while the function itself does not check whether the input is symmetrical or not, you can pass the flag and then the function will assume the symmetry and produce the real output array (note that when the input is packed into a real array and inverse transformation is executed, the function treats the input as a packed complex-conjugate symmetrical array, and the output will also be a real array). Distribution type for cvRandArr, etc. Uniform distribution Normal or Gaussian distribution Error status codes everything is ok [CV_StsOk] pseudo error for back trace [CV_StsBackTrace] unknown/unspecified error [CV_StsError] internal error (bad state) [CV_StsInternal] insufficient memory [CV_StsNoMem] function arg/param is bad [CV_StsBadArg] unsupported function [CV_StsBadFunc] iter. didn't converge [CV_StsNoConv] tracing [CV_StsAutoTrace] image header is NULL [CV_HeaderIsNull] image size is invalid [CV_BadImageSize] offset is invalid [CV_BadOffset] [CV_BadOffset] [CV_BadStep] [CV_BadModelOrChSeq] [CV_BadNumChannels] [CV_BadNumChannel1U] [CV_BadDepth] [CV_BadAlphaChannel] [CV_BadOrder] [CV_BadOrigin] [CV_BadAlign] [CV_BadCallBack] [CV_BadTileSize] [CV_BadCOI] [CV_BadROISize] [CV_MaskIsTiled] null pointer [CV_StsNullPtr] incorrect vector length [CV_StsVecLengthErr] incorr. filter structure content [CV_StsFilterStructContentErr] incorr. transform kernel content [CV_StsKernelStructContentErr] incorrect filter offset value [CV_StsFilterOffsetErr] the input/output structure size is incorrect [CV_StsBadSize] division by zero [CV_StsDivByZero] in-place operation is not supported [CV_StsInplaceNotSupported] request can't be completed [CV_StsObjectNotFound] formats of input/output arrays differ [CV_StsUnmatchedFormats] flag is wrong or not supported [CV_StsBadFlag] bad CvPoint [CV_StsBadPoint] bad format of mask (neither 8uC1 nor 8sC1) [CV_StsBadMask] sizes of input/output structures do not match [CV_StsUnmatchedSizes] the data format/type is not supported by the function [CV_StsUnsupportedFormat] some of the parameters are out of range [CV_StsOutOfRange] invalid syntax/structure of the parsed file [CV_StsParseError] the requested function/feature is not implemented [CV_StsNotImplemented] an allocated block has been corrupted [CV_StsBadMemBlock] assertion failed Output string format of Mat.Dump() Default format. [1, 2, 3, 4, 5, 6; \n 7, 8, 9, ... ] CSV format. 1, 2, 3, 4, 5, 6\n 7, 8, 9, ... Python format. [[[1, 2, 3], [4, 5, 6]], \n [[7, 8, 9], ... ] NumPy format. array([[[1, 2, 3], [4, 5, 6]], \n [[7, 8, 9], .... ]]], type='uint8'); C language format. {1, 2, 3, 4, 5, 6, \n 7, 8, 9, ...}; The operation flags for cv::GEMM Transpose src1 Transpose src2 Transpose src3 Font name identifier. Only a subset of Hershey fonts (http://sources.isc.org/utils/misc/hershey-font.txt) are supported now. normal size sans-serif font small size sans-serif font normal size sans-serif font (more complex than HERSHEY_SIMPLEX) normal size serif font normal size serif font (more complex than HERSHEY_COMPLEX) smaller version of HERSHEY_COMPLEX hand-writing style font more complex variant of HERSHEY_SCRIPT_SIMPLEX flag for italic font Miscellaneous flags for cv::kmeans Select random initial centers in each attempt.
Use kmeans++ center initialization by Arthur and Vassilvitskii [Arthur2007]. During the first (and possibly the only) attempt, use the user-supplied labels instead of computing them from the initial centers. For the second and further attempts, use the random or semi-random centers. Use one of the KMEANS_*_CENTERS flags to specify the exact method. diagonal type a diagonal from the upper half [< 0] Main diagonal [= 0] a diagonal from the lower half [> 0] Type of norm The L1-norm (sum of absolute values) of the array is normalized. The (Euclidean) L2-norm of the array is normalized. The array values are scaled and shifted to the specified range. The dimension index along which the matrix is reduced. The matrix is reduced to a single row. [= 0] The matrix is reduced to a single column. [= 1] The dimension is chosen automatically by analysing the dst size. [= -1] The reduction operations for cvReduce The output is the sum of all the matrix rows/columns. The output is the mean vector of all the matrix rows/columns. The output is the maximum (column/row-wise) of all the matrix rows/columns. The output is the minimum (column/row-wise) of all the matrix rows/columns. return codes for cv::solveLP() function problem is unbounded (target function can achieve arbitrarily high values) problem is unfeasible (there are no points that satisfy all the constraints imposed) there is only one maximum for target function there are multiple maxima for target function - an arbitrary one is returned Signals an error and raises the exception. each matrix row is sorted independently each matrix column is sorted independently; this flag and the previous one are mutually exclusive. each matrix row is sorted in the ascending order. each matrix row is sorted in the descending order; this flag and the previous one are also mutually exclusive. File Storage Node class The default constructor Initializes from cv::FileNode* Releases unmanaged resources Returns the node content as an integer. If the node stores a floating-point number, it is rounded. Returns the node content as float Returns the node content as double Returns the node content as text string Returns the node content as OpenCV Mat returns element of a mapping node returns element of a sequence node Returns true if the node is empty Returns true if the node is a "none" object Returns true if the node is a sequence Returns true if the node is a mapping Returns true if the node is an integer Returns true if the node is a floating-point number Returns true if the node is a text string Returns true if the node has a name Returns the node name or an empty string if the node is nameless Returns the number of elements in the node, if it is a sequence or mapping, or 1 otherwise. Returns type of the node. Type of the node. returns iterator pointing to the first node element returns iterator pointing to the element following the last node element Reads node elements to the buffer with the specified format Writes a comment. The function writes a comment into file storage. The comments are skipped when the storage is read. The written comment, single-line or multi-line If true, the function tries to put the comment at the end of the current line. Else if the comment is multi-line, or if it does not fit at the end of the current line, the comment starts a new line. type of the file storage node empty node an integer floating-point number synonym for REAL text string in UTF-8 encoding synonym for STR sequence mapping compact representation of a sequence or mapping.
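A sketch of the norm, reduction, and sort flags from this block; NormTypes, ReduceDimension, ReduceTypes, and SortFlags are the wrapper's enum spellings and are assumed here, as is the uniform-random fill used to get some data.

    using OpenCvSharp;

    using var src = new Mat(4, 4, MatType.CV_32FC1);
    Cv2.Randu(src, new Scalar(0), new Scalar(100));         // fill with uniform random values

    // Scale values into [0, 1] (range normalization).
    using var normalized = new Mat();
    Cv2.Normalize(src, normalized, 0, 1, NormTypes.MinMax);

    // Collapse the matrix to a single row holding per-column means.
    using var columnMeans = new Mat();
    Cv2.Reduce(src, columnMeans, ReduceDimension.Row, ReduceTypes.Avg, -1);

    // Sort every row independently in ascending order.
    using var sorted = new Mat();
    Cv2.Sort(src, sorted, SortFlags.EveryRow | SortFlags.Ascending);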
Used only by YAML writer if set, means that all the collection elements are numbers of the same type (real's or int's). UNIFORM is used only when reading FileStorage; FLOW is used only when writing. So they share the same bit empty structure (sequence or mapping) the node has a name (i.e. it is an element of a mapping) File Storage Node class The default constructor Initializes from cv::FileNode* Releases unmanaged resources Reads node elements to the buffer with the specified format. Usually it is more convenient to use operator `>>` instead of this method. Specification of each array element. See the format specification. Pointer to the destination array. Number of elements to read. If it is greater than number of remaining elements then all of them will be read. *iterator iterator++ iterator += ofs Reads node elements to the buffer with the specified format. Usually it is more convenient to use operator `>>` instead of this method. Specification of each array element. See the format specification. Pointer to the destination array. Number of elements to read. If it is greater than number of remaining elements then all of them will be read. XML/YAML File Storage Class. Default constructor. You should call FileStorage::open() after initialization. The full constructor Name of the file to open or the text string to read the data from. Extension of the file (.xml or .yml/.yaml) determines its format (XML or YAML respectively). Also you can append .gz to work with compressed files, for example myHugeMatrix.xml.gz. If both FileStorage::WRITE and FileStorage::MEMORY flags are specified, source is used just to specify the output file format (e.g. mydata.xml, .yml etc.). Encoding of the file. Note that UTF-16 XML encoding is not supported currently and you should use 8-bit encoding instead of it. Releases unmanaged resources Returns the specified element of the top-level mapping the currently written element the writer state operator that performs PCA. The previously stored data, if any, is released Encoding of the file. Note that UTF-16 XML encoding is not supported currently and you should use 8-bit encoding instead of it. Returns true if the object is associated with currently opened file. Closes the file and releases all the memory buffers Closes the file, releases all the memory buffers and returns the text string Returns the first element of the top-level mapping Returns the top-level mapping. YAML supports multiple streams Writes one or more numbers of the specified format to the currently written structure Returns the normalized object name for the specified file name Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage.
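A write-then-read sketch for the FileStorage class documented above. The nested mode enum is written here as FileStorage.Modes and the FileNode accessors as ReadMat()/ReadInt(); both spellings follow recent OpenCvSharp releases and should be checked against the version in use.

    using OpenCvSharp;

    // Write a matrix and a scalar to a YAML file.
    using (var fs = new FileStorage("params.yml", FileStorage.Modes.Write))
    {
        using var cameraMatrix = Mat.Eye(3, 3, MatType.CV_64FC1).ToMat();
        fs.Write("cameraMatrix", cameraMatrix);
        fs.Write("iterations", 100);
    }

    // Read the values back; the string indexer returns a FileNode.
    using (var fs = new FileStorage("params.yml", FileStorage.Modes.Read))
    {
        using Mat cameraMatrix = fs["cameraMatrix"].ReadMat();
        int iterations = fs["iterations"].ReadInt();
    }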
Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. Writes data to a file storage. File storage mode The storage is open for reading The storage is open for writing The storage is open for appending flag, read data from source or write data to the internal buffer (which is returned by FileStorage::release) mask for format flags flag, auto format flag, XML format flag, YAML format Proxy datatype for passing Mat's and vector<>'s as input parameters Releases unmanaged resources Creates a proxy class of the specified Mat Creates a proxy class of the specified MatExpr Creates a proxy class of the specified Scalar Creates a proxy class of the specified double Creates a proxy class of the specified GpuMat Creates a proxy class of the specified array of Mat Creates a proxy class of the specified list Array object Creates a proxy class of the specified list Array object Matrix depth and channels for converting array to cv::Mat Creates a proxy class of the specified list Array object Creates a proxy class of the specified list Array object Matrix depth and channels for converting array to cv::Mat Creates a proxy class of the specified list Array object Creates a proxy class of the specified list Array object Matrix depth and channels for converting array to cv::Mat Proxy datatype for passing Mat's and vector<>'s as input parameters. Synonym for OutputArray. Linear Discriminant Analysis constructor Initializes and performs a Discriminant Analysis with Fisher's Optimization Criterion on given data in src and corresponding labels in labels.If 0 (or less) number of components are given, they are automatically determined for given data in computation. Releases unmanaged resources Returns the eigenvectors of this LDA. Returns the eigenvalues of this LDA. Serializes this object to a given filename. Deserializes this object from a given filename. Serializes this object to a given cv::FileStorage. Deserializes this object from a given cv::FileStorage. Compute the discriminants for data in src (row aligned) and labels. Projects samples into the LDA subspace. src may be one or more row aligned samples. Reconstructs projections from the LDA subspace. src may be one or more row aligned projections. Matrix expression Releases unmanaged resources Computes absolute value of each matrix element Creates a matrix header for the specified matrix row/column. Creates a matrix header for the specified row/column span. Creates a matrix header for the specified row/column span. Extracts a rectangular submatrix. Array of selected ranges along each array dimension. Creates a matrix header for the specified matrix row/column. Creates a matrix header for the specified row/column span. Creates a matrix header for the specified row/column span. Sets a matrix header for the specified matrix row/column. Sets a matrix header for the specified matrix row/column span. Sets a matrix header for the specified matrix row/column span. Creates a matrix header for the specified matrix row/column. 
Creates a matrix header for the specified row/column span. Creates a matrix header for the specified row/column span. Creates a matrix header for the specified matrix row/column. Creates a matrix header for the specified row/column span. Creates a matrix header for the specified row/column span. Sets a matrix header for the specified matrix row/column. Sets a matrix header for the specified matrix row/column span. Sets a matrix header for the specified matrix row/column span. Creates/Sets a matrix header for the specified matrix row/column. Creates/Sets a matrix header for the specified row/column span. Creates/Sets a matrix header for the specified row/column span. Creates a matrix header for the specified matrix row/column. Creates a matrix header for the specified row/column span. Creates a matrix header for the specified row/column span. Creates/Sets a matrix header for the specified matrix row/column. Creates/Sets a matrix header for the specified row/column span. Creates/Sets a matrix header for the specified row/column span. OpenCV C++ n-dimensional dense array class (cv::Mat) Creates from native cv::Mat* pointer Creates empty Mat Loads an image from a file. (cv::imread) Name of file to be loaded. Specifies color type of the loaded image constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType.CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows). In the Size() constructor, the number of rows and the number of columns go in the reverse order. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType.CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. constructs 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType.CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use the SetTo(Scalar s) method. constructs 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows). In the Size() constructor, the number of rows and the number of columns go in the reverse order. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or CV_8UC(n), ..., CV_64FC(n) to create multi-channel (up to CV_CN_MAX channels) matrices. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use the SetTo(Scalar s) method. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m. If you want to have an independent copy of the sub-array, use Mat::clone(). Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take.
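A few constructor calls matching the descriptions above; sizes, values, and the file name are arbitrary, and ImreadModes is the flag enum name used by recent OpenCvSharp releases.

    using OpenCvSharp;

    // 2D matrix of a given size and type, uninitialized.
    using var m1 = new Mat(480, 640, MatType.CV_8UC3);

    // Same, but every element initialized to a Scalar (here: red in BGR order).
    using var m2 = new Mat(new Size(640, 480), MatType.CV_8UC3, new Scalar(0, 0, 255));

    // Header for a sub-region of a bigger matrix; no pixel data is copied.
    using var roi = new Mat(m2, new Rect(10, 10, 100, 100));

    // Loading an image directly through the constructor (cv::imread underneath).
    using var img = new Mat("lenna.png", ImreadModes.Color);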
Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. 
This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Releases the resources Releases unmanaged resources Creates the Mat instance from System.IO.Stream Creates the Mat instance from image data (using cv::decode) Creates the Mat instance from image data (using cv::decode) sizeof(cv::Mat) Extracts a diagonal from a matrix, or creates a diagonal matrix. Returns an identity matrix of the specified size and type. Alternative to the matrix size specification Size(cols, rows) . Created matrix type. Returns an identity matrix of the specified size and type. Number of rows. Number of columns. Created matrix type. Returns an array of all 1’s of the specified size and type. Number of rows. Number of columns. Created matrix type. Returns an array of all 1’s of the specified size and type. Alternative to the matrix size specification Size(cols, rows) . Created matrix type. Returns an array of all 1’s of the specified size and type. Created matrix type. Array of integers specifying the array shape. Returns a zero array of the specified size and type. Number of rows. Number of columns. Created matrix type. Returns a zero array of the specified size and type. Alternative to the matrix size specification Size(cols, rows) . Created matrix type. Returns a zero array of the specified size and type. Created matrix type. operator < operator < operator <= operator <= operator == operator == operator != operator != operator > operator > operator >= operator >= Extracts a rectangular submatrix. Start row of the extracted submatrix. The upper boundary is not included. End row of the extracted submatrix. The upper boundary is not included. Start column of the extracted submatrix. The upper boundary is not included. 
End column of the extracted submatrix. The upper boundary is not included. Extracts a rectangular submatrix. Start and end row of the extracted submatrix. The upper boundary is not included. To select all the rows, use Range.All(). Start and end column of the extracted submatrix. The upper boundary is not included. To select all the columns, use Range.All(). Extracts a rectangular submatrix. Extracted submatrix specified as a rectangle. Extracts a rectangular submatrix. Array of selected ranges along each array dimension. Extracts a rectangular submatrix. Start row of the extracted submatrix. The upper boundary is not included. End row of the extracted submatrix. The upper boundary is not included. Start column of the extracted submatrix. The upper boundary is not included. End column of the extracted submatrix. The upper boundary is not included. Extracts a rectangular submatrix. Start and end row of the extracted submatrix. The upper boundary is not included. To select all the rows, use Range.All(). Start and end column of the extracted submatrix. The upper boundary is not included. To select all the columns, use Range.All(). Extracts a rectangular submatrix. Extracted submatrix specified as a rectangle. Extracts a rectangular submatrix. Array of selected ranges along each array dimension. Indexer to access partial Mat as MatExpr Mat column's indexer object Creates a matrix header for the specified matrix column. A 0-based column index. Creates a matrix header for the specified column span. An inclusive 0-based start index of the column span. An exclusive 0-based ending index of the column span. Indexer to access Mat column as MatExpr Mat row's indexer object Creates a matrix header for the specified matrix row. [Mat::row] A 0-based row index. Creates a matrix header for the specified row span. (Mat::rowRange) An inclusive 0-based start index of the row span. An exclusive 0-based ending index of the row span. Indexer to access Mat row as MatExpr Adjusts a submatrix size and position within the parent matrix. Shift of the top submatrix boundary upwards. Shift of the bottom submatrix boundary downwards. Shift of the left submatrix boundary to the left. Shift of the right submatrix boundary to the right. Provides a functional form of convertTo. Destination array. Desired destination array depth (or -1 if it should be the same as the source type). Provides a functional form of convertTo. Destination array. Returns the number of matrix channels. Creates a full copy of the matrix. Returns the partial Mat of the specified Mat the number of columns or -1 when the array has more than 2 dimensions the number of columns or -1 when the array has more than 2 dimensions the array dimensionality, >= 2 Converts an array to another data type with optional scaling. output matrix; if it does not have a proper size or type before the operation, it is reallocated. desired output matrix type or, rather, the depth since the number of channels are the same as the input has; if rtype is negative, the output matrix will have the same type as the input. optional scale factor. optional delta added to the scaled values. Copies the matrix to another one. Destination matrix. If it does not have a proper size or type before the operation, it is reallocated. Copies the matrix to another one. Destination matrix. If it does not have a proper size or type before the operation, it is reallocated. Operation mask. Its non-zero elements indicate which matrix elements need to be copied. Allocates new array data if needed. 
New number of rows. New number of columns. New matrix type. Allocates new array data if needed. Alternative new matrix size specification: Size(cols, rows) New matrix type. Allocates new array data if needed. Array of integers specifying a new array shape. New matrix type. Computes a cross-product of two 3-element vectors. Another cross-product operand. pointer to the data unsafe pointer to the data The pointer from which a relative sub-array position within the main container array can be computed using locateROI() The pointer from which a relative sub-array position within the main container array can be computed using locateROI() The pointer from which a relative sub-array position within the main container array can be computed using locateROI() Returns the depth of a matrix element. Single-column matrix that forms a diagonal matrix or index of the diagonal, with the following values: Single-column matrix that forms a diagonal matrix or index of the diagonal, with the following values: Computes a dot-product of two vectors. another dot-product operand. Returns the matrix element size in bytes. Returns the size of each matrix element channel in bytes. Returns true if the array has no elements. Inverts a matrix. Matrix inversion method Reports whether the matrix is continuous or not. Returns whether this matrix is a part of another matrix or not. Locates the matrix header within a parent matrix. Output parameter that contains the size of the whole matrix containing *this as a part. Output parameter that contains an offset of *this inside the whole matrix. Performs an element-wise multiplication or division of the two matrices. Changes the shape and/or the number of channels of a 2D matrix without copying the data. New number of channels. If the parameter is 0, the number of channels remains the same. New number of rows. If the parameter is 0, the number of rows remains the same. Changes the shape and/or the number of channels of a 2D matrix without copying the data. New number of channels. If the parameter is 0, the number of channels remains the same. New number of rows. If the parameter is 0, the number of rows remains the same. the number of rows or -1 when the array has more than 2 dimensions the number of rows or -1 when the array has more than 2 dimensions Sets all or some of the array elements to the specified value. Sets all or some of the array elements to the specified value. Returns a matrix size. Returns a matrix size. Returns a normalized step. Returns a normalized step. Transposes a matrix. Returns the total number of array elements. Returns the type of a matrix element. Returns a string that represents this Mat. Returns a string that represents each element value of Mat. This method corresponds to std::ostream << Mat Makes a Mat that has the same size, depth and channels as this image Returns a pointer to the specified matrix row. Index along the dimension 0 Returns a pointer to the specified matrix element. Index along the dimension 0 Index along the dimension 1 Returns a pointer to the specified matrix element. Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 Returns a pointer to the specified matrix element. Array of Mat::dims indices. Mat Indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element.
3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Returns a value to the specified array element. Index along the dimension 0 A value to the specified array element. Returns a value to the specified array element. Index along the dimension 0 Index along the dimension 1 A value to the specified array element. Returns a value to the specified array element. Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. Returns a value to the specified array element. Array of Mat::dims indices. A value to the specified array element. Returns a value to the specified array element. Index along the dimension 0 A value to the specified array element. Returns a value to the specified array element. Index along the dimension 0 Index along the dimension 1 A value to the specified array element. Returns a value to the specified array element. Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. Returns a value to the specified array element. Array of Mat::dims indices. A value to the specified array element. Set a value to the specified array element. Index along the dimension 0 Set a value to the specified array element. Index along the dimension 0 Index along the dimension 1 Set a value to the specified array element. Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 Set a value to the specified array element. Array of Mat::dims indices. Mat column's indexer object Creates a matrix header for the specified matrix column. A 0-based column index. Creates a matrix header for the specified column span. An inclusive 0-based start index of the column span. An exclusive 0-based ending index of the column span. Indexer to access Mat column as Mat Mat row's indexer object Creates a matrix header for the specified matrix row. A 0-based row index. Creates a matrix header for the specified row span. An inclusive 0-based start index of the row span. An exclusive 0-based ending index of the row span. 
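A quick tour of several Mat members listed above (Clone, ConvertTo, Reshape, SubMat, the typed accessors, and the generic indexer); the concrete values are only illustrative.

    using OpenCvSharp;

    using var src = new Mat(4, 4, MatType.CV_8UC1, new Scalar(10));

    using var copy = src.Clone();                           // deep copy
    copy.SetTo(new Scalar(0));                              // does not affect 'src'

    using var asFloat = new Mat();
    src.ConvertTo(asFloat, MatType.CV_32FC1, 1.0 / 255);    // scaled type conversion

    using var asRow = src.Reshape(0, 1);                    // 1 x 16 header, no data copied
    using var roi = src.SubMat(1, 3, 1, 3);                 // 2 x 2 header into 'src'

    // Element access: At<T>/Set<T> for single elements, a generic indexer for loops.
    byte value = src.At<byte>(0, 0);
    src.Set<byte>(0, 0, 255);

    var indexer = src.GetGenericIndexer<byte>();
    for (int y = 0; y < src.Rows; y++)
        for (int x = 0; x < src.Cols; x++)
            indexer[y, x] = (byte)(x + y);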
Indexer to access Mat row as Mat Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Get the data of this matrix as array Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix Set the specified array data to this matrix reserves enough space to fit sz hyper-planes resizes matrix to the specified number of hyper-planes resizes matrix to the specified number of hyper-planes; initializes the newly added elements Adds elements to the bottom of the matrix. (Mat.push_back) Added line(s) Adds elements to the bottom of the matrix. 
(Mat.push_back) Added line(s) removes several hyper-planes from bottom of the matrix (Mat.pop_back) Encodes an image into a memory buffer. Encodes an image into a memory buffer. Format-specific parameters. Encodes an image into a memory buffer. Encodes an image into a memory buffer. Format-specific parameters. Converts Mat to System.IO.MemoryStream Writes image data encoded from this Mat to System.IO.Stream Creates type-specific Mat instance from this. Computes absolute value of each matrix element Scales, computes absolute values and converts the result to 8-bit. The optional scale factor. [By default this is 1] The optional delta added to the scaled values. [By default this is 0] transforms array of numbers using a lookup table: dst(i)=lut(src(i)) Look-up table of 256 elements. In the case of multi-channel source array, the table should either have a single channel (in this case the same table is used for all channels) or the same number of channels as in the source array transforms array of numbers using a lookup table: dst(i)=lut(src(i)) Look-up table of 256 elements. In the case of multi-channel source array, the table should either have a single channel (in this case the same table is used for all channels) or the same number of channels as in the source array computes sum of array elements computes the number of nonzero array elements number of non-zero elements in mtx returns the list of locations of non-zero pixels computes mean value of selected array elements The optional operation mask computes mean value and standard deviation of all or selected array elements The output parameter: computed mean value The output parameter: computed standard deviation The optional operation mask computes norm of the selected array part Type of the norm The optional operation mask scales and shifts array elements so that either the specified norm (alpha) or the minimum (alpha) and maximum (beta) array values get the specified values The norm value to normalize to or the lower range boundary in the case of range normalization The upper range boundary in the case of range normalization; not used for norm normalization The normalization type When the parameter is negative, the destination array will have the same type as src, otherwise it will have the same number of channels as src and the depth =CV_MAT_DEPTH(rtype) The optional operation mask finds global minimum and maximum array elements and returns their values and their locations Pointer to returned minimum value Pointer to returned maximum value finds global minimum and maximum array elements and returns their values and their locations Pointer to returned minimum location Pointer to returned maximum location finds global minimum and maximum array elements and returns their values and their locations Pointer to returned minimum value Pointer to returned maximum value Pointer to returned minimum location Pointer to returned maximum location The optional mask used to select a sub-array finds global minimum and maximum array elements and returns their values and their locations Pointer to returned minimum value Pointer to returned maximum value finds global minimum and maximum array elements and returns their values and their locations finds global minimum and maximum array elements and returns their values and their locations Pointer to returned minimum value Pointer to returned maximum value transforms 2D matrix to 1D row or column vector by taking sum, minimum, maximum or mean value over all the rows The dimension index along which the 
matrix is reduced. 0 means that the matrix is reduced to a single row and 1 means that the matrix is reduced to a single column When it is negative, the destination vector will have the same type as the source matrix, otherwise, its type will be CV_MAKE_TYPE(CV_MAT_DEPTH(dtype), mtx.channels()) Copies each plane of a multi-channel array to a dedicated array The number of arrays must match mtx.channels() . The arrays themselves will be reallocated if needed extracts a single channel from src (coi is 0-based index) inserts a single channel to dst (coi is 0-based index) reverses the order of the rows, columns or both in a matrix Specifies how to flip the array: 0 means flipping around the x-axis, positive (e.g., 1) means flipping around y-axis, and negative (e.g., -1) means flipping around both axes. See also the discussion below for the formulas. The destination array; will have the same size and same type as src replicates the input matrix the specified number of times in the horizontal and/or vertical direction How many times the src is repeated along the vertical axis How many times the src is repeated along the horizontal axis set mask elements for those array elements which are within the element-specific bounding box (dst = lowerb <= src && src < upperb) The inclusive lower boundary array of the same size and type as src The exclusive upper boundary array of the same size and type as src The destination array, will have the same size as src and CV_8U type set mask elements for those array elements which are within the element-specific bounding box (dst = lowerb <= src && src < upperb) The inclusive lower boundary array of the same size and type as src The exclusive upper boundary array of the same size and type as src The destination array, will have the same size as src and CV_8U type computes square root of each matrix element (dst = src**0.5) The destination array; will have the same size and the same type as src raises the input matrix elements to the specified power (b = a**power) The exponent of power The destination array; will have the same size and the same type as src computes exponent of each matrix element (dst = e**src) The destination array; will have the same size and same type as src computes natural logarithm of absolute value of each matrix element: dst = log(abs(src)) The destination array; will have the same size and same type as src checks that each matrix element is within the specified range. The flag indicating whether the functions quietly return false when the array elements are out of range, or they throw an exception. checks that each matrix element is within the specified range. The flag indicating whether the functions quietly return false when the array elements are out of range, or they throw an exception. The optional output parameter, where the position of the first outlier is stored. The inclusive lower boundary of valid values range The exclusive upper boundary of valid values range converts NaN's to the given number multiplies matrix by its transposition from the left or from the right Specifies the multiplication ordering; see the description below The optional delta matrix, subtracted from src before the multiplication. When the matrix is empty ( delta=Mat() ), it’s assumed to be zero, i.e. nothing is subtracted, otherwise if it has the same size as src, then it’s simply subtracted, otherwise it is "repeated" to cover the full src and then subtracted. 
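A sketch of the statistics and per-element operations described above (sum, mean/standard deviation, min/max with locations, range mask, flip); the Cv2 static forms are used because their signatures are the most stable across wrapper versions, and the input file name is a placeholder.

    using OpenCvSharp;

    using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);

    Scalar total = Cv2.Sum(gray);                           // per-channel sum
    int nonZero = Cv2.CountNonZero(gray);
    Cv2.MeanStdDev(gray, out Scalar mean, out Scalar stdDev);
    Cv2.MinMaxLoc(gray, out double minVal, out double maxVal,
                  out Point minLoc, out Point maxLoc);

    // Mask that is 255 where the pixel value lies inside the given bounds.
    using var mask = new Mat();
    Cv2.InRange(gray, new Scalar(50), new Scalar(200), mask);

    // Mirror around the vertical axis (y-axis).
    using var mirrored = new Mat();
    Cv2.Flip(gray, mirrored, FlipMode.Y);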
Type of the delta matrix, when it's not empty, must be the same as the type of created destination matrix, see the rtype description The optional scale factor for the matrix product When it’s negative, the destination matrix will have the same type as src . Otherwise, it will have type=CV_MAT_DEPTH(rtype), which should be either CV_32F or CV_64F transposes the matrix The destination array of the same type as src performs affine transformation of each element of multi-channel input matrix The transformation matrix The destination array; will have the same size and depth as src and as many channels as mtx.rows performs perspective transformation of each element of multi-channel input matrix 3x3 or 4x4 transformation matrix The destination array; it will have the same size and same type as src extends the symmetrical matrix from the lower half or from the upper half If true, the lower half is copied to the upper half, otherwise the upper half is copied to the lower half initializes scaled identity matrix (not necessarily square). The value to assign to the diagonal elements computes determinant of a square matrix. The input matrix must have CV_32FC1 or CV_64FC1 type and square size. determinant of the specified matrix. computes trace of a matrix sorts independently each matrix row or each matrix column The operation flags, a combination of the SortFlag values The destination array of the same size and the same type as src sorts independently each matrix row or each matrix column The operation flags, a combination of SortFlag values The destination integer array of the same size as src Performs a forward Discrete Fourier transform of 1D or 2D floating-point array. Transformation flags, a combination of the DftFlag2 values When the parameter != 0, the function assumes that only the first nonzeroRows rows of the input array ( DFT_INVERSE is not set) or only the first nonzeroRows of the output array ( DFT_INVERSE is set) contain non-zeros, thus the function can handle the rest of the rows more efficiently and thus save some time. This technique is very useful for computing array cross-correlation or convolution using DFT The destination array, which size and type depends on the flags Performs an inverse Discrete Fourier transform of 1D or 2D floating-point array. Transformation flags, a combination of the DftFlag2 values When the parameter != 0, the function assumes that only the first nonzeroRows rows of the input array ( DFT_INVERSE is not set) or only the first nonzeroRows of the output array ( DFT_INVERSE is set) contain non-zeros, thus the function can handle the rest of the rows more efficiently and thus save some time. 
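A forward/inverse DFT round trip using the flags described above; the grayscale input file is a placeholder.

    using OpenCvSharp;

    using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);
    using var floatImg = new Mat();
    gray.ConvertTo(floatImg, MatType.CV_32FC1);

    // Forward transform producing a full complex spectrum.
    using var spectrum = new Mat();
    Cv2.Dft(floatImg, spectrum, DftFlags.ComplexOutput);

    // Inverse transform back to a real, scaled image.
    using var restored = new Mat();
    Cv2.Dft(spectrum, restored, DftFlags.Inverse | DftFlags.RealOutput | DftFlags.Scale);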
This technique is very useful for computing array cross-correlation or convolution using DFT The destination array, which size and type depends on the flags performs forward or inverse 1D or 2D Discrete Cosine Transformation Transformation flags, a combination of DctFlag2 values The destination array; will have the same size and same type as src performs inverse 1D or 2D Discrete Cosine Transformation Transformation flags, a combination of DctFlag2 values The destination array; will have the same size and same type as src fills array with uniformly-distributed random numbers from the range [low, high) The inclusive lower boundary of the generated random numbers The exclusive upper boundary of the generated random numbers fills array with uniformly-distributed random numbers from the range [low, high) The inclusive lower boundary of the generated random numbers The exclusive upper boundary of the generated random numbers fills array with normally-distributed random numbers with the specified mean and the standard deviation The mean value (expectation) of the generated random numbers The standard deviation of the generated random numbers fills array with normally-distributed random numbers with the specified mean and the standard deviation The mean value (expectation) of the generated random numbers The standard deviation of the generated random numbers shuffles the input array elements The scale factor that determines the number of random swap operations. The optional random number generator used for shuffling. If it is null, theRng() is used instead. The input/output numerical 1D array Draws a line segment connecting two points First point's x-coordinate of the line segment. First point's y-coordinate of the line segment. Second point's x-coordinate of the line segment. Second point's y-coordinate of the line segment. Line color. Line thickness. [By default this is 1] Type of the line. [By default this is LineType.Link8] Number of fractional bits in the point coordinates. [By default this is 0] Draws a line segment connecting two points First point of the line segment. Second point of the line segment. Line color. Line thickness. [By default this is 1] Type of the line. [By default this is LineType.Link8] Number of fractional bits in the point coordinates. [By default this is 0] Draws simple, thick or filled rectangle One of the rectangle vertices. Opposite rectangle vertex. Line color (RGB) or brightness (grayscale image). Thickness of lines that make up the rectangle. Negative values make the function to draw a filled rectangle. [By default this is 1] Type of the line, see cvLine description. [By default this is LineType.Link8] Number of fractional bits in the point coordinates. [By default this is 0] Draws simple, thick or filled rectangle Rectangle. Line color (RGB) or brightness (grayscale image). Thickness of lines that make up the rectangle. Negative values make the function to draw a filled rectangle. [By default this is 1] Type of the line, see cvLine description. [By default this is LineType.Link8] Number of fractional bits in the point coordinates. [By default this is 0] Draws a circle X-coordinate of the center of the circle. Y-coordinate of the center of the circle. Radius of the circle. Circle color. Thickness of the circle outline if positive, otherwise indicates that a filled circle has to be drawn. [By default this is 1] Type of the circle boundary. [By default this is LineType.Link8] Number of fractional bits in the center coordinates and radius value. 
[By default this is 0] Draws a circle Center of the circle. Radius of the circle. Circle color. Thickness of the circle outline if positive, otherwise indicates that a filled circle has to be drawn. [By default this is 1] Type of the circle boundary. [By default this is LineType.Link8] Number of fractional bits in the center coordinates and radius value. [By default this is 0] Draws simple or thick elliptic arc or fills ellipse sector Center of the ellipse. Length of the ellipse axes. Rotation angle. Starting angle of the elliptic arc. Ending angle of the elliptic arc. Ellipse color. Thickness of the ellipse arc. [By default this is 1] Type of the ellipse boundary. [By default this is LineType.Link8] Number of fractional bits in the center coordinates and axes' values. [By default this is 0] Draws simple or thick elliptic arc or fills ellipse sector The enclosing box of the ellipse drawn Ellipse color. Thickness of the ellipse boundary. [By default this is 1] Type of the ellipse boundary. [By default this is LineType.Link8] Fills a convex polygon. The polygon vertices Polygon color Type of the polygon boundaries The number of fractional bits in the vertex coordinates Fills the area bounded by one or more polygons Array of polygons, each represented as an array of points Polygon color Type of the polygon boundaries The number of fractional bits in the vertex coordinates draws one or more polygonal curves renders text string in the image Encodes an image into a memory buffer. Encodes an image into a memory buffer. Format-specific parameters. Encodes an image into a memory buffer. Encodes an image into a memory buffer. Format-specific parameters. Saves an image to a specified file. Saves an image to a specified file. Saves an image to a specified file. Saves an image to a specified file. Forms a border around the image Specifies how many pixels in each direction from the source image rectangle to extrapolate Specifies how many pixels in each direction from the source image rectangle to extrapolate Specifies how many pixels in each direction from the source image rectangle to extrapolate Specifies how many pixels in each direction from the source image rectangle to extrapolate The border type The border value if borderType == Constant Smoothes image using median filter. The source image must be a 1-, 3- or 4-channel image and its depth should be CV_8U, CV_16U or CV_32F. The aperture linear size. It must be odd and more than 1, i.e. 3, 5, 7 ... The destination array; will have the same size and the same type as src. Blurs an image using a Gaussian filter. The input image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F. Gaussian kernel size. ksize.width and ksize.height can differ but they both must be positive and odd. Or, they can be zeros and then they are computed from sigma*. Gaussian kernel standard deviation in X direction. Gaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to sigmaX, if both sigmas are zeros, they are computed from ksize.width and ksize.height, respectively (see getGaussianKernel() for details); to fully control the result regardless of possible future modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY. pixel extrapolation method Applies bilateral filter to the image The source image must be an 8-bit or floating-point, 1-channel or 3-channel image.
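A minimal smoothing sketch for the filters described above (medianBlur, GaussianBlur, bilateralFilter), assuming the Cv2 static API of a recent OpenCvSharp release; file names and parameter values are illustrative.

using OpenCvSharp;

using var src = Cv2.ImRead("noisy.png", ImreadModes.Color);   // placeholder input
using var median = new Mat();
using var gauss = new Mat();
using var bilateral = new Mat();

Cv2.MedianBlur(src, median, 5);                       // aperture size must be odd and > 1
Cv2.GaussianBlur(src, gauss, new Size(5, 5), 1.5);    // sigmaY defaults to sigmaX
Cv2.BilateralFilter(src, bilateral, 9, 75, 75);       // d = 9, sigmaColor = 75, sigmaSpace = 75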
The diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it's computed from sigmaSpace Filter sigma in the color space. Larger value of the parameter means that farther colors within the pixel neighborhood will be mixed together, resulting in larger areas of semi-equal color Filter sigma in the coordinate space. Larger value of the parameter means that farther pixels will influence each other (as long as their colors are close enough; see sigmaColor). When d>0, it specifies the neighborhood size regardless of sigmaSpace; otherwise d is proportional to sigmaSpace The destination image; will have the same size and the same type as src Smoothes image using box filter The smoothing kernel size The anchor point. The default value Point(-1,-1) means that the anchor is at the kernel center Indicates whether the kernel is normalized by its area or not The border mode used to extrapolate pixels outside of the image The destination image; will have the same size and the same type as src Smoothes image using normalized box filter The smoothing kernel size The anchor point. The default value Point(-1,-1) means that the anchor is at the kernel center The border mode used to extrapolate pixels outside of the image The destination image; will have the same size and the same type as src Convolves an image with the kernel The desired depth of the destination image. If it is negative, it will be the same as src.depth() Convolution kernel (or rather a correlation kernel), a single-channel floating point matrix. If you want to apply different kernels to different channels, split the image into separate color planes using split() and process them individually The anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that the anchor is at the kernel center The optional value added to the filtered pixels before storing them in dst The pixel extrapolation method The destination image.
It will have the same size and the same number of channels as src Applies separable linear filter to an image The destination image depth The coefficients for filtering each row The coefficients for filtering each column The anchor position within the kernel; the default value (-1, -1) means that the anchor is at the kernel center The value added to the filtered results before storing them The pixel extrapolation method The destination image; will have the same size and the same number of channels as src Calculates the first, second, third or mixed image derivatives using an extended Sobel operator The destination image depth Order of the derivative x Order of the derivative y Size of the extended Sobel kernel, must be 1, 3, 5 or 7 The optional scale factor for the computed derivative values (by default, no scaling is applied) The optional delta value, added to the results prior to storing them in dst The pixel extrapolation method The destination image; will have the same size and the same number of channels as src Calculates the first x- or y- image derivative using the Scharr operator The destination image depth Order of the derivative x Order of the derivative y The optional scale factor for the computed derivative values (by default, no scaling is applied) The optional delta value, added to the results prior to storing them in dst The pixel extrapolation method The destination image; will have the same size and the same number of channels as src Calculates the Laplacian of an image The desired depth of the destination image The aperture size used to compute the second-derivative filters The optional scale factor for the computed Laplacian values (by default, no scaling is applied) The optional delta value, added to the results prior to storing them in dst The pixel extrapolation method Destination image; will have the same size and the same number of channels as src Finds edges in an image using the Canny algorithm. The first threshold for the hysteresis procedure The second threshold for the hysteresis procedure Aperture size for the Sobel operator [By default this is ApertureSize.Size3] Indicates whether the more accurate L2 norm should be used to compute the image gradient magnitude (true), or whether the faster default L1 norm is enough (false). [By default this is false] The output edge map. It will have the same size and the same type as image computes both eigenvalues and the eigenvectors of the 2x2 derivative covariation matrix at each pixel. The output is stored as a 6-channel matrix. computes another complex cornerness criteria at each pixel adjusts the corner locations with sub-pixel accuracy to maximize the certain cornerness criteria Initial coordinates of the input corners and refined coordinates provided for output. Half of the side length of the search window. Half of the size of the dead region in the middle of the search zone over which the summation in the formula below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size. Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after criteria.maxCount iterations or when the corner position moves by less than criteria.epsilon on some iteration. Finds the strong enough corners where the cornerMinEigenVal() or cornerHarris() report the local maxima. Input matrix must be an 8-bit or floating-point 32-bit, single-channel image. Maximum number of corners to return.
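The derivative and edge operators above can be combined roughly as in the following sketch (recent OpenCvSharp naming assumed; thresholds and the input file are illustrative):

using OpenCvSharp;

using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);   // placeholder input
using var gradX = new Mat();
using var gradY = new Mat();
using var edges = new Mat();

// First-order derivatives with a 3x3 Sobel kernel; 16-bit output avoids overflow of 8-bit input.
Cv2.Sobel(gray, gradX, MatType.CV_16S, 1, 0, 3);
Cv2.Sobel(gray, gradY, MatType.CV_16S, 0, 1, 3);

// Canny with hysteresis thresholds 50/150; L2gradient is left at its default (false).
Cv2.Canny(gray, edges, 50, 150);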
If there are more corners than this, the strongest of them are returned. Parameter characterizing the minimal accepted quality of image corners. The parameter value is multiplied by the best corner quality measure, which is the minimal eigenvalue or the Harris function response (see cornerHarris() ). The corners with the quality measure less than the product are rejected. For example, if the best corner has the quality measure = 1500, and the qualityLevel=0.01, then all the corners with the quality measure less than 15 are rejected. Minimum possible Euclidean distance between the returned corners. Optional region of interest. If the image is not empty (it needs to have the type CV_8UC1 and the same size as image), it specifies the region in which the corners are detected. Size of an average block for computing a derivative covariation matrix over each pixel neighborhood. Parameter indicating whether to use a Harris detector Free parameter of the Harris detector. Output vector of detected corners. Finds lines in a binary image using the standard Hough transform. The input matrix must be an 8-bit, single-channel, binary source image. This image may be modified by the function. Distance resolution of the accumulator in pixels Angle resolution of the accumulator in radians The accumulator threshold parameter. Only those lines are returned that get enough votes ( > threshold ) For the multi-scale Hough transform it is the divisor for the distance resolution rho. [By default this is 0] For the multi-scale Hough transform it is the divisor for the distance resolution theta. [By default this is 0] The output vector of lines. Each line is represented by a two-element vector (rho, theta). rho is the distance from the coordinate origin (0,0) (top-left corner of the image) and theta is the line rotation angle in radians Finds line segments in a binary image using the probabilistic Hough transform. Distance resolution of the accumulator in pixels Angle resolution of the accumulator in radians The accumulator threshold parameter. Only those lines are returned that get enough votes ( > threshold ) The minimum line length. Line segments shorter than that will be rejected. [By default this is 0] The maximum allowed gap between points on the same line to link them. [By default this is 0] The output lines. Each line is represented by a 4-element vector (x1, y1, x2, y2) Finds circles in a grayscale image using a Hough transform. The input matrix must be an 8-bit, single-channel, grayscale image. Currently, the only implemented method is HoughCirclesMethod.Gradient The inverse ratio of the accumulator resolution to the image resolution. Minimum distance between the centers of the detected circles. The first method-specific parameter. [By default this is 100] The second method-specific parameter. [By default this is 100] Minimum circle radius. [By default this is 0] Maximum circle radius. [By default this is 0] The output vector of found circles. Each vector is encoded as a 3-element floating-point vector (x, y, radius) Dilates an image by using a specific structuring element. The structuring element used for dilation. If element=new Mat(), a 3x3 rectangular structuring element is used Position of the anchor within the element. The default value (-1, -1) means that the anchor is at the element center The number of times dilation is applied. [By default this is 1] The pixel extrapolation method. [By default this is BorderTypes.Constant] The border value in case of a constant border. The default value has a special meaning.
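A sketch of the Hough functions above; the return types (LineSegmentPoint[], CircleSegment[]) and the method enum (HoughMethods.Gradient, called HoughCirclesMethod.Gradient in the release this documentation describes) are assumptions based on recent OpenCvSharp versions.

using System;
using OpenCvSharp;

using var gray = Cv2.ImRead("shapes.png", ImreadModes.Grayscale);   // placeholder input
using var edges = new Mat();
Cv2.Canny(gray, edges, 50, 150);

// Probabilistic Hough: 1 px / 1 degree resolution, 50-vote threshold,
// minimum segment length 30 px, maximum gap 10 px.
LineSegmentPoint[] segments = Cv2.HoughLinesP(edges, 1, Math.PI / 180, 50, 30, 10);

// Circle detection runs on the grayscale image itself (it applies its own edge detection).
CircleSegment[] circles = Cv2.HoughCircles(gray, HoughMethods.Gradient, 1, gray.Rows / 8.0, 100, 100);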
[By default this is CvCpp.MorphologyDefaultBorderValue()] The destination image. It will have the same size and the same type as src Erodes an image by using a specific structuring element. The structuring element used for dilation. If element=new Mat(), a 3x3 rectangular structuring element is used Position of the anchor within the element. The default value (-1, -1) means that the anchor is at the element center The number of times erosion is applied The pixel extrapolation method The border value in case of a constant border. The default value has a special meaning. [By default this is CvCpp.MorphologyDefaultBorderValue()] The destination image. It will have the same size and the same type as src Performs advanced morphological transformations Type of morphological operation Structuring element Position of the anchor within the element. The default value (-1, -1) means that the anchor is at the element center Number of times erosion and dilation are applied. [By default this is 1] The pixel extrapolation method. [By default this is BorderTypes.Constant] The border value in case of a constant border. The default value has a special meaning. [By default this is CvCpp.MorphologyDefaultBorderValue()] Destination image. It will have the same size and the same type as src Resizes an image. output image size; if it equals zero, it is computed as: dsize = Size(round(fx*src.cols), round(fy*src.rows)) Either dsize or both fx and fy must be non-zero. scale factor along the horizontal axis; when it equals 0, it is computed as: (double)dsize.width/src.cols scale factor along the vertical axis; when it equals 0, it is computed as: (double)dsize.height/src.rows interpolation method output image; it has the size dsize (when it is non-zero) or the size computed from src.size(), fx, and fy; the type of dst is the same as of src. Applies an affine transformation to an image. output image that has the size dsize and the same type as src. 2x3 transformation matrix. size of the output image. combination of interpolation methods and the optional flag WARP_INVERSE_MAP that means that M is the inverse transformation (dst -> src) . pixel extrapolation method; when borderMode=BORDER_TRANSPARENT, it means that the pixels in the destination image corresponding to the "outliers" in the source image are not modified by the function. value used in case of a constant border; by default, it is 0. Applies a perspective transformation to an image. 3x3 transformation matrix. size of the output image. combination of interpolation methods (INTER_LINEAR or INTER_NEAREST) and the optional flag WARP_INVERSE_MAP, that sets M as the inverse transformation (dst -> src). pixel extrapolation method (BORDER_CONSTANT or BORDER_REPLICATE). value used in case of a constant border; by default, it equals 0. output image that has the size dsize and the same type as src. Applies a generic geometrical transformation to an image. The first map of either (x,y) points or just x values having the type CV_16SC2, CV_32FC1, or CV_32FC2. The second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively. Interpolation method. The method INTER_AREA is not supported by this function. Pixel extrapolation method. When borderMode=BORDER_TRANSPARENT, it means that the pixels in the destination image that corresponds to the "outliers" in the source image are not modified by the function. Value used in case of a constant border. By default, it is 0. Destination image. 
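A sketch for the geometric transformations above (resize, and warpAffine driven by a rotation matrix); interpolation enum names follow recent OpenCvSharp releases and the input file is a placeholder.

using OpenCvSharp;

using var src = Cv2.ImRead("input.png", ImreadModes.Color);   // placeholder input
using var half = new Mat();
using var rotated = new Mat();

// Downscale to half size; fx and fy are derived from dsize here.
Cv2.Resize(src, half, new Size(src.Cols / 2, src.Rows / 2), 0, 0, InterpolationFlags.Area);

// Rotate 30 degrees about the image center, keeping the original canvas size.
var center = new Point2f(src.Cols / 2f, src.Rows / 2f);
using var rotation = Cv2.GetRotationMatrix2D(center, 30, 1.0);
Cv2.WarpAffine(src, rotated, rotation, src.Size());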
It has the same size as map1 and the same type as src Inverts an affine transformation. Output reverse affine transformation. Retrieves a pixel rectangle from an image with sub-pixel accuracy. Size of the extracted patch. Floating point coordinates of the center of the extracted rectangle within the source image. The center must be inside the image. Depth of the extracted pixels. By default, they have the same depth as src. Extracted patch that has the size patchSize and the same number of channels as src. Adds an image to the accumulator. Optional operation mask. Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point. Adds the square of a source image to the accumulator. Optional operation mask. Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point. Computes Hanning window coefficients in two dimensions. The window size specifications Created array type Applies a fixed-level threshold to each array element. The input matrix must be single-channel, 8-bit or 32-bit floating point. threshold value. maximum value to use with the THRESH_BINARY and THRESH_BINARY_INV thresholding types. thresholding type (see the details below). output array of the same size and type as src. Applies an adaptive threshold to an array. Source matrix must be an 8-bit single-channel image. Non-zero value assigned to the pixels for which the condition is satisfied. See the details below. Adaptive thresholding algorithm to use, ADAPTIVE_THRESH_MEAN_C or ADAPTIVE_THRESH_GAUSSIAN_C. Thresholding type that must be either THRESH_BINARY or THRESH_BINARY_INV. Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on. Constant subtracted from the mean or weighted mean (see the details below). Normally, it is positive but may be zero or negative as well. Destination image of the same size and the same type as src. Blurs an image and downsamples it. size of the output image; by default, it is computed as Size((src.cols+1)/2, (src.rows+1)/2) Upsamples an image and then blurs it. size of the output image; by default, it is computed as Size(src.cols*2, src.rows*2) corrects lens distortion for the given camera matrix and distortion coefficients Input camera matrix Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed. Camera matrix of the distorted image. By default, it is the same as cameraMatrix but you may additionally scale and shift the result by using a different matrix. Output (corrected) image that has the same size and type as src. returns the default new camera matrix (by default it is the same as cameraMatrix unless centerPrincipalPoint=true) Camera view image size in pixels. Location of the principal point in the new camera matrix. The parameter indicates whether this location should be at the image center or not. the camera matrix that is either an exact copy of the input cameraMatrix (when centerPrincipalPoint=false), or the modified one (when centerPrincipalPoint=true). Computes the ideal point coordinates from the observed point coordinates. Input matrix contains the observed point coordinates, 1xN or Nx1, 2-channel (CV_32FC2 or CV_64FC2). Camera matrix Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is null, the zero distortion coefficients are assumed.
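The two thresholding functions above might be used as follows; enum names such as ThresholdTypes.Binary and AdaptiveThresholdTypes.GaussianC are from recent OpenCvSharp releases and may be spelled differently in older ones.

using OpenCvSharp;

using var gray = Cv2.ImRead("document.png", ImreadModes.Grayscale);   // placeholder input
using var global = new Mat();
using var local = new Mat();

// Fixed-level threshold at 128, writing 255 for pixels above it.
Cv2.Threshold(gray, global, 128, 255, ThresholdTypes.Binary);

// Per-pixel threshold: Gaussian-weighted mean of an 11x11 neighborhood minus the constant 2.
Cv2.AdaptiveThreshold(gray, local, 255, AdaptiveThresholdTypes.GaussianC, ThresholdTypes.Binary, 11, 2);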
Rectification transformation in the object space (3x3 matrix). R1 or R2 computed by stereoRectify() can be passed here. If the matrix is empty, the identity transformation is used. New camera matrix (3x3) or new projection matrix (3x4). P1 or P2 computed by stereoRectify() can be passed here. If the matrix is empty, the identity new camera matrix is used. Output ideal point coordinates after undistortion and reverse perspective transformation. If matrix P is identity or omitted, dst will contain normalized point coordinates. Normalizes the grayscale image brightness and contrast by normalizing its histogram. The source matrix is 8-bit single channel image. The destination image; will have the same size and the same type as src Performs a marker-based image segmentation using the watershed algorithm. Input matrix is 8-bit 3-channel image. Input/output 32-bit single-channel image (map) of markers. It should have the same size as image. Performs initial step of meanshift segmentation of an image. The source matrix is 8-bit, 3-channel image. The spatial window radius. The color window radius. Maximum level of the pyramid for the segmentation. Termination criteria: when to stop meanshift iterations. The destination image of the same format and the same size as the source. Segments the image using GrabCut algorithm. The input is 8-bit 3-channel image. Input/output 8-bit single-channel mask. The mask is initialized by the function when mode is set to GC_INIT_WITH_RECT. Its elements may have Cv2.GC_BGD / Cv2.GC_FGD / Cv2.GC_PR_BGD / Cv2.GC_PR_FGD ROI containing a segmented object. The pixels outside of the ROI are marked as "obvious background". The parameter is only used when mode==GC_INIT_WITH_RECT. Temporary array for the background model. Do not modify it while you are processing the same image. Temporary arrays for the foreground model. Do not modify it while you are processing the same image. Number of iterations the algorithm should make before returning the result. Note that the result can be refined with further calls with mode==GC_INIT_WITH_MASK or mode==GC_EVAL . Operation mode that could be one of GrabCutFlag value. Fills a connected component with the given color. Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below. Starting point. New value of the repainted domain pixels. Fills a connected component with the given color. Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below. Starting point. New value of the repainted domain pixels. Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain. Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Operation flags. Lower bits contain a connectivity value, 4 (default) or 8, used within the function. Connectivity determines which neighbors of a pixel are considered. Fills a connected component with the given color. Input/output 1- or 3-channel, 8-bit, or floating-point image. 
It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below. (For the second function only) Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller. The function uses and updates the mask, so you take responsibility of initializing the mask content. Flood-filling cannot go across non-zero pixels in the mask. For example, an edge detector output can be used as a mask to stop filling at edges. It is possible to use the same mask in multiple calls to the function to make sure the filled area does not overlap. Starting point. New value of the repainted domain pixels. Fills a connected component with the given color. Input/output 1- or 3-channel, 8-bit, or floating-point image. It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set in the second variant of the function. See the details below. (For the second function only) Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller. The function uses and updates the mask, so you take responsibility of initializing the mask content. Flood-filling cannot go across non-zero pixels in the mask. For example, an edge detector output can be used as a mask to stop filling at edges. It is possible to use the same mask in multiple calls to the function to make sure the filled area does not overlap. Starting point. New value of the repainted domain pixels. Optional output parameter set by the function to the minimum bounding rectangle of the repainted domain. Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Operation flags. Lower bits contain a connectivity value, 4 (default) or 8, used within the function. Connectivity determines which neighbors of a pixel are considered. Converts image from one color space to another The color space conversion code The number of channels in the destination image; if the parameter is 0, the number of the channels will be derived automatically from src and the code The destination image; will have the same size and the same depth as src Calculates all of the moments up to the third order of a polygon or rasterized shape. The input is a raster image (single-channel, 8-bit or floating-point 2D array). If it is true, then all the non-zero image pixels are treated as 1’s Computes the proximity map for the raster template and the image where the template is searched for The input is Image where the search is running; should be 8-bit or 32-bit floating-point. Searched template; must be not greater than the source image and have the same data type Specifies the comparison method A map of comparison results; will be single-channel 32-bit floating-point. If image is WxH and templ is wxh then result will be (W-w+1) x (H-h+1). computes the connected components labeled image of boolean image. image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. 
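For matchTemplate, the best match is usually read back with minMaxLoc, roughly as in this sketch (recent OpenCvSharp naming assumed; file names are placeholders):

using OpenCvSharp;

using var image = Cv2.ImRead("scene.png", ImreadModes.Grayscale);
using var templ = Cv2.ImRead("patch.png", ImreadModes.Grayscale);
using var result = new Mat();

// Result is (W-w+1) x (H-h+1), single-channel 32-bit float; for CCoeffNormed, higher is better.
Cv2.MatchTemplate(image, templ, result, TemplateMatchModes.CCoeffNormed);
Cv2.MinMaxLoc(result, out _, out double maxVal, out _, out Point maxLoc);
var match = new Rect(maxLoc, new Size(templ.Cols, templ.Rows));   // best-scoring location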
destination labeled image 8 or 4 for 8-way or 4-way connectivity respectively The number of labels computes the connected components labeled image of boolean image. image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. destination labeled image 8 or 4 for 8-way or 4-way connectivity respectively output image label type. Currently CV_32S and CV_16U are supported. The number of labels computes the connected components labeled image of boolean image. image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. destination labeled rectangular array 8 or 4 for 8-way or 4-way connectivity respectively The number of labels computes the connected components labeled image of boolean image. image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. destination labeled image statistics output for each label, including the background label, see below for available statistics. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of cv::ConnectedComponentsTypes floating point centroid (x,y) output for each label, including the background label 8 or 4 for 8-way or 4-way connectivity respectively computes the connected components labeled image of boolean image. image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. destination labeled image statistics output for each label, including the background label, see below for available statistics. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of cv::ConnectedComponentsTypes floating point centroid (x,y) output for each label, including the background label 8 or 4 for 8-way or 4-way connectivity respectively output image label type. Currently CV_32S and CV_16U are supported. computes the connected components labeled image of boolean image. image with 4 or 8 way connectivity - returns N, the total number of labels [0, N-1] where 0 represents the background label. ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. 8 or 4 for 8-way or 4-way connectivity respectively Finds contours in a binary image. The source is an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies this image while extracting the contours. Detected contours. Each contour is stored as a vector of points. Optional output vector, containing information about the image topology. It has as many elements as the number of contours. 
For each i-th contour contours[i], the members of the elements hierarchy[i] are set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative. Contour retrieval mode Contour approximation method Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context. Finds contours in a binary image. The source is an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies this image while extracting the contours. Detected contours. Each contour is stored as a vector of points. Optional output vector, containing information about the image topology. It has as many elements as the number of contours. For each i-th contour contours[i], the members of the elements hierarchy[i] are set to 0-based indices in contours of the next and previous contours at the same hierarchical level, the first child contour and the parent contour, respectively. If for the contour i there are no next, previous, parent, or nested contours, the corresponding elements of hierarchy[i] will be negative. Contour retrieval mode Contour approximation method Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context. Finds contours in a binary image. The source is an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies this image while extracting the contours. Contour retrieval mode Contour approximation method Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context. Detected contours. Each contour is stored as a vector of points. Finds contours in a binary image. The source is an 8-bit single-channel image. Non-zero pixels are treated as 1’s. Zero pixels remain 0’s, so the image is treated as binary. The function modifies this image while extracting the contours. Contour retrieval mode Contour approximation method Optional offset by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context. Detected contours. Each contour is stored as a vector of points. Draws contours in the image All the input contours. Each contour is stored as a point vector. Parameter indicating a contour to draw. If it is negative, all the contours are drawn. Color of the contours. Thickness of lines the contours are drawn with. If it is negative (for example, thickness=CV_FILLED ), the contour interiors are drawn. Line connectivity. Optional information about hierarchy. It is only needed if you want to draw only some of the contours Maximal level for drawn contours. If it is 0, only the specified contour is drawn. If it is 1, the function draws the contour(s) and all the nested contours. If it is 2, the function draws the contours, all the nested contours, all the nested-to-nested contours, and so on. This parameter is only taken into account when there is hierarchy available. 
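A minimal contour-extraction sketch for findContours and drawContours as described above; note that findContours modifies its input, so the thresholded copy is passed in. Overload shapes and enum names follow recent OpenCvSharp releases.

using OpenCvSharp;

using var gray = Cv2.ImRead("blobs.png", ImreadModes.Grayscale);   // placeholder input
using var bin = new Mat();
Cv2.Threshold(gray, bin, 128, 255, ThresholdTypes.Binary);

// Outer contours only, compressing straight runs of boundary points.
Cv2.FindContours(bin, out Point[][] contours, out HierarchyIndex[] hierarchy,
    RetrievalModes.External, ContourApproximationModes.ApproxSimple);

using var canvas = Cv2.ImRead("blobs.png", ImreadModes.Color);
Cv2.DrawContours(canvas, contours, -1, Scalar.Red, 2);   // -1 draws every contour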
Optional contour shift parameter. Shift all the drawn contours by the specified offset = (dx, dy) Draws contours in the image Destination image. All the input contours. Each contour is stored as a point vector. Parameter indicating a contour to draw. If it is negative, all the contours are drawn. Color of the contours. Thickness of lines the contours are drawn with. If it is negative (for example, thickness=CV_FILLED ), the contour interiors are drawn. Line connectivity. Optional information about hierarchy. It is only needed if you want to draw only some of the contours Maximal level for drawn contours. If it is 0, only the specified contour is drawn. If it is 1, the function draws the contour(s) and all the nested contours. If it is 2, the function draws the contours, all the nested contours, all the nested-to-nested contours, and so on. This parameter is only taken into account when there is hierarchy available. Optional contour shift parameter. Shift all the drawn contours by the specified offset = (dx, dy) Approximates contour or a curve using Douglas-Peucker algorithm. The input is the polygon or curve to approximate and it must be 1 x N or N x 1 matrix of type CV_32SC2 or CV_32FC2. Specifies the approximation accuracy. This is the maximum distance between the original curve and its approximation. The result of the approximation; The type should match the type of the input curve The result of the approximation; The type should match the type of the input curve Calculates a contour perimeter or a curve length. The input is 2D point set, represented by CV_32SC2 or CV_32FC2 matrix. Indicates, whether the curve is closed or not Calculates the up-right bounding rectangle of a point set. The input is 2D point set, represented by CV_32SC2 or CV_32FC2 matrix. Minimal up-right bounding rectangle for the specified point set. Calculates the contour area. The input is 2D point set, represented by CV_32SC2 or CV_32FC2 matrix. Finds the minimum area rotated rectangle enclosing a 2D point set. The input is 2D point set, represented by CV_32SC2 or CV_32FC2 matrix. Finds the minimum area circle enclosing a 2D point set. The input is 2D point set, represented by CV_32SC2 or CV_32FC2 matrix. The output center of the circle The output radius of the circle Computes convex hull for a set of 2D points. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards. The output convex hull. It is either a vector of points that form the hull (must have the same type as the input points), or a vector of 0-based point indices of the hull points in the original array (since the set of convex hull points is a subset of the original point set). Computes convex hull for a set of 2D points. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards. The output convex hull. It is a vector of points that form the hull (must have the same type as the input points). Computes convex hull for a set of 2D points. 
The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards. The output convex hull. It is a vector of points that form the hull (must have the same type as the input points). Computes convex hull for a set of 2D points. The input 2D point set, represented by CV_32SC2 or CV_32FC2 matrix If true, the output convex hull will be oriented clockwise, otherwise it will be oriented counter-clockwise. Here, the usual screen coordinate system is assumed - the origin is at the top-left corner, x axis is oriented to the right, and y axis is oriented downwards. The output convex hull. It is a vector of 0-based point indices of the hull points in the original array (since the set of convex hull points is a subset of the original point set). Computes the contour convexity defects Convex hull obtained using convexHull() that should contain indices of the contour points that make the hull. The output vector of convexity defects. Each convexity defect is represented as 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, to get the floating-point value of the depth will be fixpt_depth/256.0. Computes the contour convexity defects Convex hull obtained using convexHull() that should contain indices of the contour points that make the hull. The output vector of convexity defects. Each convexity defect is represented as 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, to get the floating-point value of the depth will be fixpt_depth/256.0. Returns true if the contour is convex. Does not support contours with self-intersection Fits ellipse to the set of 2D points. Fits line to the set of 2D points using M-estimator algorithm. The input is vector of 2D points. Distance used by the M-estimator Numerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen. Sufficient accuracy for the radius (distance between the coordinate origin and the line). Sufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps. Output line parameters. Fits line to the set of 3D points using M-estimator algorithm. The input is vector of 3D points. Distance used by the M-estimator Numerical parameter ( C ) for some types of distances. If it is 0, an optimal value is chosen. Sufficient accuracy for the radius (distance between the coordinate origin and the line). Sufficient accuracy for the angle. 0.01 would be a good default value for reps and aeps. Output line parameters. Checks if the point is inside the contour. Optionally computes the signed distance from the point to the contour boundary. Point tested against the contour. 
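The contour-analysis helpers above compose naturally on a single contour (for example one entry of the findContours output); a small sketch, with the point data purely illustrative:

using OpenCvSharp;

// A hypothetical contour; in practice this would be one element of the findContours result.
Point[] contour =
{
    new Point(0, 0), new Point(50, 0), new Point(55, 20), new Point(50, 40), new Point(0, 40),
};

Rect box = Cv2.BoundingRect(contour);            // up-right bounding rectangle
double area = Cv2.ContourArea(contour);          // enclosed area
double perimeter = Cv2.ArcLength(contour, true); // closed-curve length
Point[] hull = Cv2.ConvexHull(contour);          // hull points (same type as the input)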
If true, the function estimates the signed distance from the point to the nearest contour edge. Otherwise, the function only checks if the point is inside a contour or not. Positive (inside), negative (outside), or zero (on an edge) value. Computes the distance transform map Abstract definition of Mat indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Parent matrix object Step byte length for each dimension Constructor A matrix whose element is 8UC1 (cv::Mat_<uchar>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constructs 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constructs 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented.
So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. 
Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Matrix indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as an N x 1 matrix and copies array data to this Source array data to be copied to this Initializes as an M x N matrix and copies array data to this Source array data to be copied to this Initializes as an N x 1 matrix and copies array data to this Source array data to be copied to this Converts this Mat to a managed array Converts this Mat to a managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is 8UC3 (cv::Mat_<cv::Vec3b>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constructs 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constructs 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order.
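A small sketch of the Mat constructors and element indexers described above; the Scalar-filling constructor and GetGenericIndexer exist in recent OpenCvSharp releases, while the typed wrapper classes (8UC1, 8UC3, and so on) vary by version.

using OpenCvSharp;

// 3x4 single-channel byte matrix, every element initialized to zero via the Scalar overload.
using var m = new Mat(3, 4, MatType.CV_8UC1, new Scalar(0));

// Type-specific indexer with getters/setters for individual elements.
var indexer = m.GetGenericIndexer<byte>();
indexer[1, 2] = 255;
byte a = indexer[1, 2];

// Equivalent per-element access through the generic Get/Set accessors.
m.Set(2, 3, (byte)7);
byte b = m.Get<byte>(2, 3);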
An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . 
constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Matrix indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 
3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Initializes as M x N matrix and copys array data to this Source array data to be copied to this Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is cv::DMatch (cv::Mat_<cv::Vec4f>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constucts 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constucts 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . 
If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Matrix indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Initializes as M x N matrix and copys array data to this Source array data to be copied to this Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is 64FC1 (cv::Mat_<double>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constucts 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constucts 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. 
The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. 
This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Matrix indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Initializes as M x N matrix and copys array data to this Source array data to be copied to this Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is cv::Vec3d [CV_64FC3] (cv::Mat_<cv::Vec3d>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . 
In the Size() constructor, the number of rows and the number of columns go in the reverse order. constucts 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constucts 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. 
Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. 
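A small sketch of the n-dimensional size-array constructors and the Scalar fill described above, assuming the Mat(int[] sizes, MatType) and Mat(int[] sizes, MatType, Scalar) overloads; the 2 x 3 x 4 shape is arbitrary.

```csharp
using System;
using OpenCvSharp;

class NdMatExample
{
    static void Main()
    {
        // 3-dimensional 2 x 3 x 4 array of floats, every element initialised to 1.5.
        var sizes = new[] { 2, 3, 4 };
        using var a = new Mat(sizes, MatType.CV_32FC1, new Scalar(1.5));

        // Equivalent two-step form: construct first, then fill with SetTo(Scalar).
        using var b = new Mat(sizes, MatType.CV_32FC1);
        b.SetTo(new Scalar(1.5));

        // Element access takes one index per dimension.
        Console.WriteLine(a.At<float>(1, 2, 3)); // 1.5
        Console.WriteLine(a.Dims);               // 3
    }
}
```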
constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Matrix indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Initializes as M x N matrix and copys array data to this Source array data to be copied to this Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is 32FC1 (cv::Mat_<float>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constucts 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constucts 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. 
So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. 
The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Matrix indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Initializes as M x N matrix and copys array data to this Source array data to be copied to this Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is 32FC4 (cv::Mat_<cv::Vec4f>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constucts 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. 
Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constucts 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. 
This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. 
To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Matrix indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Initializes as M x N matrix and copys array data to this Source array data to be copied to this Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is 32FC6 (cv::Mat_<cv::Vec6f>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constucts 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constucts 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. 
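The shared-header semantics described above can be seen in a short sketch: the ROI and row-range constructors only reference the parent's data, while Clone() produces an independent copy. Variable names are illustrative.

```csharp
using System;
using OpenCvSharp;

class SubMatExample
{
    static void Main()
    {
        // 4 x 4 integer matrix filled with zeros.
        using var parent = new Mat(4, 4, MatType.CV_32SC1, Scalar.All(0));

        // Header over rows 0..1 and columns 1..2 (Rect is x, y, width, height): no data is copied.
        using var roi = new Mat(parent, new Rect(1, 0, 2, 2));
        // Independent deep copy of the same region, taken before modifying it.
        using var copy = roi.Clone();

        roi.SetTo(Scalar.All(7));                 // writes through to `parent`
        Console.WriteLine(parent.Get<int>(0, 1)); // 7  (shared data)
        Console.WriteLine(copy.Get<int>(0, 0));   // 0  (detached copy)

        // Range-based form; the range end is exclusive and Range.All takes everything.
        using var topRows = new Mat(parent, new Range(0, 2), Range.All);
        Console.WriteLine(topRows.Rows);          // 2
    }
}
```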
creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). 
If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Matrix indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Initializes as M x N matrix and copys array data to this Source array data to be copied to this Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is 32SC1 (cv::Mat_<int>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constucts 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . 
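As an illustration of the typed-matrix members documented above (initialization that copies array data, the type-specific indexer, conversion back to a managed array, and the push_back-style Add), here is a sketch using MatOfInt, the wrapper for 32SC1 described above. The exact class and member names vary between OpenCvSharp releases (newer versions expose Mat<int> instead), so treat them as assumptions.

```csharp
using System;
using OpenCvSharp;

class TypedMatExample
{
    static void Main()
    {
        // N x 1 matrix initialised by copying the array data.
        var values = new[] { 10, 20, 30 };
        using var m = MatOfInt.FromArray(values);

        // Type-specific indexer: getter/setter for each element.
        var idx = m.GetIndexer();
        idx[1] = 25;

        // push_back equivalent: appends an element at the bottom of the matrix.
        m.Add(40);

        // Convert back to a managed array.
        int[] result = m.ToArray();
        Console.WriteLine(string.Join(", ", result)); // 10, 25, 30, 40
    }
}
```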
constucts 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. 
The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Matrix indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 
2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Initializes as M x N matrix and copys array data to this Source array data to be copied to this Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is 32SC1 (cv::Mat_<int>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constucts 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constucts 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. 
The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. 
Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Matrix indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Initializes as M x N matrix and copys array data to this Source array data to be copied to this Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is cv::Point [CV_32SC2] (cv::Mat_<cv::Point>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constucts 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constucts 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. 
To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. 
Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. 
A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Initializes as M x N matrix and copys array data to this Source array data to be copied to this Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is cv::Point [CV_64FC2] (cv::Mat_<cv::Point2d>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constucts 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constucts 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. 
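As a rough illustration of the region-of-interest constructor described above: the new header shares the parent's data, so writes through it are visible in the parent, and Clone() is needed for a detached copy. The sketch uses the plain Mat form; element type and sizes are only illustrative.

    using OpenCvSharp;   // older releases keep the C++ wrappers in OpenCvSharp.CPlusPlus

    Mat parent = new Mat(100, 100, MatType.CV_8UC1, Scalar.All(0));
    Mat roi = new Mat(parent, new Rect(10, 10, 20, 20));   // header only, no data copied
    roi.SetTo(Scalar.All(255));                            // also changes the corresponding pixels of parent
    Mat independent = roi.Clone();                         // deep copy, detached from parent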
constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. 
The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Matrix indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Initializes as M x N matrix and copys array data to this Source array data to be copied to this Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is cv::Point [CV_32FC2] (cv::Mat_<cv::Point2f>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constucts 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constucts 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . 
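The fill constructor and SetTo(Scalar) described above can be used in either order; a minimal sketch with illustrative sizes and values (note that Size takes width then height, i.e. cols then rows):

    using OpenCvSharp;

    // Fill every element at construction time...
    Mat a = new Mat(3, 4, MatType.CV_64FC1, new Scalar(1.5));
    // ...or construct first and fill afterwards with SetTo(Scalar).
    Mat b = new Mat(new Size(4, 3), MatType.CV_64FC1);
    b.SetTo(new Scalar(1.5));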
creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. 
Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Matrix indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. 
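A short sketch of the 1- and 2-dimensional indexers documented above, using the CV_32FC2 typed wrapper from this block (the C# class name MatOfPoint2f is assumed here):

    using OpenCvSharp;   // older releases: OpenCvSharp.CPlusPlus

    MatOfPoint2f pts = new MatOfPoint2f(3, 1);      // 3 rows x 1 column of Point2f
    pts[0] = new Point2f(1f, 2f);                   // 1-dimensional indexer (index along dimension 0)
    pts[2, 0] = new Point2f(10.5f, 20.25f);         // 2-dimensional indexer (row, column)
    Point2f p = pts[2, 0];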
Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Initializes as M x N matrix and copys array data to this Source array data to be copied to this Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is cv::Point3d [CV_64FC3] (cv::Mat_<cv::Point3d>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constucts 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constucts 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. 
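Building on the type-specific indexer, managed-array conversion and push_back members summarized at the start of this block, a hedged sketch (the member names GetIndexer, Add and ToArray follow those summaries but should be checked against your OpenCvSharp version):

    using OpenCvSharp;   // older releases: OpenCvSharp.CPlusPlus

    MatOfPoint2f pts = new MatOfPoint2f();          // creates an empty CV_32FC2 matrix
    pts.Add(new Point2f(1f, 2f));                   // Mat::push_back, appends an element at the bottom
    pts.Add(new Point2f(3f, 4f));
    var idx = pts.GetIndexer();                     // type-specific indexer with getters/setters
    idx[0, 0] = new Point2f(9f, 9f);
    Point2f[] managed = pts.ToArray();              // copy the elements back to a managed array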
constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. 
The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Initializes as M x N matrix and copys array data to this Source array data to be copied to this Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is cv::Point3f [CV_32FC3] (cv::Mat_<cv::Point3f>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constucts 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constucts 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . 
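The user-allocated-data constructors that recur throughout these classes only wrap existing memory: no copy is made, the caller owns (and must keep alive and eventually free) the buffer, and the step is the row size in bytes. A minimal sketch with the plain Mat form; the buffer layout and step value are illustrative:

    using System;
    using System.Runtime.InteropServices;
    using OpenCvSharp;

    byte[] buffer = new byte[480 * 640];                 // externally owned pixel data (grayscale, 480x640)
    GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
    try
    {
        // Header only: the Mat points at buffer, which must outlive it and is not freed by it.
        Mat view = new Mat(480, 640, MatType.CV_8UC1, handle.AddrOfPinnedObject(), 640);
        view.SetTo(Scalar.All(128));                     // any OpenCV operation now works on the external data
    }
    finally
    {
        handle.Free();
    }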
creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. 
Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. 
The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Initializes as M x N matrix and copys array data to this Source array data to be copied to this Initializes as N x 1 matrix and copys array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is cv::Point3i [CV_32SC3] (cv::Mat_<cv::Point3i>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constucts 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constucts 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. 
Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. 
Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copies array data to this Source array data to be copied to this Initializes as M x N matrix and copies array data to this Source array data to be copied to this Initializes as N x 1 matrix and copies array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is cv::Rect [CV_32SC4] (cv::Mat_<cv::Rect>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constructs 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constructs 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors.
Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. 
This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. 
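To make the user-allocated-data constructors described above concrete, here is a minimal C# sketch (OpenCvSharp-style; it assumes the Mat(rows, cols, type, IntPtr data, step) overload together with MatType.CV_8UC1, Mat.Set<T> and Mat.At<T> behave as documented above, so treat it as an illustration rather than a definitive reference):

using System;
using System.Runtime.InteropServices;
using OpenCvSharp;

class UserDataMatSketch
{
    static void Main()
    {
        // A 4x4 single-channel 8-bit buffer owned by managed code.
        byte[] buffer = new byte[4 * 4];
        for (int i = 0; i < buffer.Length; i++)
            buffer[i] = (byte)i;

        // Pin the array so the native header keeps pointing at valid memory.
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            // No data is copied: the Mat is only a header over the pinned buffer.
            // step = 4 bytes per row (no padding), i.e. what AUTO_STEP would compute here.
            using (var mat = new Mat(4, 4, MatType.CV_8UC1, handle.AddrOfPinnedObject(), 4))
            {
                // Writing through the header writes into the original array.
                mat.Set<byte>(0, 0, (byte)255);
                Console.WriteLine(buffer[0]);          // 255
                Console.WriteLine(mat.At<byte>(1, 1)); // 5
            }
        }
        finally
        {
            handle.Free(); // the caller, not OpenCV, owns and releases the external data
        }
    }
}

Because the external data is neither copied nor deallocated by the matrix, the buffer must stay allocated (and pinned) for as long as the header is in use.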
Initializes as N x 1 matrix and copies array data to this Source array data to be copied to this Initializes as M x N matrix and copies array data to this Source array data to be copied to this Initializes as N x 1 matrix and copies array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) A matrix whose element is 16SC1 (cv::Mat_<short>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constructs 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constructs 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data.
Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). 
If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Matrix indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copies array data to this Source array data to be copied to this Initializes as M x N matrix and copies array data to this Source array data to be copied to this Initializes as N x 1 matrix and copies array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) Type-specific abstract matrix Element Type For return value type of re-defined Mat methods Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType.CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. constructs 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constructs 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. Array type.
Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or CV_8UC(n), ..., CV_64FC(n) to create multi-channel (up to CV_CN_MAX channels) matrices. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. 
This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Gets type-specific indexer for accessing each element Gets read-only enumerator For non-generic IEnumerable Convert this mat to managed array Convert this mat to managed rectangular array Creates a full copy of the matrix. Changes the shape of channels of a 2D matrix without copying the data. New number of rows. If the parameter is 0, the number of rows remains the same. Changes the shape of a 2D matrix without copying the data. New number of rows. If the parameter is 0, the number of rows remains the same. Transposes a matrix. Extracts a rectangular submatrix. Start row of the extracted submatrix. The upper boundary is not included. End row of the extracted submatrix. The upper boundary is not included. Start column of the extracted submatrix. 
The upper boundary is not included. End column of the extracted submatrix. The upper boundary is not included. Extracts a rectangular submatrix. Start and end row of the extracted submatrix. The upper boundary is not included. To select all the rows, use Range.All(). Start and end column of the extracted submatrix. The upper boundary is not included. To select all the columns, use Range.All(). Extracts a rectangular submatrix. Extracted submatrix specified as a rectangle. Extracts a rectangular submatrix. Array of selected ranges along each array dimension. Extracts a rectangular submatrix. Start row of the extracted submatrix. The upper boundary is not included. End row of the extracted submatrix. The upper boundary is not included. Start column of the extracted submatrix. The upper boundary is not included. End column of the extracted submatrix. The upper boundary is not included. Extracts a rectangular submatrix. Start and end row of the extracted submatrix. The upper boundary is not included. To select all the rows, use Range.All(). Start and end column of the extracted submatrix. The upper boundary is not included. To select all the columns, use Range.All(). Extracts a rectangular submatrix. Extracted submatrix specified as a rectangle. Extracts a rectangular submatrix. Array of selected ranges along each array dimension. Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) Removes the first occurrence of a specific object from the ICollection<T>. The object to remove from the ICollection<T>. true if item was successfully removed from the ICollection<T>; otherwise, false. This method also returns false if item is not found in the original ICollection<T>. Determines whether the ICollection<T> contains a specific value. The object to locate in the ICollection<T>. true if item is found in the ICollection<T>; otherwise, false. Determines the index of a specific item in the list. The object to locate in the list. The index of value if found in the list; otherwise, -1. Removes all items from the ICollection<T>. Copies the elements of the ICollection<T> to an Array, starting at a particular Array index. The one-dimensional Array that is the destination of the elements copied from ICollection<T>. The Array must have zero-based indexing. The zero-based index in array at which copying begins. Returns the total number of matrix elements (Mat.total) Total number of list(Mat) elements Gets a value indicating whether the IList is read-only. A matrix whose element is 16UC1 (cv::Mat_<ushort>) Creates empty Mat Creates from native cv::Mat* pointer Initializes by Mat object Managed Mat object constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. constructs 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . constructs 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows) . In the Size() constructor, the number of rows and the number of columns go in the reverse order. An optional value to initialize each matrix element with.
To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat::clone() . Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Array of selected ranges of m along each dimensionality. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. No data is copied by these constructors. Instead, the header pointing to m data or its sub-array is constructed and associated with it. The reference counter, if any, is incremented. So, when you modify the matrix formed using such a constructor, you also modify the corresponding elements of m . If you want to have an independent copy of the sub-array, use Mat.Clone() . Region of interest. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. 
Number of columns in a 2D array. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP ), no padding is assumed and the actual step is calculated as cols*elemSize() . constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructor for matrix headers pointing to user-allocated data Array of integers specifying an n-dimensional array shape. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Array of ndims-1 steps in case of a multi-dimensional array (the last step is always set to the element size). If not specified, the matrix is assumed to be continuous. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. constructs n-dimensional matrix Array of integers specifying an n-dimensional array shape. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use SetTo(Scalar s) method . Matrix indexer 1-dimensional indexer Index along the dimension 0 A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 A value to the specified array element. 
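As a sketch of the element accessors just described (hedged: it assumes OpenCvSharp exposes the "type-specific indexer" as Mat.GetGenericIndexer<T>() and the Mat(rows, cols, type, Scalar) constructor as documented; exact member names may differ between versions):

using System;
using OpenCvSharp;

class IndexerSketch
{
    static void Main()
    {
        // 3x3 matrix of 16-bit unsigned elements, every element initialized to 0.
        using (var mat = new Mat(3, 3, MatType.CV_16UC1, Scalar.All(0)))
        {
            // The type-specific indexer has getters/setters for each element and
            // avoids per-call type dispatch, so it suits tight loops.
            var indexer = mat.GetGenericIndexer<ushort>();

            for (int r = 0; r < mat.Rows; r++)
                for (int c = 0; c < mat.Cols; c++)
                    indexer[r, c] = (ushort)(r * 10 + c);  // 2-dimensional setter

            Console.WriteLine(indexer[2, 1]);              // 21, via the 2-dimensional getter
        }
    }
}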
n-dimensional indexer Array of Mat::dims indices. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Initializes as N x 1 matrix and copies array data to this Source array data to be copied to this Initializes as M x N matrix and copies array data to this Source array data to be copied to this Initializes as N x 1 matrix and copies array data to this Source array data to be copied to this Convert this mat to managed array Convert this mat to managed rectangular array Adds elements to the bottom of the matrix. (Mat::push_back) Added element(s) Proxy datatype for passing Mat's and List<>'s as output parameters Releases unmanaged resources Creates a proxy class of the specified matrix Creates a proxy class of the specified matrix Creates a proxy class of the specified list Creates a proxy class of the specified list Proxy datatype for passing Mat's and List<>'s as output parameters Releases managed resources Proxy datatype for passing Mat's and List<>'s as output parameters Releases managed resources Principal Component Analysis Releases unmanaged resources eigenvectors of the covariance matrix eigenvalues of the covariance matrix mean value subtracted before the projection and added after the back projection operator that performs PCA. The previously stored data, if any, is released operator that performs PCA. The previously stored data, if any, is released projects vector from the original space to the principal components subspace projects vector from the original space to the principal components subspace reconstructs the original vector from the projection reconstructs the original vector from the projection Flags for PCA operations The vectors are stored as rows (i.e. all the components of a certain vector are stored continuously) The vectors are stored as columns (i.e. values of a certain vector component are stored continuously) Use pre-computed average vector Random Number Generator. The class implements RNG using Multiply-with-Carry algorithm. operations.hpp updates the state and returns the next 32-bit unsigned integer random number returns a random integer sampled uniformly from [0, N). returns uniformly distributed integer random number from [a,b) range returns uniformly distributed floating-point random number from [a,b) range returns uniformly distributed double-precision floating-point random number from [a,b) range returns Gaussian random variate with mean zero. Mersenne Twister random number generator operations.hpp updates the state and returns the next 32-bit unsigned integer random number returns a random integer sampled uniformly from [0, N). returns uniformly distributed integer random number from [a,b) range returns uniformly distributed floating-point random number from [a,b) range returns uniformly distributed double-precision floating-point random number from [a,b) range Sparse matrix class. Creates from native cv::SparseMat* pointer Creates empty SparseMat constructs n-dimensional sparse matrix Array of integers specifying an n-dimensional array shape. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType. CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. converts old-style CvMat to the new matrix; the data is not copied by default cv::Mat object Releases the resources Releases unmanaged resources sizeof(cv::Mat) Assignment operator. This is an O(1) operation, i.e. no data is copied Assignment operator.
equivalent to the corresponding constructor. creates full copy of the matrix copies all the data to the destination matrix. All the previous content of m is erased. converts sparse matrix to dense matrix. multiplies all the matrix elements by the specified scale factor alpha and converts the results to the specified data type converts sparse matrix to dense n-dim matrix with optional type conversion and scaling. The output matrix data type. When it is =-1, the output array will have the same data type as (*this) The scale factor The optional delta added to the scaled values before the conversion not used now Reallocates sparse matrix. If the matrix already had the proper size and type, it is simply cleared with clear(), otherwise, the old matrix is released (using release()) and the new one is allocated. sets all the sparse matrix elements to 0, which means clearing the hash table. manually increments the reference counter to the header. returns the size of each element in bytes (not including the overhead - the space occupied by SparseMat::Node elements) returns elemSize()/channels() Returns the type of sparse matrix element. Returns the depth of sparse matrix element. Returns the matrix dimensionality Returns the number of sparse matrix channels. Returns the array of sizes, or null if the matrix is not allocated Returns the size of i-th matrix dimension (or 0) Computes the element hash value (1D case) Index along the dimension 0 Computes the element hash value (2D case) Index along the dimension 0 Index along the dimension 1 Computes the element hash value (3D case) Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 Computes the element hash value (nD case) Array of Mat::dims indices. Low-level element-access function. Index along the dimension 0 Create new element with 0 value if it does not exist in SparseMat. If hashVal is not null, the element hash value is not computed but hashval is taken instead. Low-level element-access function. Index along the dimension 0 Index along the dimension 1 Create new element with 0 value if it does not exist in SparseMat. If hashVal is not null, the element hash value is not computed but hashval is taken instead. Low-level element-access function. Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 Create new element with 0 value if it does not exist in SparseMat. If hashVal is not null, the element hash value is not computed but hashval is taken instead. Low-level element-access function. Array of Mat::dims indices. Create new element with 0 value if it does not exist in SparseMat. If hashVal is not null, the element hash value is not computed but hashval is taken instead. Returns the specified sparse matrix element if it exists; otherwise, null. Index along the dimension 0 If hashVal is not null, the element hash value is not computed but hashval is taken instead. Returns the specified sparse matrix element if it exists; otherwise, null. Index along the dimension 0 Index along the dimension 1 If hashVal is not null, the element hash value is not computed but hashval is taken instead. Returns the specified sparse matrix element if it exists; otherwise, null. Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 If hashVal is not null, the element hash value is not computed but hashval is taken instead. Returns the specified sparse matrix element if it exists; otherwise, null. Array of Mat::dims indices.
If hashVal is not null, the element hash value is not computed but hashval is taken instead. Returns the specified sparse matrix element if it exists; otherwise, default(T). Index along the dimension 0 If hashVal is not null, the element hash value is not computed but hashval is taken instead. Returns the specified sparse matrix element if it exists; otherwise, default(T). Index along the dimension 0 Index along the dimension 1 If hashVal is not null, the element hash value is not computed but hashval is taken instead. Returns the specified sparse matrix element if it exists; otherwise, default(T). Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 If hashVal is not null, the element hash value is not computed but hashval is taken instead. Returns the specified sparse matrix element if it exists; otherwise, default(T). Array of Mat::dims indices. If hashVal is not null, the element hash value is not computed but hashval is taken instead. Mat Indexer 1-dimensional indexer Index along the dimension 0 If hashVal is not null, the element hash value is not computed but hashval is taken instead. A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 If hashVal is not null, the element hash value is not computed but hashval is taken instead. A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 If hashVal is not null, the element hash value is not computed but hashval is taken instead. A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. If hashVal is not null, the element hash value is not computed but hashval is taken instead. A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Returns the value of the specified array element. Index along the dimension 0 If hashVal is not null, the element hash value is not computed but hashval is taken instead. A value to the specified array element. Returns the value of the specified array element. Index along the dimension 0 Index along the dimension 1 If hashVal is not null, the element hash value is not computed but hashval is taken instead. A value to the specified array element. Returns the value of the specified array element. Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 If hashVal is not null, the element hash value is not computed but hashval is taken instead. A value to the specified array element. Returns the value of the specified array element. Array of Mat::dims indices. If hashVal is not null, the element hash value is not computed but hashval is taken instead. A value to the specified array element. Set a value to the specified array element. Index along the dimension 0 Set a value to the specified array element. Index along the dimension 0 Index along the dimension 1 If hashVal is not null, the element hash value is not computed but hashval is taken instead. Set a value to the specified array element. Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 If hashVal is not null, the element hash value is not computed but hashval is taken instead. Set a value to the specified array element. Array of Mat::dims indices.
If hashVal is not null, the element hash value is not computed but hashval is taken instead. Returns a string that represents this Mat. Abstract definition of Mat indexer 1-dimensional indexer Index along the dimension 0 If hashVal is not null, the element hash value is not computed but hashval is taken instead. A value to the specified array element. 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 If hashVal is not null, the element hash value is not computed but hashval is taken instead. A value to the specified array element. 3-dimensional indexer Index along the dimension 0 Index along the dimension 1 Index along the dimension 2 If hashVal is not null, the element hash value is not computed but hashval is taken instead. A value to the specified array element. n-dimensional indexer Array of Mat::dims indices. If hashVal is not null, the element hash value is not computed but hashval is taken instead. A value to the specified array element. Parent matrix object Constructor Struct for matching: query descriptor index, train descriptor index, train image index and distance between descriptors. query descriptor index train descriptor index train image index Compares by distance (less is better) Compares by distance (less is better) Data structure for salient point detectors Coordinate of the point Feature size Feature orientation in degrees (has negative value if the orientation is not defined/not computed) Feature strength (can be used to select only the most prominent key points) Scale-space octave in which the feature has been found; may correlate with the size Point class (can be used by feature classifiers or object detectors) Complete constructor Coordinate of the point Feature size Feature orientation in degrees (has negative value if the orientation is not defined/not computed) Feature strength (can be used to select only the most prominent key points) Scale-space octave in which the feature has been found; may correlate with the size Point class (can be used by feature classifiers or object detectors) Complete constructor X-coordinate of the point Y-coordinate of the point Feature size Feature orientation in degrees (has negative value if the orientation is not defined/not computed) Feature strength (can be used to select only the most prominent key points) Scale-space octave in which the feature has been found; may correlate with the size Point class (can be used by feature classifiers or object detectors) Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two CvPoint objects. The result specifies whether the members of each object are equal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are equal; otherwise, false. Compares two CvPoint objects. The result specifies whether the members of each object are unequal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are unequal; otherwise, false. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object.
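To illustrate the KeyPoint and DMatch structures described above, a small hedged sketch follows (it assumes the KeyPoint(Point2f pt, float size, float angle, float response, int octave) and DMatch(int queryIdx, int trainIdx, float distance) constructors and the public Pt/Angle/Response/Octave and QueryIdx/TrainIdx/Distance members match the parameter lists documented above):

using System;
using System.Linq;
using OpenCvSharp;

class KeyPointMatchSketch
{
    static void Main()
    {
        // Two keypoints: position, feature size, orientation (-1 = undefined),
        // response (feature strength) and scale-space octave.
        var kp1 = new KeyPoint(new Point2f(10f, 20f), 7f, -1f, 0.8f, 0);
        var kp2 = new KeyPoint(new Point2f(42f, 13f), 7f, 90f, 0.5f, 1);
        Console.WriteLine($"kp1 at {kp1.Pt}, response={kp1.Response}");
        Console.WriteLine($"kp2 angle={kp2.Angle}, octave={kp2.Octave}");

        // Matches: query descriptor index, train descriptor index, descriptor distance.
        var matches = new[]
        {
            new DMatch(0, 3, 0.91f),
            new DMatch(1, 7, 0.12f),
            new DMatch(2, 5, 0.47f),
        };

        // "Less is better": order matches so the closest descriptor pairs come first.
        foreach (var m in matches.OrderBy(x => x.Distance))
            Console.WriteLine($"query={m.QueryIdx} train={m.TrainIdx} distance={m.Distance}");
    }
}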
Matrix data type (depth and number of channels) Entity value type depth constants (CV_8U, ..., CV_64F) predefined type constants (CV_8UC1, ..., CV_64FC4) Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two Point objects. The result specifies whether the values of the X and Y properties of the two Point objects are equal. A Point to compare. A Point to compare. This operator returns true if the X and Y values of left and right are equal; otherwise, false. Compares two Point objects. The result specifies whether the values of the X or Y properties of the two Point objects are unequal. A Point to compare. A Point to compare. This operator returns true if the values of either the X properties or the Y properties of left and right differ; otherwise, false. Unary plus operator Unary minus operator Shifts point by a certain offset Shifts point by a certain offset Shifts point by a certain offset Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. Returns the distance between the specified two points Returns the distance between the specified two points Calculates the dot product of two 2D vectors. Calculates the dot product of two 2D vectors. Calculates the cross product of two 2D vectors. Calculates the cross product of two 2D vectors. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two CvPoint objects. The result specifies whether the values of the X and Y properties of the two CvPoint objects are equal. A Point to compare. A Point to compare. This operator returns true if the X and Y values of left and right are equal; otherwise, false. Compares two CvPoint2D32f objects. The result specifies whether the values of the X or Y properties of the two CvPoint2D32f objects are unequal. A Point to compare. A Point to compare. This operator returns true if the values of either the X properties or the Y properties of left and right differ; otherwise, false.
Unary plus operator Unary minus operator Shifts point by a certain offset Shifts point by a certain offset Shifts point by a certain offset Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. Returns the distance between the specified two points Returns the distance between the specified two points Calculates the dot product of two 2D vectors. Calculates the dot product of two 2D vectors. Calculates the cross product of two 2D vectors. Calculates the cross product of two 2D vectors. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two CvPoint objects. The result specifies whether the values of the X and Y properties of the two CvPoint objects are equal. A Point to compare. A Point to compare. This operator returns true if the X and Y values of left and right are equal; otherwise, false. Compares two CvPoint2D32f objects. The result specifies whether the values of the X or Y properties of the two CvPoint2D32f objects are unequal. A Point to compare. A Point to compare. This operator returns true if the values of either the X properties or the Y properties of left and right differ; otherwise, false. Unary plus operator Unary minus operator Shifts point by a certain offset Shifts point by a certain offset Shifts point by a certain offset Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. Returns the distance between the specified two points Returns the distance between the specified two points Calculates the dot product of two 2D vectors. Calculates the dot product of two 2D vectors. Calculates the cross product of two 2D vectors. Calculates the cross product of two 2D vectors. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two CvPoint objects. The result specifies whether the values of the X and Y properties of the two CvPoint objects are equal. A Point to compare. A Point to compare. This operator returns true if the X and Y values of left and right are equal; otherwise, false. Compares two CvPoint2D32f objects. The result specifies whether the values of the X or Y properties of the two CvPoint2D32f objects are unequal. A Point to compare. A Point to compare. This operator returns true if the values of either the X properties or the Y properties of left and right differ; otherwise, false. Unary plus operator Unary minus operator Shifts point by a certain offset Shifts point by a certain offset Shifts point by a certain offset Specifies whether this object contains the same members as the specified Object. The Object to test. 
This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two CvPoint objects. The result specifies whether the values of the X and Y properties of the two CvPoint objects are equal. A Point to compare. A Point to compare. This operator returns true if the X and Y values of left and right are equal; otherwise, false. Compares two CvPoint2D32f objects. The result specifies whether the values of the X or Y properties of the two CvPoint2D32f objects are unequal. A Point to compare. A Point to compare. This operator returns true if the values of either the X properties or the Y properties of left and right differ; otherwise, false. Unary plus operator Unary minus operator Shifts point by a certain offset Shifts point by a certain offset Shifts point by a certain offset Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two CvPoint objects. The result specifies whether the values of the X and Y properties of the two CvPoint objects are equal. A Point to compare. A Point to compare. This operator returns true if the X and Y values of left and right are equal; otherwise, false. Compares two CvPoint2D32f objects. The result specifies whether the values of the X or Y properties of the two CvPoint2D32f objects are unequal. A Point to compare. A Point to compare. This operator returns true if the values of either the X properties or the Y properties of left and right differ; otherwise, false. Unary plus operator Unary minus operator Shifts point by a certain offset Shifts point by a certain offset Shifts point by a certain offset Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. float Range class Stores a set of four integers that represent the location and size of a rectangle sizeof(Rect) Represents a Rect structure with its properties left uninitialized. Initializes a new instance of the Rectangle class with the specified location and size. The x-coordinate of the upper-left corner of the rectangle. The y-coordinate of the upper-left corner of the rectangle. The width of the rectangle. The height of the rectangle. Initializes a new instance of the Rectangle class with the specified location and size. A Point that represents the upper-left corner of the rectangular region. 
A Size that represents the width and height of the rectangular region. Creates a Rectangle structure with the specified edge locations. The x-coordinate of the upper-left corner of this Rectangle structure. The y-coordinate of the upper-left corner of this Rectangle structure. The x-coordinate of the lower-right corner of this Rectangle structure. The y-coordinate of the lower-right corner of this Rectangle structure. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two Rect objects. The result specifies whether the members of each object are equal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are equal; otherwise, false. Compares two Rect objects. The result specifies whether the members of each object are unequal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are unequal; otherwise, false. Shifts rectangle by a certain offset Shifts rectangle by a certain offset Expands or shrinks rectangle by a certain amount Expands or shrinks rectangle by a certain amount Determines the Rect structure that represents the intersection of two rectangles. A rectangle to intersect. A rectangle to intersect. Gets a Rect structure that contains the union of two Rect structures. A rectangle to union. A rectangle to union. Gets the y-coordinate of the top edge of this Rect structure. Gets the y-coordinate that is the sum of the Y and Height property values of this Rect structure. Gets the x-coordinate of the left edge of this Rect structure. Gets the x-coordinate that is the sum of X and Width property values of this Rect structure. Coordinate of the left-most rectangle corner [Point(X, Y)] Size of the rectangle [CvSize(Width, Height)] Coordinate of the left-most rectangle corner [Point(X, Y)] Coordinate of the right-most rectangle corner [Point(X+Width, Y+Height)] Determines if the specified point is contained within the rectangular region defined by this Rectangle. x-coordinate of the point y-coordinate of the point Determines if the specified point is contained within the rectangular region defined by this Rectangle. point Determines if the specified rectangle is contained within the rectangular region defined by this Rectangle. rectangle Inflates this Rect by the specified amount. The amount to inflate this Rectangle horizontally. The amount to inflate this Rectangle vertically. Inflates this Rect by the specified amount. The amount to inflate this rectangle. Creates and returns an inflated copy of the specified Rect structure. The Rectangle with which to start. This rectangle is not modified. The amount to inflate this Rectangle horizontally. The amount to inflate this Rectangle vertically. Determines the Rect structure that represents the intersection of two rectangles. A rectangle to intersect. A rectangle to intersect. Determines the Rect structure that represents the intersection of two rectangles. A rectangle to intersect. Determines if this rectangle intersects with rect. Rectangle Gets a Rect structure that contains the union of two Rect structures. A rectangle to union. Gets a Rect structure that contains the union of two Rect structures. A rectangle to union. A rectangle to union. Specifies whether this object contains the same members as the specified Object. The Object to test. 
This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. sizeof(Rect) Represents a Rect2d structure with its properties left uninitialized. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two Rect2d objects. The result specifies whether the members of each object are equal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are equal; otherwise, false. Compares two Rect2d objects. The result specifies whether the members of each object are unequal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are unequal; otherwise, false. Shifts rectangle by a certain offset Shifts rectangle by a certain offset Expands or shrinks rectangle by a certain amount Expands or shrinks rectangle by a certain amount Determines the Rect2d structure that represents the intersection of two rectangles. A rectangle to intersect. A rectangle to intersect. Gets a Rect2d structure that contains the union of two Rect2d structures. A rectangle to union. A rectangle to union. Gets the y-coordinate of the top edge of this Rect2d structure. Gets the y-coordinate that is the sum of the Y and Height property values of this Rect2d structure. Gets the x-coordinate of the left edge of this Rect2d structure. Gets the x-coordinate that is the sum of X and Width property values of this Rect2d structure. Coordinate of the left-most rectangle corner [Point2d(X, Y)] Size of the rectangle [CvSize(Width, Height)] Coordinate of the left-most rectangle corner [Point2d(X, Y)] Coordinate of the right-most rectangle corner [Point2d(X+Width, Y+Height)] Determines if the specified point is contained within the rectangular region defined by this Rectangle. x-coordinate of the point y-coordinate of the point Determines if the specified point is contained within the rectangular region defined by this Rectangle. point Determines if the specified rectangle is contained within the rectangular region defined by this Rectangle. rectangle Inflates this Rect by the specified amount. The amount to inflate this Rectangle horizontally. The amount to inflate this Rectangle vertically. Inflates this Rect by the specified amount. The amount to inflate this rectangle. Creates and returns an inflated copy of the specified Rect2d structure. The Rectangle with which to start. This rectangle is not modified. The amount to inflate this Rectangle horizontally. The amount to inflate this Rectangle vertically. Determines the Rect2d structure that represents the intersection of two rectangles. A rectangle to intersect. A rectangle to intersect. Determines the Rect2d structure that represents the intersection of two rectangles. A rectangle to intersect. Determines if this rectangle intersects with rect. Rectangle Gets a Rect2d structure that contains the union of two Rect2d structures. A rectangle to union. Gets a Rect2d structure that contains the union of two Rect2d structures. A rectangle to union. A rectangle to union. Specifies whether this object contains the same members as the specified Object. The Object to test. 
This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. sizeof(Rect) Represents a Rect2f structure with its properties left uninitialized. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two Rectf objects. The result specifies whether the members of each object are equal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are equal; otherwise, false. Compares two Rectf objects. The result specifies whether the members of each object are unequal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are unequal; otherwise, false. Shifts rectangle by a certain offset Shifts rectangle by a certain offset Expands or shrinks rectangle by a certain amount Expands or shrinks rectangle by a certain amount Determines the Rect2f structure that represents the intersection of two rectangles. A rectangle to intersect. A rectangle to intersect. Gets a Rect2f structure that contains the union of two Rect2f structures. A rectangle to union. A rectangle to union. Gets the y-coordinate of the top edge of this Rect2f structure. Gets the y-coordinate that is the sum of the Y and Height property values of this Rect2f structure. Gets the x-coordinate of the left edge of this Rect2f structure. Gets the x-coordinate that is the sum of X and Width property values of this Rect2f structure. Coordinate of the left-most rectangle corner [Point2f(X, Y)] Size of the rectangle [CvSize(Width, Height)] Coordinate of the left-most rectangle corner [Point2f(X, Y)] Coordinate of the right-most rectangle corner [Point2f(X+Width, Y+Height)] Determines if the specified point is contained within the rectangular region defined by this Rectangle. x-coordinate of the point y-coordinate of the point Determines if the specified point is contained within the rectangular region defined by this Rectangle. point Determines if the specified rectangle is contained within the rectangular region defined by this Rectangle. rectangle Inflates this Rect by the specified amount. The amount to inflate this Rectangle horizontally. The amount to inflate this Rectangle vertically. Inflates this Rect by the specified amount. The amount to inflate this rectangle. Creates and returns an inflated copy of the specified Rect2f structure. The Rectangle with which to start. This rectangle is not modified. The amount to inflate this Rectangle horizontally. The amount to inflate this Rectangle vertically. Determines the Rect2f structure that represents the intersection of two rectangles. A rectangle to intersect. A rectangle to intersect. Determines the Rect2f structure that represents the intersection of two rectangles. A rectangle to intersect. Determines if this rectangle intersects with rect. Rectangle Gets a Rect2f structure that contains the union of two Rect2f structures. A rectangle to union. Gets a Rect2f structure that contains the union of two Rect2f structures. A rectangle to union. A rectangle to union. Specifies whether this object contains the same members as the specified Object. The Object to test. 
This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. sizeof(RotatedRect) the rectangle mass center width and height of the rectangle the rotation angle. When the angle is 0, 90, 180, 270 etc., the rectangle becomes an up-right rectangle. returns 4 vertices of the rectangle returns the minimal up-right rectangle containing the rotated rectangle Template class for a 4-element vector derived from Vec. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. #F0F8FF #FAEBD7 #00FFFF #7FFFD4 #F0FFFF #F5F5DC #FFE4C4 #000000 #FFEBCD #0000FF #8A2BE2 #A52A2A #DEB887 #5F9EA0 #7FFF00 #D2691E #FF7F50 #6495ED #FFF8DC #DC143C #00FFFF #00008B #008B8B #B8860B #A9A9A9 #006400 #BDB76B #8B008B #556B2F #FF8C00 #9932CC #8B0000 #E9967A #8FBC8F #483D8B #2F4F4F #00CED1 #9400D3 #FF1493 #00BFFF #696969 #1E90FF #B22222 #FFFAF0 #228B22 #FF00FF #DCDCDC #F8F8FF #FFD700 #DAA520 #808080 #008000 #ADFF2F #F0FFF0 #FF69B4 #CD5C5C #4B0082 #FFFFF0 #F0E68C #E6E6FA #FFF0F5 #7CFC00 #FFFACD #ADD8E6 #F08080 #E0FFFF #FAFAD2 #D3D3D3 #90EE90 #FFB6C1 #FFA07A #20B2AA #87CEFA #778899 #B0C4DE #FFFFE0 #00FF00 #32CD32 #FAF0E6 #FF00FF #800000 #66CDAA #0000CD #BA55D3 #9370DB #3CB371 #7B68EE #00FA9A #48D1CC #C71585 #191970 #F5FFFA #FFE4E1 #FFE4B5 #FFDEAD #000080 #FDF5E6 #808000 #6B8E23 #FFA500 #FF4500 #DA70D6 #EEE8AA #98FB98 #AFEEEE #DB7093 #FFEFD5 #FFDAB9 #CD853F #FFC0CB #DDA0DD #B0E0E6 #800080 #FF0000 #BC8F8F #4169E1 #8B4513 #FA8072 #F4A460 #2E8B57 #FFF5EE #A0522D #C0C0C0 #87CEEB #6A5ACD #708090 #FFFAFA #00FF7F #4682B4 #D2B48C #008080 #D8BFD8 #FF6347 #40E0D0 #EE82EE #F5DEB3 #FFFFFF #F5F5F5 #FFFF00 #9ACD32 Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two CvPoint objects. The result specifies whether the members of each object are equal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are equal; otherwise, false. Compares two CvPoint objects. The result specifies whether the members of each object are unequal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are unequal; otherwise, false. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two CvPoint objects. The result specifies whether the members of each object are equal. A Point to compare. A Point to compare. 
This operator returns true if the members of left and right are equal; otherwise, false. Compares two CvPoint objects. The result specifies whether the members of each object are unequal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are unequal; otherwise, false. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. sizeof(Size2f) Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two CvPoint objects. The result specifies whether the members of each object are equal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are equal; otherwise, false. Compares two CvPoint objects. The result specifies whether the members of each object are unequal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are unequal; otherwise, false. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. The class defining termination criteria for iterative algorithms. the type of termination criteria: COUNT, EPS or COUNT + EPS the maximum number of iterations/elements the desired accuracy full constructor full constructor with both type (count | epsilon) 2-Tuple of byte (System.Byte) The value of the first component of this object. The value of the second component of this object. Initializer Indexer 2-Tuple of double (System.Double) The value of the first component of this object. The value of the second component of this object. Initializer Indexer 2-Tuple of float (System.Single) The value of the first component of this object. The value of the second component of this object. Initializer Indexer 2-Tuple of int (System.Int32) The value of the first component of this object. The value of the second component of this object. Initializer Indexer 2-Tuple of short (System.Int16) The value of the first component of this object. The value of the second component of this object. Initializer Indexer 2-Tuple of ushort (System.UInt16) The value of the first component of this object. The value of the second component of this object. Initializer Indexer 3-Tuple of byte (System.Byte) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. Initializer Indexer 3-Tuple of double (System.Double) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. Initializer Indexer 3-Tuple of float (System.Single) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. 
Initializer Indexer 3-Tuple of int (System.Int32) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. Initializer Indexer 3-Tuple of short (System.Int16) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. Initializer Indexer 3-Tuple of ushort (System.UInt16) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. Initializer Indexer 4-Tuple of byte (System.Byte) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. The value of the fourth component of this object. Initializer Indexer 4-Tuple of double (System.Double) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. The value of the fourth component of this object. Initializer Indexer 4-Tuple of float (System.Single) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. The value of the fourth component of this object. Initializer Indexer 4-Tuple of int (System.Int32) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. The value of the fourth component of this object. Initializer Indexer 4-Tuple of short (System.Int16) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. The value of the fourth component of this object. Initializer Indexer 4-Tuple of ushort (System.UInt16) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. The value of the fourth component of this object. Initializer Indexer 6-Tuple of byte (System.Byte) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. The value of the fourth component of this object. The value of the fifth component of this object. The value of the sixth component of this object. Initializer Indexer 6-Tuple of double (System.Double) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. The value of the fourth component of this object. The value of the fifth component of this object. The value of the sixth component of this object. Initializer Indexer 6-Tuple of float (System.Single) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. The value of the fourth component of this object. The value of the fifth component of this object. The value of the sixth component of this object. Initializer Indexer 6-Tuple of int (System.Int32) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. The value of the fourth component of this object. The value of the fifth component of this object. The value of the sixth component of this object.
Initializer Indexer 6-Tuple of short (System.Int16) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. The value of the fourth component of this object. The value of the fifth component of this object. The value of the sixth component of this object. Initializer Indexer 6-Tuple of ushort (System.UInt16) The value of the first component of this object. The value of the second component of this object. The value of the third component of this object. The value of the fourth component of this object. The value of the fifth component of this object. The value of the sixth component of this object. Initializer Indexer Singular Value Decomposition class the default constructor the constructor that performs SVD Releases unmanaged resources the left singular vectors (u) the vector of singular values (w) the transposed matrix of right singular vectors (vt) the operator that performs SVD. The previously allocated SVD::u, SVD::w and SVD::vt are released. performs back substitution, so that dst is the solution or pseudo-solution of m*dst = rhs, where m is the decomposed matrix decomposes matrix and stores the results to user-provided matrices computes singular values of a matrix performs back substitution finds dst = arg min_{|dst|=1} |m*dst| Operation flags for SVD enables modification of matrix src1 during the operation. It speeds up the processing. indicates that only a vector of singular values `w` is to be processed, while u and vt will be set to empty matrices when the matrix is not square, by default the algorithm produces u and vt matrices of sufficiently large size for the further reconstruction of A; if, however, the FULL_UV flag is specified, u and vt will be full-size square orthogonal matrices. TODO [HOGDescriptor::DESCR_FORMAT_ROW_BY_ROW] [HOGDescriptor::DESCR_FORMAT_COL_BY_COL] Gives information about the given GPU Creates DeviceInfo object for the current GPU Creates DeviceInfo object for the given GPU Releases unmanaged resources Return compute capability versions Return compute capability versions Checks whether device supports the given feature Checks whether the GPU module can be run on the given device An abstract class in GPU module that implements DisposableCvObject Default constructor Checks whether the opencv_gpu*.dll includes CUDA support. Smart pointer for GPU memory with reference counting. Its interface is mostly similar to that of cv::Mat. Creates from native cv::gpu::GpuMat* pointer Creates empty GpuMat constructs 2D matrix of the specified size and type Number of rows in a 2D array. Number of columns in a 2D array. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType.CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. constructor for matrix headers pointing to user-allocated data Number of rows in a 2D array. Number of columns in a 2D array. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType.CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it.
Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP), no padding is assumed and the actual step is calculated as cols*elemSize(). constructs 2D matrix of the specified size and type 2D array size: Size(cols, rows) Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType.CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. constructor for matrix headers pointing to user-allocated data 2D array size: Size(cols, rows) Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType.CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it. Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to AUTO_STEP), no padding is assumed and the actual step is calculated as cols*elemSize(). creates a matrix from another matrix Array that (as a whole) is assigned to the constructed matrix. creates a matrix from another matrix GpuMat that (as a whole) is assigned to the constructed matrix. constructs 2D matrix and fills it with the specified Scalar value. Number of rows in a 2D array. Number of columns in a 2D array. Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or MatType.CV_8UC(n), ..., CV_64FC(n) to create multi-channel matrices. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use the SetTo(Scalar s) method. constructs 2D matrix and fills it with the specified Scalar value. 2D array size: Size(cols, rows). Array type. Use MatType.CV_8UC1, ..., CV_64FC4 to create 1-4 channel matrices, or CV_8UC(n), ..., CV_64FC(n) to create multi-channel (up to CV_CN_MAX channels) matrices. An optional value to initialize each matrix element with. To set all the matrix elements to the particular value after the construction, use the SetTo(Scalar s) method. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. Range of the m rows to take. As usual, the range start is inclusive and the range end is exclusive. Use Range.All to take all the rows. Range of the m columns to take. Use Range.All to take all the columns. creates a matrix header for a part of the bigger matrix Array that (as a whole or partly) is assigned to the constructed matrix. Region of interest. Clean up any resources being used. Releases unmanaged resources converts header to GpuMat converts header to Mat includes several bit-fields: 1. the magic signature 2. continuity flag 3. depth 4. number of channels the number of rows the number of columns the number of rows the number of columns pointer to the data pointer to the reference counter; when matrix points to user-allocated data, the pointer is NULL helper fields used in locateROI and adjustROI helper fields used in locateROI and adjustROI Extracts a rectangular submatrix. Start row of the extracted submatrix. The upper boundary is not included.
End row of the extracted submatrix. The upper boundary is not included. Start column of the extracted submatrix. The upper boundary is not included. End column of the extracted submatrix. The upper boundary is not included. GpuMat Indexer 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element. Gets a type-specific indexer. The indexer has getters/setters to access each matrix element. Returns a value to the specified array element. Index along the dimension 0 Index along the dimension 1 A value to the specified array element. Returns a value to the specified array element. Index along the dimension 0 Index along the dimension 1 A value to the specified array element. Set a value to the specified array element. Index along the dimension 0 Index along the dimension 1 returns a new matrix header for the specified column span returns a new matrix header for the specified column span Mat column's indexer object Creates a matrix header for the specified matrix column. A 0-based column index. Creates a matrix header for the specified column span. An inclusive 0-based start index of the column span. An exclusive 0-based ending index of the column span. Indexer to access GpuMat column returns a new matrix header for the specified row span returns a new matrix header for the specified row span Mat row's indexer object Creates a matrix header for the specified matrix row. A 0-based row index. Creates a matrix header for the specified row span. An inclusive 0-based start index of the row span. An exclusive 0-based ending index of the row span. Indexer to access GpuMat row returns true iff the GpuMat data is continuous (i.e. when there are no gaps between successive rows). similar to CV_IS_GpuMat_CONT(cvGpuMat->type) Returns the number of matrix channels. Returns the depth of a matrix element. Returns the matrix element size in bytes. Returns the size of each matrix element channel in bytes. Returns a matrix size. a distance between successive rows in bytes; includes the gap if any Returns a normalized step. Returns the type of a matrix element. returns true if the GpuMat data is NULL Performs blocking upload of data to the GpuMat. Downloads data from device to host memory. Blocking calls (see the sketch after this block). returns deep copy of the matrix, i.e. the data is copied copies those matrix elements to "m" copies those matrix elements to "m" that are marked with non-zero mask elements. converts matrix to another datatype with optional scaling. See cvConvertScale. sets some of the matrix elements to s, according to the mask creates alternative matrix header for the same data, with different number of channels and/or different number of rows. see cvReshape. allocates new matrix data unless the matrix already has specified size and type. previous data is unreferenced if needed. Number of rows in a 2D array. Number of columns in a 2D array. Array type. allocates new matrix data unless the matrix already has specified size and type. previous data is unreferenced if needed. 2D array size: Size(cols, rows) Array type. swaps with other smart pointer locates matrix header within a parent matrix. moves/resizes the current matrix ROI inside the parent matrix. returns pointer to y-th row Returns a string that represents this Mat. Abstract definition of Mat indexer 2-dimensional indexer Index along the dimension 0 Index along the dimension 1 A value to the specified array element.
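The GpuMat upload/download summaries above describe a blocking host-to-device round trip. A minimal sketch of that workflow; the OpenCvSharp.Gpu namespace and the exact member/enum names are assumptions about the wrapper generation this documentation describes and may differ between versions:

```csharp
using OpenCvSharp;
using OpenCvSharp.Gpu;   // namespace assumed for the GPU wrapper described above

class GpuMatRoundTrip
{
    static void Main()
    {
        // Load a host-side image, push it to the device, then pull it back.
        using (var host = Cv2.ImRead("input.png", ImreadModes.Grayscale))
        using (var device = new GpuMat())
        using (var result = new Mat())
        {
            device.Upload(host);      // blocking host -> device copy
            // ... run GPU-accelerated routines against 'device' here ...
            device.Download(result);  // blocking device -> host copy
            Cv2.ImWrite("output.png", result);
        }
    }
}
```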
Parent matrix object Step byte length for each dimension sizeof(T) Constructor Creates/Sets a matrix header for the specified matrix row/column. Creates/Sets a matrix header for the specified row/column span. Creates/Sets a matrix header for the specified row/column span. Creates a matrix header for the specified matrix row/column. Creates a matrix header for the specified row/column span. Creates a matrix header for the specified row/column span. Creates/Sets a matrix header for the specified matrix row/column. Creates/Sets a matrix header for the specified row/column span. Creates/Sets a matrix header for the specified row/column span. Encapsulates a CUDA Stream. Provides an interface for asynchronous copying. Creates from native cv::gpu::Stream* pointer Creates empty Stream Clean up any resources being used. Releases unmanaged resources Empty stream Downloads asynchronously. Warning! cv::Mat must point to page locked memory (i.e. to CudaMem data or to its subMat) Uploads asynchronously. Warning! cv::Mat must point to page locked memory (i.e. to CudaMem data or to its ROI) Copy asynchronously Memory set asynchronously Memory set asynchronously converts matrix type, e.g. from float to uchar depending on type Adds a callback to be called on the host after all currently enqueued items in the stream have completed Not supported cv::dnn functions Reads a network model stored in Darknet (https://pjreddie.com/darknet/) model files. path to the .cfg file with text description of the network architecture. path to the .weights file with learned network. Network object ready to do forward; throws an exception in failure cases. This is a shortcut consisting of DarknetImporter and Net::populateNet calls. Reads a network model stored in Caffe model files. This is a shortcut consisting of createCaffeImporter and Net::populateNet calls. Reads a network model stored in a TensorFlow model file. This is a shortcut consisting of createTensorflowImporter and Net::populateNet calls. Reads a network model stored in a Torch model file. This is a shortcut consisting of createTorchImporter and Net::populateNet calls. Read deep learning network represented in one of the supported formats. This function automatically detects an origin framework of the trained model and calls an appropriate function such as @ref readNetFromCaffe or @ref readNetFromTensorflow. Binary file containing trained weights. The following file extensions are expected for models from different frameworks: `*.caffemodel` (Caffe, http://caffe.berkeleyvision.org/), `*.pb` (TensorFlow, https://www.tensorflow.org/), `*.t7` | `*.net` (Torch, http://torch.ch/), `*.weights` (Darknet, https://pjreddie.com/darknet/), `*.bin` (DLDT, https://software.intel.com/openvino-toolkit). Text file containing the network configuration. It could be a file with one of the following extensions: `*.prototxt` (Caffe, http://caffe.berkeleyvision.org/), `*.pbtxt` (TensorFlow, https://www.tensorflow.org/), `*.cfg` (Darknet, https://pjreddie.com/darknet/), `*.xml` (DLDT, https://software.intel.com/openvino-toolkit). Explicit framework name tag to determine a format. Loads a blob which was serialized as a torch.Tensor object of the Torch7 framework. This function has the same limitations as createTorchImporter(). Creates a blob from a .pb file. path to the .pb file with the input tensor. Creates a 4-dimensional blob from an image. Optionally resizes and crops @p image from the center, subtracts @p mean values, scales values by @p scalefactor, and swaps Blue and Red channels. input image (with 1 or 3 channels).
multiplier for @p image values. spatial size for output image scalar with mean values which are subtracted from channels. Values are intended to be in (mean-R, mean-G, mean-B) order if @p image has BGR ordering and @p swapRB is true. flag which indicates that swapping the first and last channels in a 3-channel image is necessary. flag which indicates whether the image will be cropped after resize or not 4-dimensional Mat with NCHW dimensions order. if @p crop is true, the input image is resized so one side after resize is equal to the corresponding dimension in @p size and the other one is equal or larger. Then, a crop from the center is performed. If @p crop is false, direct resize without cropping and preserving aspect ratio is performed. Creates a 4-dimensional blob from a series of images. Optionally resizes and crops @p images from the center, subtracts @p mean values, scales values by @p scalefactor, and swaps Blue and Red channels. input images (all with 1 or 3 channels). multiplier for @p image values. spatial size for output image scalar with mean values which are subtracted from channels. Values are intended to be in (mean-R, mean-G, mean-B) order if @p image has BGR ordering and @p swapRB is true. flag which indicates that swapping the first and last channels in a 3-channel image is necessary. flag which indicates whether the image will be cropped after resize or not 4-dimensional Mat with NCHW dimensions order. if @p crop is true, the input image is resized so one side after resize is equal to the corresponding dimension in @p size and the other one is equal or larger. Then, a crop from the center is performed. If @p crop is false, direct resize without cropping and preserving aspect ratio is performed. Convert all weights of a Caffe network to half precision floating point. Path to the origin model from the Caffe framework containing single precision floating point weights (usually with a `.caffemodel` extension). Path to the destination model with updated weights. The shrunken model has no origin float32 weights, so it can't be used in the origin Caffe framework anymore. However, the structure of the data is taken from NVidia's Caffe fork: https://github.com/NVIDIA/caffe. So the resulting model may be used there. Create a text representation for a binary network stored in protocol buffer format. A path to the binary network. A path to the output text file to be created. Performs non maximum suppression given boxes and corresponding scores. a set of bounding boxes to apply NMS. a set of corresponding confidences. a threshold used to filter boxes by score. a threshold used in non maximum suppression. the kept indices of bboxes after NMS. a coefficient in adaptive threshold formula if `>0`, keep at most @p top_k picked indices. Performs non maximum suppression given boxes and corresponding scores. a set of bounding boxes to apply NMS. a set of corresponding confidences. a threshold used to filter boxes by score. a threshold used in non maximum suppression. the kept indices of bboxes after NMS. a coefficient in adaptive threshold formula if `>0`, keep at most @p top_k picked indices. Performs non maximum suppression given boxes and corresponding scores. a set of bounding boxes to apply NMS. a set of corresponding confidences. a threshold used to filter boxes by score. a threshold used in non maximum suppression. the kept indices of bboxes after NMS. a coefficient in adaptive threshold formula if `>0`, keep at most @p top_k picked indices. Releases a Myriad device that is bound by OpenCV. A single Myriad device cannot be shared across multiple processes that use Inference Engine's Myriad plugin.
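The blob and NMS helpers above are typically combined with the Net class documented below: decode an image into an NCHW blob, run a forward pass, then suppress overlapping detections. A minimal sketch from OpenCvSharp; the model files, the 300x300 input geometry, and the mean values are placeholders for whatever detector is actually loaded, and the output decoding depends entirely on that model's layout:

```csharp
using System;
using System.Collections.Generic;
using OpenCvSharp;
using OpenCvSharp.Dnn;

class DnnDetectionSketch
{
    static void Main()
    {
        // Placeholder model files; any of the supported formats listed above works.
        using var net = CvDnn.ReadNetFromCaffe("deploy.prototxt", "model.caffemodel");
        using var image = Cv2.ImRead("input.jpg");

        // 4-dimensional NCHW blob: scale factor 1.0, resized to 300x300,
        // mean subtraction, no BGR<->RGB swap, no center crop.
        using var blob = CvDnn.BlobFromImage(
            image, 1.0, new Size(300, 300), new Scalar(104, 177, 123), false, false);

        net.SetInput(blob);
        using var output = net.Forward();

        // Decode 'output' into candidate boxes and scores according to the model,
        // then let NMSBoxes keep the indices of the non-overlapping detections.
        var boxes = new List<Rect>();
        var scores = new List<float>();
        // ... fill boxes/scores from 'output' here ...
        CvDnn.NMSBoxes(boxes, scores, 0.5f, 0.4f, out int[] keptIndices);
        Console.WriteLine($"{keptIndices.Length} detections kept");
    }
}
```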
This class allows creating and manipulating comprehensive artificial neural networks. A neural network is presented as a directed acyclic graph (DAG), where vertices are Layer instances, and edges specify relationships between layer inputs and outputs. Each network layer has a unique integer id and a unique string name inside its network. LayerId can store either a layer name or a layer id. This class supports reference counting of its instances, i.e. copies point to the same instance. Default constructor. Reads a network model stored in Darknet (https://pjreddie.com/darknet/) model files. path to the .cfg file with text description of the network architecture. path to the .weights file with learned network. Network object ready to do forward; throws an exception in failure cases. This is a shortcut consisting of DarknetImporter and Net::populateNet calls. Reads a network model stored in Caffe model files. This is a shortcut consisting of createCaffeImporter and Net::populateNet calls. Reads a network model stored in a TensorFlow model file. This is a shortcut consisting of createTensorflowImporter and Net::populateNet calls. Reads a network model stored in a Torch model file. This is a shortcut consisting of createTorchImporter and Net::populateNet calls. Read deep learning network represented in one of the supported formats. This function automatically detects an origin framework of the trained model and calls an appropriate function such as @ref readNetFromCaffe or @ref readNetFromTensorflow. Binary file containing trained weights. The following file extensions are expected for models from different frameworks: `*.caffemodel` (Caffe, http://caffe.berkeleyvision.org/), `*.pb` (TensorFlow, https://www.tensorflow.org/), `*.t7` | `*.net` (Torch, http://torch.ch/), `*.weights` (Darknet, https://pjreddie.com/darknet/), `*.bin` (DLDT, https://software.intel.com/openvino-toolkit). Text file containing the network configuration. It could be a file with one of the following extensions: `*.prototxt` (Caffe, http://caffe.berkeleyvision.org/), `*.pbtxt` (TensorFlow, https://www.tensorflow.org/), `*.cfg` (Darknet, https://pjreddie.com/darknet/), `*.xml` (DLDT, https://software.intel.com/openvino-toolkit). Explicit framework name tag to determine a format. Load a network from Intel's Model Optimizer intermediate representation. Networks imported from Intel's Model Optimizer are launched in Intel's Inference Engine backend. XML configuration file with the network's topology. Binary file with trained weights. Reads a network model stored in ONNX (https://onnx.ai/) format. path to the .onnx file with description of the network architecture. Network object ready to do forward; throws an exception in failure cases. Returns true if there are no layers in the network. Converts the string name of the layer to the integer identifier. id of the layer, or -1 if the layer wasn't found. Connects output of the first layer to input of the second layer. descriptor of the first layer output. descriptor of the second layer input. Connects #@p outNum output of the first layer to #@p inNum input of the second layer. identifier of the first layer identifier of the second layer number of the first layer output number of the second layer input Sets output names of the network input pseudo-layer. Each net always has its own special network input pseudo-layer with id=0. This layer stores the user blobs only and doesn't make any computations. In fact, this layer provides the only way to pass user data into the network.
As any other layer, this layer can label its outputs, and this function provides an easy way to do this. Runs forward pass to compute the output of the layer with name @p outputName. By default runs forward pass for the whole network. name of the layer whose output is needed blob for the first output of the specified layer. Runs forward pass to compute the output of the layer with name @p outputName. contains all output blobs for the specified layer. name of the layer whose output is needed. If outputName is empty, runs forward pass for the whole network. Runs forward pass to compute the outputs of layers listed in @p outBlobNames. contains blobs for the first outputs of the specified layers. names of the layers whose outputs are needed Compile Halide layers. Schedules layers that support the Halide backend, then compiles them for a specific target. For layers that are not represented in the scheduling file, or if no manual scheduling is used at all, automatic scheduling will be applied. Path to a YAML file with scheduling directives. Ask the network to use a specific computation backend where it is supported. backend identifier. Ask the network to make computations on a specific target device. target identifier. Sets the new value for the layer output blob new blob. descriptor of the updating layer output blob. See connect(String, String) to know the format of the descriptor. If the updating blob is not empty then @p blob must have the same shape, because network reshaping is not implemented yet. Returns indexes of layers with unconnected outputs. Returns names of layers with unconnected outputs. Enables or disables layer fusion in the network. true to enable the fusion, false to disable. The fusion is enabled by default. Returns overall time for inference and timings (in ticks) for layers. Indexes in the returned vector correspond to layer ids. Some layers can be fused with others; in this case a zero tick count will be returned for those skipped layers. vector for tick timings for all layers. overall ticks for model inference. Abstract base class for all facemark models. All facemark models in OpenCV are derived from the abstract base class Facemark, which provides a unified access to all facemark algorithms in OpenCV. To utilize this API in your program, please take a look at the @ref tutorial_table_of_content_facemark A function to load the trained model before the fitting process. A string representing the filename of a trained model. Trains a Facemark algorithm using the given dataset. Input image. Output of the function which represents the regions of interest of the detected faces. Each face is stored in a cv::Rect container. The detected landmark points for each face. Get data from an algorithm The obtained data, algorithm dependent.
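A minimal sketch of the load-then-fit workflow described above, assuming the FacemarkLBF implementation and a pre-trained landmark model; the file names are placeholders, and the exact Fit overload (faces passed via InputArray.Create) is an assumption about this wrapper rather than a documented signature:

```csharp
using OpenCvSharp;
using OpenCvSharp.Face;

class FacemarkSketch
{
    static void Main()
    {
        // Placeholder model and cascade file names.
        using var facemark = FacemarkLBF.Create();
        facemark.LoadModel("lbfmodel.yaml");

        using var gray = Cv2.ImRead("face.jpg", ImreadModes.Grayscale);
        using var detector = new CascadeClassifier("haarcascade_frontalface_alt.xml");
        Rect[] faces = detector.DetectMultiScale(gray);

        // Fit returns one landmark array per detected face (Fit signature assumed).
        if (facemark.Fit(gray, InputArray.Create(faces), out Point2f[][] landmarks))
        {
            foreach (Point2f[] face in landmarks)
                foreach (Point2f pt in face)
                    Cv2.Circle(gray, (int)pt.X, (int)pt.Y, 2, Scalar.White, -1);
            Cv2.ImWrite("landmarks.png", gray);
        }
    }
}
```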
Releases managed resources Constructor Releases managed resources filename of the model show the training print-out flag to save the trained model or not Releases managed resources Constructor Releases managed resources offset for the loaded face landmark points filename of the face detector model show the training print-out number of landmark points multiplier for augmenting the training data number of refinement stages number of trees in the model for each landmark point refinement the depth of the decision tree, defines the size of the feature overlap ratio for training the LBF feature filename where the trained model will be saved flag to save the trained model or not seed for shuffling the training data index of facemark points on pupils of left and right eye index of facemark points on pupils of left and right eye base for two FaceRecognizer classes Creates instance from cv::Ptr<T>. ptr is disposed when the wrapper disposes. Releases managed resources Training and prediction must be done on grayscale images, use cvtColor to convert between the color spaces. - **THE EIGENFACES METHOD MAKES THE ASSUMPTION THAT THE TRAINING AND TEST IMAGES ARE OF EQUAL SIZE.** (caps-lock, because I got so many mails asking for this). You have to make sure your input data has the correct shape, else a meaningful exception is thrown. Use resize to resize the images. - This model does not support updating. Releases managed resources Training and prediction must be done on grayscale images, use cvtColor to convert between the color spaces. - **THE EIGENFACES METHOD MAKES THE ASSUMPTION THAT THE TRAINING AND TEST IMAGES ARE OF EQUAL SIZE.** (caps-lock, because I got so many mails asking for this). You have to make sure your input data has the correct shape, else a meaningful exception is thrown. Use resize to resize the images. - This model does not support updating. The number of components (read: Eigenfaces) kept for this Principal Component Analysis. As a hint: There's no rule for how many components (read: Eigenfaces) should be kept for good reconstruction capabilities. It is based on your input data, so experiment with the number. Keeping 80 components should almost always be sufficient. The threshold applied in the prediction. Abstract base class for all face recognition models. All face recognition models in OpenCV are derived from the abstract base class FaceRecognizer, which provides a unified access to all face recognition algorithms in OpenCV. Creates instance from cv::Ptr<T>. ptr is disposed when the wrapper disposes. Releases managed resources Trains a FaceRecognizer with given data and associated labels. Updates a FaceRecognizer with given data and associated labels. Gets a prediction from a FaceRecognizer. Predicts the label and confidence for a given sample. Serializes this object to a given filename. Deserializes this object from a given filename. Serializes this object to a given cv::FileStorage. Deserializes this object from a given cv::FileNode. Sets string info for the specified model's label. The string info is replaced by the provided value if it was set before for the specified label. Gets string information by label. If an unknown label id is provided or there is no label information associated with the specified label id the method returns an empty string. Gets vector of labels by string. The function searches for the labels containing the specified sub-string in the associated string info.
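A minimal sketch of the train/predict cycle shared by the FaceRecognizer subclasses above, using the LBPH recognizer (which, unlike Eigenfaces and Fisherfaces, also supports updating); the image file names and label values are placeholders:

```csharp
using System;
using System.Collections.Generic;
using OpenCvSharp;
using OpenCvSharp.Face;

class FaceRecognizerSketch
{
    static void Main()
    {
        // Grayscale training images with an integer label per image (placeholders).
        var images = new List<Mat>
        {
            Cv2.ImRead("alice_1.png", ImreadModes.Grayscale),
            Cv2.ImRead("alice_2.png", ImreadModes.Grayscale),
            Cv2.ImRead("bob_1.png", ImreadModes.Grayscale),
        };
        var labels = new List<int> { 0, 0, 1 };

        using var recognizer = LBPHFaceRecognizer.Create();
        recognizer.Train(images, labels);

        // Predict the label and a distance-based confidence for an unseen sample.
        using var probe = Cv2.ImRead("unknown.png", ImreadModes.Grayscale);
        recognizer.Predict(probe, out int predictedLabel, out double confidence);
        Console.WriteLine($"label={predictedLabel} confidence={confidence}");

        foreach (var img in images) img.Dispose();
    }
}
```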
threshold parameter accessor - required for default BestMinDist collector Sets threshold of model Training and prediction must be done on grayscale images, use cvtColor to convert between the color spaces. - **THE FISHERFACES METHOD MAKES THE ASSUMPTION, THAT THE TRAINING AND TEST IMAGES ARE OF EQUAL SIZE. ** (caps-lock, because I got so many mails asking for this). You have to make sure your input data has the correct shape, else a meaningful exception is thrown.Use resize to resize the images. - This model does not support updating. Releases managed resources Training and prediction must be done on grayscale images, use cvtColor to convert between the color spaces. - **THE FISHERFACES METHOD MAKES THE ASSUMPTION, THAT THE TRAINING AND TEST IMAGES ARE OF EQUAL SIZE. ** (caps-lock, because I got so many mails asking for this). You have to make sure your input data has the correct shape, else a meaningful exception is thrown.Use resize to resize the images. - This model does not support updating. The number of components (read: Fisherfaces) kept for this Linear Discriminant Analysis with the Fisherfaces criterion. It's useful to keep all components, that means the number of your classes c (read: subjects, persons you want to recognize). If you leave this at the default (0) or set it to a value less-equal 0 or greater (c-1), it will be set to the correct number (c-1) automatically. The threshold applied in the prediction. If the distance to the nearest neighbor is larger than the threshold, this method returns -1. The Circular Local Binary Patterns (used in training and prediction) expect the data given as grayscale images, use cvtColor to convert between the color spaces. This model supports updating. Releases managed resources The Circular Local Binary Patterns (used in training and prediction) expect the data given as grayscale images, use cvtColor to convert between the color spaces. This model supports updating. The radius used for building the Circular Local Binary Pattern. The greater the radius, the The number of sample points to build a Circular Local Binary Pattern from. An appropriate value is to use `8` sample points.Keep in mind: the more sample points you include, the higher the computational cost. The number of cells in the horizontal direction, 8 is a common value used in publications. The more cells, the finer the grid, the higher the dimensionality of the resulting feature vector. The number of cells in the vertical direction, 8 is a common value used in publications. The more cells, the finer the grid, the higher the dimensionality of the resulting feature vector. The threshold applied in the prediction. If the distance to the nearest neighbor is larger than the threshold, this method returns -1. Detects corners using the AGAST algorithm The AgastFeatureDetector constructor threshold on difference between intensity of the central pixel and pixels of a circle around this pixel. if true, non-maximum suppression is applied to detected corners (keypoints). Releases managed resources threshold on difference between intensity of the central pixel and pixels of a circle around this pixel. if true, non-maximum suppression is applied to detected corners (keypoints). type one of the four neighborhoods as defined in the paper AGAST type one of the four neighborhoods as defined in the paper Class implementing the AKAZE keypoint detector and descriptor extractor, described in @cite ANB13 AKAZE descriptors can only be used with KAZE or AKAZE keypoints. 
Try to avoid using *extract* and *detect* instead of *operator()* due to performance reasons. .. [ANB13] Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces. Pablo F. Alcantarilla, Jesús Nuevo and Adrien Bartoli. In British Machine Vision Conference (BMVC), Bristol, UK, September 2013. The AKAZE constructor Releases managed resources Brute-force descriptor matcher. For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one. Creates instance by cv::Ptr<T> Creates instance by raw pointer T* Creates instance from cv::Ptr<T> . ptr is disposed when the wrapper disposes. Releases managed resources Releases managed resources Return true if the matcher supports mask in match methods. Brute-force descriptor matcher. For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one. The constructor. Descriptor extractor that is used to compute descriptors for an input image and its keypoints. Descriptor matcher that is used to find the nearest word of the trained vocabulary for each keypoint descriptor of the image. The constructor. Descriptor matcher that is used to find the nearest word of the trained vocabulary for each keypoint descriptor of the image. Releases unmanaged resources Sets a visual vocabulary. Vocabulary (can be trained using the inheritor of BOWTrainer ). Each row of the vocabulary is a visual word(cluster center). Returns the set vocabulary. Computes an image descriptor using the set visual vocabulary. Image, for which the descriptor is computed. Keypoints detected in the input image. Computed output image descriptor. pointIdxsOfClusters Indices of keypoints that belong to the cluster. This means that pointIdxsOfClusters[i] are keypoint indices that belong to the i -th cluster(word of vocabulary) returned if it is non-zero. Descriptors of the image keypoints that are returned if they are non-zero. Computes an image descriptor using the set visual vocabulary. Computed descriptors to match with vocabulary. Computed output image descriptor. Indices of keypoints that belong to the cluster. This means that pointIdxsOfClusters[i] are keypoint indices that belong to the i -th cluster(word of vocabulary) returned if it is non-zero. Computes an image descriptor using the set visual vocabulary. Image, for which the descriptor is computed. Keypoints detected in the input image. Computed output image descriptor. Returns an image descriptor size if the vocabulary is set. Otherwise, it returns 0. Returns an image descriptor type. Brute-force descriptor matcher. For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one. Releases unmanaged resources Clusters train descriptors. Clusters train descriptors. Descriptors to cluster. Each row of the descriptors matrix is a descriptor. Descriptors are not added to the inner train descriptor set. The vocabulary consists of cluster centers. So, this method returns the vocabulary. In the first variant of the method, train descriptors stored in the object are clustered.In the second variant, input descriptors are clustered. Brute-force descriptor matcher. For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one. Adds descriptors to a training set. descriptors Descriptors to add to a training set. Each row of the descriptors matrix is a descriptor. 
The training set is clustered using the cluster method to construct the vocabulary. Returns a training set of descriptors. Returns the count of all descriptors stored in the training set. Clusters train descriptors. Clusters train descriptors. Descriptors to cluster. Each row of the descriptors matrix is a descriptor. Descriptors are not added to the inner train descriptor set. The vocabulary consists of cluster centers. So, this method returns the vocabulary. In the first variant of the method, train descriptors stored in the object are clustered. In the second variant, input descriptors are clustered. BRISK implementation custom setup Releases managed resources Create descriptor matcher by type name. Creates instance from cv::Ptr<T>. ptr is disposed when the wrapper disposes. Creates instance from raw pointer T* Releases managed resources Add descriptors to train descriptor collection. Descriptors to add. Each descriptors[i] is a descriptor set from one image. Get train descriptors collection. Clear train descriptors collection. Returns true if there are no train descriptors in the collection. Return true if the matcher supports mask in match methods. Train matcher (e.g. train flann index). In all methods to match, the method train() is run every time before matching. Some descriptor matchers (e.g. BruteForceMatcher) have an empty implementation of this method, other matchers really train their inner structures (e.g. FlannBasedMatcher trains flann::Index). So a nonempty implementation of train() should check the class object state and do training/retraining only if the state requires that (e.g. FlannBasedMatcher trains flann::Index if it has not been trained yet or if new descriptors have been added to the train collection). Find one best match for each query descriptor (if mask is empty). Find k best matches for each query descriptor (in increasing order of distances). compactResult is used when mask is not empty. If compactResult is false, the matches vector will have the same size as queryDescriptors rows. If compactResult is true, the matches vector will not contain matches for fully masked-out query descriptors. Find best matches for each query descriptor which have distance less than maxDistance (in increasing order of distances). Find one best match for each query descriptor (if mask is empty). Find k best matches for each query descriptor (in increasing order of distances). compactResult is used when mask is not empty. If compactResult is false, the matches vector will have the same size as queryDescriptors rows. If compactResult is true, the matches vector will not contain matches for fully masked-out query descriptors. Find best matches for each query descriptor which have distance less than maxDistance (in increasing order of distances). cv::AKAZE descriptor type Upright descriptors, not invariant to rotation Upright descriptors, not invariant to rotation Output image matrix will be created (Mat::create), i.e. existing memory of output image may be reused. Two source images, matches, and single keypoints will be drawn. For each keypoint only the center point will be drawn (without the circle around the keypoint with keypoint size and orientation). Output image matrix will not be created (Mat::create). Matches will be drawn on existing content of output image. Single keypoints will not be drawn. For each keypoint the circle around the keypoint with keypoint size and orientation will be drawn.
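The detector, descriptor, and matcher members documented above compose naturally. The following OpenCvSharp sketch is a hedged illustration, not library documentation: it detects AKAZE keypoints in two images and matches the binary descriptors with a brute-force Hamming matcher; the file names are placeholders.

```csharp
using System;
using System.Linq;
using OpenCvSharp;

class AkazeMatchSketch
{
    static void Main()
    {
        using var img1 = Cv2.ImRead("a.png", ImreadModes.Grayscale);
        using var img2 = Cv2.ImRead("b.png", ImreadModes.Grayscale);

        using var akaze = AKAZE.Create();
        using var desc1 = new Mat();
        using var desc2 = new Mat();
        akaze.DetectAndCompute(img1, null, out KeyPoint[] kp1, desc1);
        akaze.DetectAndCompute(img2, null, out KeyPoint[] kp2, desc2);

        // AKAZE (MLDB) descriptors are binary, so the Hamming norm is appropriate;
        // crossCheck = true keeps only mutual best matches.
        using var matcher = new BFMatcher(NormTypes.Hamming, true);
        DMatch[] matches = matcher.Match(desc1, desc2);

        Console.WriteLine($"{kp1.Length}/{kp2.Length} keypoints, {matches.Length} matches, " +
                          $"best distance {matches.Min(m => m.Distance)}");
    }
}
```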
AGAST type one of the four neighborhoods as defined in the paper cv::KAZE diffusivity type cv::ORB score flags Detects corners using the FAST algorithm by E. Rosten Releases managed resources Abstract base class for 2D image feature detectors and descriptor extractors Return true if the detector object is empty Detect keypoints in an image. The image. Mask specifying where to look for keypoints (optional). Must be a char matrix with non-zero values in the region of interest. The detected keypoints. Detect keypoints in an image. The image. Mask specifying where to look for keypoints (optional). Must be a char matrix with non-zero values in the region of interest. The detected keypoints. Detect keypoints in an image set. Image collection. Masks for image set. masks[i] is a mask for images[i]. Collection of keypoints detected in the input images. keypoints[i] is a set of keypoints detected in images[i]. Compute the descriptors for a set of keypoints in an image. The image. The input keypoints. Keypoints for which a descriptor cannot be computed are removed. Computed descriptors. Row i is the descriptor for keypoint i. Compute the descriptors for a keypoints collection detected in an image collection. Image collection. Input keypoints collection. keypoints[i] is the set of keypoints detected in images[i]. Keypoints for which a descriptor cannot be computed are removed. Descriptor collection. descriptors[i] are descriptors computed for the set keypoints[i]. Detects keypoints and computes the descriptors Brute-force descriptor matcher. For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one. Creates instance by cv::Ptr<T> Creates instance by raw pointer T* Creates instance from cv::Ptr<T>. ptr is disposed when the wrapper disposes. Releases managed resources Releases managed resources Return true if the matcher supports mask in match methods. Add descriptors to train descriptor collection. Descriptors to add. Each descriptors[i] is a descriptor set from one image. Clear train descriptors collection. Train matcher (e.g. train flann index). In all methods to match, the method train() is run every time before matching. Some descriptor matchers (e.g. BruteForceMatcher) have an empty implementation of this method, other matchers really train their inner structures (e.g. FlannBasedMatcher trains flann::Index). So a nonempty implementation of train() should check the class object state and do training/retraining only if the state requires that (e.g. FlannBasedMatcher trains flann::Index if it has not been trained yet or if new descriptors have been added to the train collection). Good Features To Track Detector Releases managed resources Class implementing the KAZE keypoint detector and descriptor extractor The KAZE constructor Set to enable extraction of extended (128-byte) descriptor. Set to enable use of upright descriptors (non rotation-invariant). Detector response threshold to accept point Maximum octave evolution of the image Default number of sublevels per scale level Diffusivity type. DIFF_PM_G1, DIFF_PM_G2, DIFF_WEICKERT or DIFF_CHARBONNIER Releases managed resources A class that filters a vector of keypoints. Remove keypoints within borderPixels of an image edge. Remove keypoints of sizes out of range. Remove keypoints from some image by mask for pixels of this image. Remove duplicated keypoints.
Retain the specified number of the best keypoints (according to the response) Maximally Stable Extremal Regions class Creates instance by raw pointer cv::MSER* Creates MSER parameters delta, in the code, it compares (size_{i}-size_{i-delta})/size_{i-delta} prune areas smaller than min_area prune areas bigger than max_area prune areas that have similar size to their children trace back to cut off MSERs with diversity < min_diversity for color images, the evolution steps the area threshold that causes re-initialization ignore margins that are too small the aperture size for edge blur Releases managed resources Class implementing the ORB (*oriented BRIEF*) keypoint detector and descriptor extractor described in @cite RRKB11. The algorithm uses FAST in pyramids to detect stable keypoints, selects the strongest features using FAST or Harris response, finds their orientation using first-order moments and computes the descriptors using BRIEF (where the coordinates of random point pairs (or k-tuples) are rotated according to the measured orientation). Releases managed resources Class for extracting blobs from an image. SimpleBlobDetector parameters Releases managed resources The algorithm to use for selecting the initial centers when performing a k-means clustering step. picks the initial cluster centers randomly [flann_centers_init_t::CENTERS_RANDOM] picks the initial centers using Gonzales' algorithm [flann_centers_init_t::CENTERS_GONZALES] picks the initial centers using the algorithm suggested in [arthur_kmeanspp_2007] [flann_centers_init_t::CENTERS_KMEANSPP] The FLANN nearest neighbor index class. Constructs a nearest neighbor search index for a given dataset. features – Matrix of type CV_32F containing the features (points) to index. The size of the matrix is num_features x feature_dimensionality. Structure containing the index parameters. The type of index that will be constructed depends on the type of this parameter. Releases unmanaged resources Performs a K-nearest neighbor search for multiple query points. The query points, one per row Indices of the nearest neighbors found Distances to the nearest neighbors found Number of nearest neighbors to search for Search parameters Performs a K-nearest neighbor search for multiple query points. The query points, one per row Indices of the nearest neighbors found Distances to the nearest neighbors found Number of nearest neighbors to search for Search parameters Performs a K-nearest neighbor search for multiple query points. The query points, one per row Indices of the nearest neighbors found Distances to the nearest neighbors found Number of nearest neighbors to search for Search parameters Performs a radius nearest neighbor search for a given query point. The query point Indices of the nearest neighbors found Distances to the nearest neighbors found Number of nearest neighbors to search for Search parameters Performs a radius nearest neighbor search for a given query point. The query point Indices of the nearest neighbors found Distances to the nearest neighbors found Number of nearest neighbors to search for Search parameters Performs a radius nearest neighbor search for a given query point. The query point Indices of the nearest neighbors found Distances to the nearest neighbors found Number of nearest neighbors to search for Search parameters Saves the index to a file. The file to save the index to hierarchical k-means tree.
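As a hedged sketch of the FLANN Index and KnnSearch members documented above (the dataset here is random CV_32F rows; the index and search parameters are illustrative assumptions, and overload shapes may vary slightly between OpenCvSharp versions):

```csharp
using OpenCvSharp;
using OpenCvSharp.Flann;

class FlannSketch
{
    static void Main()
    {
        // 100 dataset points and 5 query points, one 2-D point per row (CV_32F).
        using var features = new Mat(100, 2, MatType.CV_32FC1);
        using var queries = new Mat(5, 2, MatType.CV_32FC1);
        Cv2.Randu(features, Scalar.All(0), Scalar.All(100));
        Cv2.Randu(queries, Scalar.All(0), Scalar.All(100));

        // Four randomized kd-trees, following the KDTreeIndexParams description above.
        using var index = new OpenCvSharp.Flann.Index(features, new KDTreeIndexParams(4));
        using var indices = new Mat();
        using var dists = new Mat();
        index.KnnSearch(queries, indices, dists, 3, new SearchParams());
        // indices/dists are 5 x 3: ids and distances of the 3 nearest neighbors per query row.
    }
}
```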
Is a number between 0 and 1 specifying the percentage of the approximate nearest-neighbor searches that return the exact nearest-neighbor. Using a higher value for this parameter gives more accurate results, but the search takes longer. The optimum value usually depends on the application. Specifies the importance of the index build time relative to the nearest-neighbor search time. In some applications it's acceptable for the index build step to take a long time if the subsequent searches in the index can be performed very fast. In other applications it's required that the index be built as fast as possible even if that leads to slightly longer search times. Is used to specify the tradeoff between time (index build time and search time) and memory used by the index. A value less than 1 gives more importance to the time spent and a value greater than 1 gives more importance to the memory usage. Is a number between 0 and 1 indicating what fraction of the dataset to use in the automatic parameter configuration algorithm. Running the algorithm on the full dataset gives the most accurate results, but for very large datasets it can take longer than desired. In such cases, using just a fraction of the data helps speed up this algorithm while still giving good approximations of the optimum parameters. When using a parameters object of this type the index created combines the randomized kd-trees and the hierarchical k-means tree. The number of parallel kd-trees to use. Good values are in the range [1..16] The branching factor to use for the hierarchical k-means tree The maximum number of iterations to use in the k-means clustering stage when building the k-means tree. A value of -1 used here means that the k-means clustering should be iterated until convergence The algorithm to use for selecting the initial centers when performing a k-means clustering step. This parameter (cluster boundary index) influences the way exploration is performed in the hierarchical k-means tree. When cb_index is zero the next k-means domain to be explored is chosen to be the one with the closest center. A value greater than zero also takes into account the size of the domain. Releases managed resources When passing an object of this type the index constructed will consist of a set of randomized kd-trees which will be searched in parallel. The number of parallel kd-trees to use. Good values are in the range [1..16] When passing an object of this type the index constructed will be a hierarchical k-means tree. The branching factor to use for the hierarchical k-means tree The maximum number of iterations to use in the k-means clustering stage when building the k-means tree. A value of -1 used here means that the k-means clustering should be iterated until convergence The algorithm to use for selecting the initial centers when performing a k-means clustering step. This parameter (cluster boundary index) influences the way exploration is performed in the hierarchical k-means tree. When cb_index is zero the next k-means domain to be explored is chosen to be the one with the closest center. A value greater than zero also takes into account the size of the domain. the index will perform a linear, brute-force search. When using a parameters object of this type the index created uses multi-probe LSH (by Multi-Probe LSH: Efficient Indexing for High-Dimensional Similarity Search by Qin Lv, William Josephson, Zhe Wang, Moses Charikar, Kai Li, Proceedings of the 33rd International Conference on Very Large Data Bases (VLDB).
Vienna, Austria. September 2007) The number of hash tables to use (between 10 and 30 usually). The size of the hash key in bits (between 10 and 20 usually). The number of bits to shift to check for neighboring buckets (0 is regular LSH, 2 is recommended). This object type is used for loading a previously saved index from the disk. Delegate to be called every time a mouse event occurs in the specified window. one of CV_EVENT_ x-coordinates of mouse pointer in image coordinates y-coordinates of mouse pointer in image coordinates a combination of CV_EVENT_FLAG Trackbar that is shown on CvWindow Constructor (value=0, max=100) Trackbar name Window name Callback handler Constructor (value=0, max=100) Trackbar name Window name Callback handler Constructor Trackbar name Window name Initial slider position The upper limit of the range this trackbar is working with. Callback handler Constructor Trackbar name Window name Initial slider position The upper limit of the range this trackbar is working with. Callback handler Releases unmanaged resources Name of this trackbar Name of parent window Gets or sets a numeric value that represents the current position of the scroll box on the track bar. Gets the upper limit of the range this trackbar is working with. Gets the callback delegate which occurs when the Value property of a track bar changes, either by movement of the scroll box or by manipulation in code. Sets the trackbar maximum position. The function sets the maximum position of the specified trackbar in the specified window. New maximum position. Sets the trackbar minimum position. The function sets the minimum position of the specified trackbar in the specified window. New minimum position. Delegate to be called every time the slider changes the position. Delegate to be called every time the slider changes the position. Button type flags (cv::createButton) The button will be a push button. The button will be a checkbox button. The button will be a radiobox button. The radioboxes on the same buttonbar (same line) are exclusive; only one can be selected at a time. Mouse events [EVENT_MOUSEMOVE] [EVENT_LBUTTONDOWN] [EVENT_RBUTTONDOWN] [CV_EVENT_MBUTTONDOWN] [EVENT_LBUTTONUP] [EVENT_RBUTTONUP] [EVENT_MBUTTONUP] [EVENT_LBUTTONDBLCLK] [EVENT_RBUTTONDBLCLK] [EVENT_MBUTTONDBLCLK] [EVENT_MOUSEWHEEL] [EVENT_MOUSEHWHEEL] [EVENT_FLAG_LBUTTON] [EVENT_FLAG_RBUTTON] [EVENT_FLAG_MBUTTON] [EVENT_FLAG_CTRLKEY] [EVENT_FLAG_SHIFTKEY] [EVENT_FLAG_ALTKEY] Flags for the window the user can resize the window (no constraint) / also used to switch a fullscreen window to a normal size the user cannot resize the window, the size is constrained by the image displayed window with opengl support change the window to fullscreen the image expands as much as it can (no ratio constraint) the ratio of the image is respected Property identifiers for cvGetWindowProperty/cvSetWindowProperty fullscreen property (can be WINDOW_NORMAL or WINDOW_FULLSCREEN) autosize property (can be WINDOW_NORMAL or WINDOW_AUTOSIZE) window's aspect ratio (can be set to WINDOW_FREERATIO or WINDOW_KEEPRATIO) opengl support Managed wrapper of all OpenCV functions Gets the FOURCC integer value from four characters Gets the FOURCC integer value from four characters Gets the FOURCC integer value from four characters Wrapper of HighGUI window Creates a window with a random name Creates a window with a random name and a specified image Creates a window with a specified image and flag Flags of the window. Currently the only supported flag is WindowMode.AutoSize.
If it is set, window size is automatically adjusted to fit the displayed image (see cvShowImage), while the user cannot change the window size manually. Creates a window Name of the window which is used as window identifier and appears in the window caption. Creates a window Name of the window which is used as window identifier and appears in the window caption. Flags of the window. Currently the only supported flag is WindowMode.AutoSize. If it is set, window size is automatically adjusted to fit the displayed image (see cvShowImage), while the user cannot change the window size manually. Creates a window Name of the window which is used as window identifier and appears in the window caption. Image to be shown. Creates a window Name of the window which is used as window identifier and appears in the window caption. Flags of the window. Currently the only supported flag is WindowMode.AutoSize. If it is set, window size is automatically adjusted to fit the displayed image (see cvShowImage), while the user cannot change the window size manually. Image to be shown. Creates and returns a suitable name when no window name is specified. Releases managed resources Destroys this window. Destroys all the opened HighGUI windows. Gets or sets an image to be shown Gets window name Gets window handle Returns true if the library is compiled with Qt Creates the trackbar and attaches it to this window Name of created trackbar. the function to be called every time the slider changes the position. This function should be prototyped as void Foo(int); Creates the trackbar and attaches it to this window Name of created trackbar. the function to be called every time the slider changes the position. This function should be prototyped as void Foo(int); Creates the trackbar and attaches it to this window Name of created trackbar. The position of the slider Maximal position of the slider. Minimal position is always 0. the function to be called every time the slider changes the position. This function should be prototyped as void Foo(int); Creates the trackbar and attaches it to this window Name of created trackbar. The position of the slider Maximal position of the slider. Minimal position is always 0. the function to be called every time the slider changes the position. This function should be prototyped as void Foo(int); Creates the trackbar and attaches it to this window Name of created trackbar. The position of the slider Maximal position of the slider. Minimal position is always 0. the function to be called every time the slider changes the position. This function should be prototyped as void Foo(int); Display text on the window's image as an overlay for delay milliseconds. This is not editing the image's data. The text is displayed on top of the image. Overlay text to write on the window's image Delay to display the overlay text. If this function is called before the previous overlay text times out, the timer is restarted and the text updated. If this value is zero, the text never disappears. Text to write on the window's statusbar Delay to display the text. If this function is called before the previous text times out, the timer is restarted and the text updated. If this value is zero, the text never disappears. Get Property of the window Property identifier Value of the specified property Load parameters of the window. Sets window position New x coordinate of top-left corner New y coordinate of top-left corner Sets window size New width New height Save parameters of the window.
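A hedged usage sketch of the Window wrapper described above (the file name is a placeholder; trackbar, overlay, and mouse-callback members are omitted for brevity):

```csharp
using OpenCvSharp;

class WindowSketch
{
    static void Main()
    {
        using var image = Cv2.ImRead("sample.png");

        // Creates the HighGUI window and shows the image in one step.
        using var window = new Window("preview", image);

        // ShowImage can be called again later to update the displayed image.
        window.ShowImage(image);

        Cv2.WaitKey(0);   // block until a key is pressed
    }
}
```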
Set Property of the window Property identifier New value of the specified property Shows the image in this window Image to be shown. Waits for a pressed key Key code Waits for a pressed key Delay in milliseconds. Key code Waits for a pressed key. Similar to #waitKey, but returns full key code. Key code is implementation specific and depends on used backend: QT/GTK/Win32/etc Delay in milliseconds. 0 is the special value that means "forever" Returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed. Retrieves a created window by name Sets the callback function for mouse events occurring within the specified window. Reference to the function to be called every time a mouse event occurs in the specified window. Specifies the color type and depth of the loaded image If set, return the loaded image as is (with alpha channel, otherwise it gets cropped). If set, always convert the image to the single channel grayscale image. If set, always convert the image to the 3 channel BGR color image. If set, return a 16-bit/32-bit image when the input has the corresponding depth, otherwise convert it to 8-bit. If set, the image is read in any possible color format. If set, use the gdal driver for loading the image. If set, always convert the image to the single channel grayscale image and reduce the image size by 1/2. If set, always convert the image to the 3 channel BGR color image and reduce the image size by 1/2. If set, always convert the image to the single channel grayscale image and reduce the image size by 1/4. If set, always convert the image to the 3 channel BGR color image and reduce the image size by 1/4. If set, always convert the image to the single channel grayscale image and reduce the image size by 1/8. If set, always convert the image to the 3 channel BGR color image and reduce the image size by 1/8. If set, do not rotate the image according to EXIF's orientation flag. store as HALF (FP16) store as FP32 (default) The format type IDs for cv::imwrite and cv::imencode For JPEG, it can be a quality from 0 to 100 (the higher the better). Default value is 95. Enable JPEG features, 0 or 1, default is False. Enable JPEG features, 0 or 1, default is False. JPEG restart interval, 0 - 65535, default is 0 - no restart. Separate luma quality level, 0 - 100, default is 0 - don't use. Separate chroma quality level, 0 - 100, default is 0 - don't use. For PNG, it can be the compression level from 0 to 9. A higher value means a smaller size and longer compression time. Default value is 3. One of cv::ImwritePNGFlags, default is IMWRITE_PNG_StrategyDEFAULT. Binary level PNG, 0 or 1, default is 0. For PPM, PGM, or PBM, it can be a binary format flag, 0 or 1. Default value is 1. [48] override EXR storage type (FLOAT (FP32) is default) For WEBP, it can be a quality from 1 to 100 (the higher the better). By default (without any parameter) and for quality above 100 the lossless compression is used. For PAM, sets the TUPLETYPE field to the corresponding string value that is defined for the format For TIFF, use to specify which DPI resolution unit to set; see libtiff documentation for valid values For TIFF, use to specify the X direction DPI For TIFF, use to specify the Y direction DPI Imwrite PAM specific tupletype flags used to define the 'TUPLETYPE' field of a PAM file. Imwrite PNG specific flags used to tune the compression algorithm. These flags will modify the way of PNG image compression and will be passed to the underlying zlib processing stage.
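To make the load/save flags above concrete, here is a hedged OpenCvSharp sketch; the file names and parameter values are placeholders:

```csharp
using OpenCvSharp;

class ImageIoSketch
{
    static void Main()
    {
        // Always load as a single-channel grayscale image, regardless of the source format.
        using var gray = Cv2.ImRead("input.png", ImreadModes.Grayscale);

        // JPEG quality 0-100 (default 95); higher means better quality and larger files.
        Cv2.ImWrite("output.jpg", gray, new ImageEncodingParam(ImwriteFlags.JpegQuality, 90));

        // PNG compression level 0-9 (default 3); higher means smaller but slower.
        Cv2.ImWrite("output.png", gray, new ImageEncodingParam(ImwriteFlags.PngCompression, 9));
    }
}
```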
The effect of IMWRITE_PNG_StrategyFILTERED is to force more Huffman coding and less string matching; it is somewhat intermediate between IMWRITE_PNG_StrategyDEFAULT and IMWRITE_PNG_StrategyHUFFMAN_ONLY. IMWRITE_PNG_StrategyRLE is designed to be almost as fast as IMWRITE_PNG_StrategyHUFFMAN_ONLY, but gives better compression for PNG image data. The strategy parameter only affects the compression ratio but not the correctness of the compressed output even if it is not set appropriately. IMWRITE_PNG_StrategyFIXED prevents the use of dynamic Huffman codes, allowing for a simpler decoder for special applications. Use this value for normal data. Use this value for data produced by a filter (or predictor). Filtered data consists mostly of small values with a somewhat random distribution. In this case, the compression algorithm is tuned to compress them better. Use this value to force Huffman encoding only (no string match). Use this value to limit match distances to one (run-length encoding). Using this value prevents the use of dynamic Huffman codes, allowing for a simpler decoder for special applications. The format-specific save parameters for cv::imwrite and cv::imencode format type ID value of parameter Constructor format type ID value of parameter Contrast Limited Adaptive Histogram Equalization cv::Ptr<CLAHE> Creates a predefined CLAHE object Releases managed resources Connected components returned from Cv2.ConnectedComponentsEx All blobs destination labeled value The number of labels -1 Constructor Filter an image with the specified label value. Source image. Destination image. Label value. Filtered image. Filter an image with the specified label values. Source image. Destination image. Label values. Filtered image. Filter an image with the specified blob object. Source image. Destination image. Blob value. Filtered image. Filter an image with the specified blob objects. Source image. Destination image. Blob values. Filtered image. Draws all blobs to the specified image. The target image to be drawn. Find the largest blob. the largest blob Returns a mask image in which only pixels with the specified label value are left non-zero. One blob Label value Floating point centroid (x,y) The leftmost (x) coordinate which is the inclusive start of the bounding box in the horizontal direction. The topmost (y) coordinate which is the inclusive start of the bounding box in the vertical direction. The horizontal size of the bounding box. The vertical size of the bounding box. The bounding box. The total area (in pixels) of the connected component. Adaptive thresholding algorithms The threshold is the mean of a block_size × block_size pixel neighborhood, minus param1. The threshold is a weighted sum (Gaussian) of a block_size × block_size pixel neighborhood, minus param1. Type of the border to create around the copied source image rectangle Border is filled with the fixed value, passed as the last parameter of the function. `iiiiii|abcdefgh|iiiiiii` with some specified `i` The pixels from the top and bottom rows, the left-most and right-most columns are replicated to fill the border. `aaaaaa|abcdefgh|hhhhhhh` `fedcba|abcdefgh|hgfedcb` `cdefgh|abcdefgh|abcdefg` `gfedcb|abcdefgh|gfedcba` `uvwxyz|abcdefgh|ijklmno` same as BORDER_REFLECT_101 do not look outside of ROI Color conversion operation for cv::cvtColor GNU Octave/MATLAB equivalent colormaps components algorithm output formats The leftmost (x) coordinate which is the inclusive start of the bounding box in the horizontal direction.
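A hedged sketch tying together the CLAHE and Cv2.ConnectedComponentsEx helpers documented above; the file name, clip limit, and tile size are illustrative assumptions:

```csharp
using System;
using OpenCvSharp;

class BlobSketch
{
    static void Main()
    {
        using var gray = Cv2.ImRead("cells.png", ImreadModes.Grayscale);

        // Contrast Limited Adaptive Histogram Equalization before binarization.
        using var clahe = Cv2.CreateCLAHE(2.0, new Size(8, 8));
        using var equalized = new Mat();
        clahe.Apply(gray, equalized);

        using var binary = new Mat();
        Cv2.Threshold(equalized, binary, 0, 255, ThresholdTypes.Binary | ThresholdTypes.Otsu);

        // Label connected components, then keep only the largest blob as a mask.
        var components = Cv2.ConnectedComponentsEx(binary);
        var largest = components.GetLargestBlob();
        using var mask = new Mat();
        components.FilterByBlob(binary, mask, largest);
        Console.WriteLine($"Largest blob area: {largest.Area} px");
    }
}
```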
The topmost (y) coordinate which is the inclusive start of the bounding box in the vertical direction. The horizontal size of the bounding box The vertical size of the bounding box The total area (in pixels) of the connected component Approximation method (for all the modes, except CV_RETR_RUNS, which uses built-in approximation). CHAIN_APPROX_NONE - translate all the points from the chain code into points; CHAIN_APPROX_SIMPLE - compress horizontal, vertical, and diagonal segments, that is, the function leaves only their ending points; CHAIN_APPROX_TC89_L1 - apply one of the flavors of the Teh-Chin chain approximation algorithm. CHAIN_APPROX_TC89_KCOS - apply one of the flavors of the Teh-Chin chain approximation algorithm. Mask size for distance transform 3 5 distanceTransform algorithm flags each connected component of zeros in src (as well as all the non-zero pixels closest to the connected component) will be assigned the same label each zero pixel (and all the non-zero pixels closest to it) gets its own label. Type of distance for cvDistTransform User defined distance [CV_DIST_USER] distance = |x1-x2| + |y1-y2| [CV_DIST_L1] the simple Euclidean distance [CV_DIST_L2] distance = max(|x1-x2|,|y1-y2|) [CV_DIST_C] L1-L2 metric: distance = 2(sqrt(1+x*x/2) - 1) [CV_DIST_L12] distance = c^2(|x|/c-log(1+|x|/c)), c = 1.3998 [CV_DIST_FAIR] distance = c^2/2(1-exp(-(x/c)^2)), c = 2.9846 [CV_DIST_WELSCH] distance = |x|<c ? x^2/2 : c(|x|-c/2), c=1.345 [CV_DIST_HUBER] Specifies how to flip the array means flipping around the x-axis means flipping around the y-axis means flipping around both axes floodFill Operation flags. Lower bits contain a connectivity value, 4 (default) or 8, used within the function. Connectivity determines which neighbors of a pixel are considered. Upper bits can be 0 or a combination of the following flags: 4-connected line. [= 4] 8-connected line. [= 8] If set, the difference between the current pixel and seed pixel is considered. Otherwise, the difference between neighbor pixels is considered (that is, the range is floating). [CV_FLOODFILL_FIXED_RANGE] If set, the function does not change the image (newVal is ignored), but fills the mask. The flag can be used for the second variant only. [CV_FLOODFILL_MASK_ONLY] class of the pixel in the GrabCut algorithm an obvious background pixel an obvious foreground (object) pixel a possible background pixel a possible foreground pixel GrabCut algorithm flags The function initializes the state and the mask using the provided rectangle. After that it runs iterCount iterations of the algorithm. The function initializes the state using the provided mask. Note that GC_INIT_WITH_RECT and GC_INIT_WITH_MASK can be combined. Then, all the pixels outside of the ROI are automatically initialized with GC_BGD. The value means that the algorithm should just resume. Comparison methods for cvCompareHist Correlation [CV_COMP_CORREL] Chi-Square [CV_COMP_CHISQR] Intersection [CV_COMP_INTERSECT] Bhattacharyya distance [CV_COMP_BHATTACHARYYA] Synonym for HISTCMP_BHATTACHARYYA Alternative Chi-Square \f[d(H_1,H_2) = 2 * \sum _I \frac{\left(H_1(I)-H_2(I)\right)^2}{H_1(I)+H_2(I)}\f] This alternative formula is regularly used for texture comparison. See e.g. @cite Puzicha1997 Kullback-Leibler divergence \f[d(H_1,H_2) = \sum _I H_1(I) \log \left(\frac{H_1(I)}{H_2(I)}\right)\f] Variants of a Hough transform classical or standard Hough transform.
Every line is represented by two floating-point numbers \f$(\rho, \theta)\f$, where \f$\rho\f$ is the distance between the point (0,0) and the line, and \f$\theta\f$ is the angle between the x-axis and the normal to the line. Thus, the matrix must be (the created sequence will be) of CV_32FC2 type. probabilistic Hough transform (more efficient in case the picture contains a few long linear segments). It returns line segments rather than the whole line. Each segment is represented by starting and ending points, and the matrix must be (the created sequence will be) of the CV_32SC4 type. multi-scale variant of the classical Hough transform. The lines are encoded the same way as HOUGH_STANDARD. basically *21HT*, described in @cite Yuen90 Interpolation algorithm Nearest-neighbor interpolation, Bilinear interpolation (used by default) Bicubic interpolation. Resampling using pixel area relation. It is the preferred method for image decimation that gives moire-free results. In case of zooming it is similar to the CV_INTER_NN method. Lanczos interpolation over 8x8 neighborhood mask for interpolation codes Fill all the destination image pixels. If some of them correspond to outliers in the source image, they are set to fillval. Indicates that the matrix is the inverse transform from the destination image to the source and, thus, can be used directly for pixel interpolation. Otherwise, the function finds the inverse transform from map_matrix. Variants of Line Segment Detector No refinement applied Standard refinement is applied. E.g. breaking arches into smaller straighter line approximations. Advanced refinement. Number of false alarms is calculated, lines are refined through increase of precision, decrement in size, etc. Type of the line 8-connected line. 4-connected line. Antialiased line. Marker styles for Mat.DrawMarker A circle polyline A filled circle A cross A tilted cross A circle and a cross A circle and a tilted cross A diamond polyline A filled diamond A square polyline A filled square Shape of the structuring element A rectangular element A cross-shaped element An elliptic element Type of morphological operation an opening operation a closing operation Morphological gradient "Top hat" "Black hat" "hit and miss" PixelConnectivity for LineIterator Connectivity 4 (N,S,E,W) Connectivity 8 (N,S,E,W,NE,SE,SW,NW) cv::initWideAngleProjMap flags types of intersection between rectangles No intersection There is a partial intersection One of the rectangles is fully enclosed in the other mode of the contour retrieval algorithm retrieves only the extreme outer contours. It sets `hierarchy[i][2]=hierarchy[i][3]=-1` for all the contours. retrieves all of the contours without establishing any hierarchical relationships. retrieves all of the contours and organizes them into a two-level hierarchy. At the top level, there are external boundaries of the components. At the second level, there are boundaries of the holes. If there is another contour inside a hole of a connected component, it is still put at the top level. retrieves all of the contours and reconstructs a full hierarchy of nested contours.
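The probabilistic Hough transform described earlier in this section returns explicit line segments; the following hedged sketch shows typical usage (the thresholds and file names are illustrative):

```csharp
using System;
using OpenCvSharp;

class HoughSketch
{
    static void Main()
    {
        using var gray = Cv2.ImRead("building.png", ImreadModes.Grayscale);
        using var edges = new Mat();
        Cv2.Canny(gray, edges, 50, 150);

        // rho = 1 px, theta = 1 degree, accumulator threshold 80,
        // minimum segment length 30 px, maximum gap 10 px.
        LineSegmentPoint[] segments = Cv2.HoughLinesP(edges, 1, Math.PI / 180, 80, 30, 10);

        using var canvas = new Mat();
        Cv2.CvtColor(gray, canvas, ColorConversionCodes.GRAY2BGR);
        foreach (var s in segments)
            Cv2.Line(canvas, s.P1, s.P2, Scalar.Red, 2, LineTypes.AntiAlias);
        Cv2.ImWrite("lines.png", canvas);
    }
}
```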
Comparison methods for cv::matchShapes \f[I_1(A,B) = \sum _{i=1...7} \left | \frac{1}{m^A_i} - \frac{1}{m^B_i} \right |\f] \f[I_2(A,B) = \sum _{i=1...7} \left | m^A_i - m^B_i \right |\f] \f[I_3(A,B) = \max _{i=1...7} \frac{ \left| m^A_i - m^B_i \right| }{ \left| m^A_i \right| }\f] Specifies the way the template must be compared with image regions \f[R(x,y)= \sum _{x',y'} (T(x',y')-I(x+x',y+y'))^2\f] \f[R(x,y)= \frac{\sum_{x',y'} (T(x',y')-I(x+x',y+y'))^2}{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}\f] \f[R(x,y)= \sum _{x',y'} (T(x',y') \cdot I(x+x',y+y'))\f] \f[R(x,y)= \frac{\sum_{x',y'} (T(x',y') \cdot I(x+x',y+y'))}{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}\f] \f[R(x,y)= \sum _{x',y'} (T'(x',y') \cdot I'(x+x',y+y'))\f] where \f[\begin{array}{l} T'(x',y')=T(x',y') - 1/(w \cdot h) \cdot \sum _{x'',y''} T(x'',y'') \\ I'(x+x',y+y')=I(x+x',y+y') - 1/(w \cdot h) \cdot \sum _{x'',y''} I(x+x'',y+y'') \end{array}\f] \f[R(x,y)= \frac{ \sum_{x',y'} (T'(x',y') \cdot I'(x+x',y+y')) }{ \sqrt{\sum_{x',y'}T'(x',y')^2 \cdot \sum_{x',y'} I'(x+x',y+y')^2} }\f] Thresholding type \f[\texttt{dst} (x,y) = \fork{\texttt{maxval}}{if \(\texttt{src}(x,y) > \texttt{thresh}\)}{0}{otherwise}\f] \f[\texttt{dst} (x,y) = \fork{0}{if \(\texttt{src}(x,y) > \texttt{thresh}\)}{\texttt{maxval}}{otherwise}\f] \f[\texttt{dst} (x,y) = \fork{\texttt{threshold}}{if \(\texttt{src}(x,y) > \texttt{thresh}\)}{\texttt{src}(x,y)}{otherwise}\f] \f[\texttt{dst} (x,y) = \fork{\texttt{src}(x,y)}{if \(\texttt{src}(x,y) > \texttt{thresh}\)}{0}{otherwise}\f] \f[\texttt{dst} (x,y) = \fork{0}{if \(\texttt{src}(x,y) > \texttt{thresh}\)}{\texttt{src}(x,y)}{otherwise}\f] flag, use Otsu algorithm to choose the optimal threshold value flag, use Triangle algorithm to choose the optimal threshold value finds an arbitrary template in the grayscale image using Generalized Hough Transform Canny low threshold. Canny high threshold. Minimum distance between the centers of the detected objects. Inverse ratio of the accumulator resolution to the image resolution. Maximal size of inner buffers. set template to search set template to search find template on image find template on image Ballard, D.H. (1981). Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognition 13 (2): 111-122. Detects position only without translation and rotation cv::Ptr<T> object Creates a predefined GeneralizedHoughBallard object Releases managed resources R-Table levels. The accumulator threshold for the template centers at the detection stage. The smaller it is, the more false positions may be detected. Guil, N., González-Linares, J.M. and Zapata, E.L. (1999). Bidimensional shape detection using an invariant approach. Pattern Recognition 32 (6): 1025-1038. Detects position, translation and rotation cv::Ptr<T> object Creates a predefined GeneralizedHoughGuil object Releases managed resources Angle difference in degrees between two points in feature. Feature table levels. Maximal difference between angles that are treated as equal. Minimal rotation angle to detect in degrees. Maximal rotation angle to detect in degrees. Angle step in degrees. Angle votes threshold. Minimal scale to detect. Maximal scale to detect. Scale step. Scale votes threshold. Position votes threshold.
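As a hedged illustration of the template-matching modes above, this sketch uses the normalized correlation coefficient and reads the best match location with MinMaxLoc; the file names are placeholders:

```csharp
using System;
using OpenCvSharp;

class MatchTemplateSketch
{
    static void Main()
    {
        using var image = Cv2.ImRead("scene.png", ImreadModes.Grayscale);
        using var templ = Cv2.ImRead("patch.png", ImreadModes.Grayscale);
        using var result = new Mat();

        Cv2.MatchTemplate(image, templ, result, TemplateMatchModes.CCoeffNormed);

        // For CCoeffNormed the best match is the maximum; for SqDiff it would be the minimum.
        Cv2.MinMaxLoc(result, out _, out double maxVal, out _, out Point maxLoc);
        Console.WriteLine($"Best match at {maxLoc} with score {maxVal:F3}");
    }
}
```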
Contrast Limited Adaptive Histogram Equalization Constructor Initializes the iterator Releases unmanaged resources LineIterator pixel data Constructor Line segment detector class cv::Ptr<LineSegmentDetector> Creates a smart pointer to a LineSegmentDetector object and initializes it. The way found lines will be refined, see cv::LineSegmentDetectorModes The scale of the image that will be used to find the lines. Range (0..1]. Sigma for Gaussian filter. It is computed as sigma = _sigma_scale/_scale. Bound to the quantization error on the gradient norm. Gradient angle tolerance in degrees. Detection threshold: -log10(NFA) > log_eps. Used only when advanced refinement is chosen. Minimal density of aligned region points in the enclosing rectangle. Number of bins in pseudo-ordering of gradient modulus. Releases managed resources Finds lines in the input image. This is the output of the default parameters of the algorithm on the above shown image. A grayscale (CV_8UC1) input image. A vector of Vec4i or Vec4f elements specifying the beginning and ending point of a line. Where Vec4i/Vec4f is (x1, y1, x2, y2), point 1 is the start, point 2 the end. Returned lines are strictly oriented depending on the gradient. Vector of widths of the regions, where the lines are found. E.g. Width of line. Vector of precisions with which the lines are found. Vector containing number of false alarms in the line region, with precision of 10%. The bigger the value, logarithmically better the detection. Finds lines in the input image. This is the output of the default parameters of the algorithm on the above shown image. A grayscale (CV_8UC1) input image. A vector of Vec4i or Vec4f elements specifying the beginning and ending point of a line. Where Vec4i/Vec4f is (x1, y1, x2, y2), point 1 is the start, point 2 the end. Returned lines are strictly oriented depending on the gradient. Vector of widths of the regions, where the lines are found. E.g. Width of line. Vector of precisions with which the lines are found. Vector containing number of false alarms in the line region, with precision of 10%. The bigger the value, logarithmically better the detection. Draws the line segments on a given image. The image where the lines will be drawn. Should be bigger or equal to the image where the lines were found. A vector of the lines that need to be drawn. Draws two groups of lines in blue and red, counting the non-overlapping (mismatching) pixels. The size of the image where lines1 and lines2 were found. The first group of lines that needs to be drawn. It is visualized in blue color. The second group of lines. They are visualized in red color. Optional image, where the lines will be drawn. The image should be color (3-channel) in order for lines1 and lines2 to be drawn in the above mentioned colors. circle structure retrieved from cvHoughCircle Center coordinate of the circle Radius Constructor center radius Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two CvPoint objects. The result specifies whether the members of each object are equal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are equal; otherwise, false. Compares two CvPoint objects. The result specifies whether the members of each object are unequal. A Point to compare. A Point to compare.
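A hedged sketch of the LineSegmentDetector wrapper documented above; the exact Detect overload and the default parameters may differ between OpenCvSharp versions, so treat this as an outline:

```csharp
using OpenCvSharp;

class LsdSketch
{
    static void Main()
    {
        using var gray = Cv2.ImRead("road.png", ImreadModes.Grayscale);
        using var lsd = Cv2.CreateLineSegmentDetector();

        using var lines = new Mat();     // each row is a segment (x1, y1, x2, y2)
        lsd.Detect(gray, lines);

        using var canvas = new Mat();
        Cv2.CvtColor(gray, canvas, ColorConversionCodes.GRAY2BGR);
        lsd.DrawSegments(canvas, lines);
        Cv2.ImWrite("segments.png", canvas);
    }
}
```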
This operator returns true if the members of left and right are unequal; otherwise, false. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. Information about the image topology for cv::findContours 2-dimensional line vector The X component of the normalized vector collinear to the line The Y component of the normalized vector collinear to the line X-coordinate of some point on the line Y-coordinate of some point on the line Initializes this object The X component of the normalized vector collinear to the line The Y component of the normalized vector collinear to the line X-coordinate of some point on the line Y-coordinate of some point on the line Initializes by cvFitLine output The returned value from cvFitLine Returns the distance between this line and the specified point Returns the distance between this line and the specified point Returns the distance between this line and the specified point Returns the distance between this line and the specified point Fits this line to the specified size (for drawing) Width of fit size Height of fit size 1st edge point of fitted line 2nd edge point of fitted line A 3-dimensional line object The X component of the normalized vector collinear to the line The Y component of the normalized vector collinear to the line The Z component of the normalized vector collinear to the line X-coordinate of some point on the line Y-coordinate of some point on the line Z-coordinate of some point on the line Initializes this object The X component of the normalized vector collinear to the line The Y component of the normalized vector collinear to the line The Z component of the normalized vector collinear to the line X-coordinate of some point on the line Y-coordinate of some point on the line Z-coordinate of some point on the line Initializes by cvFitLine output The returned value from cvFitLine Returns the distance between this line and the specified point Returns the distance between this line and the specified point Returns the distance between this line and the specified point Returns the distance between this line and the specified point Cross product of vectors Length of the vector (distance from the origin) Distance between two points (two vectors) Line segment structure retrieved from cvHoughLines2 1st Point 2nd Point Constructor 1st Point 2nd Point Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two CvPoint objects. The result specifies whether the members of each object are equal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are equal; otherwise, false. Compares two CvPoint objects. The result specifies whether the members of each object are unequal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are unequal; otherwise, false. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object.
An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. Calculates an intersection of the specified two lines Calculates an intersection of the specified two lines Calculates an intersection of the specified two segments Calculates an intersection of the specified two segments Returns a boolean value indicating whether the specified two segments intersect. Returns a boolean value indicating whether the specified two segments intersect. Returns a boolean value indicating whether a line and a segment intersect. Line Segment Calculates an intersection of a line and a segment Translates the Point by the specified amount. The amount to offset the x-coordinate. The amount to offset the y-coordinate. Translates the Point by the specified amount. The Point used to offset this CvPoint. Polar line segment retrieved from cvHoughLines2 Length of the line Angle of the line (radian) Constructor Length of the line Angle of the line (radian) Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Compares two CvPoint objects. The result specifies whether the members of each object are equal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are equal; otherwise, false. Compares two CvPoint objects. The result specifies whether the members of each object are unequal. A Point to compare. A Point to compare. This operator returns true if the members of left and right are unequal; otherwise, false. Specifies whether this object contains the same members as the specified Object. The Object to test. This method returns true if obj is the same type as this object and has the same members as this object. Returns a hash code for this object. An integer value that specifies a hash value for this object. Converts this object to a human readable string. A string that represents this object. Calculates an intersection of the specified two lines Calculates an intersection of the specified two lines Converts to CvLineSegmentPoint Converts to a line segment whose endpoints have the specified x-coordinates Converts to a line segment whose endpoints have the specified y-coordinates Finds the x-coordinate at which the line passes through the specified y-coordinate Finds the y-coordinate at which the line passes through the specified x-coordinate Raster image moments spatial moments spatial moments spatial moments spatial moments spatial moments spatial moments spatial moments spatial moments spatial moments spatial moments central moments central moments central moments central moments central moments central moments central moments central normalized moments central normalized moments central normalized moments central normalized moments central normalized moments central normalized moments central normalized moments Default constructor. All moment values are set to 0. Calculates all of the moments up to the third order of a polygon or rasterized shape. A raster image (single-channel, 8-bit or floating-point 2D array) or an array ( 1xN or Nx1 ) of 2D points ( Point or Point2f ) If it is true, then all the non-zero image pixels are treated as 1's Calculates all of the moments up to the third order of a polygon or rasterized shape. A raster image (8-bit) 2D array If it is true, then all the non-zero image pixels are treated as 1's Calculates all of the moments up to the third order of a polygon or rasterized shape.
A raster image (floating-point) 2D array If it is true, then all the non-zero image pixels are treated as 1's Calculates all of the moments up to the third order of a polygon or rasterized shape. Array of 2D points If it is true, then all the non-zero image pixels are treated as 1's Calculates all of the moments up to the third order of a polygon or rasterized shape. Array of 2D points If it is true, then all the non-zero image pixels are treated as 1's Calculates all of the moments up to the third order of a polygon or rasterized shape. A raster image (single-channel, 8-bit or floating-point 2D array) or an array ( 1xN or Nx1 ) of 2D points ( Point or Point2f ) If it is true, then all the non-zero image pixels are treated as 1's computes 7 Hu invariants from the moments Default constructor Subdiv2D Constructor Clean up any resources being used. Releases unmanaged resources Computes the average hash value of the input image. This is a fast image hashing algorithm, but it only works in simple cases. For more details, please refer to @cite lookslikeit cv::Ptr<T> Constructor Releases managed resources Input image for which to compute the hash value; the type should be CV_8UC4, CV_8UC3 or CV_8UC1. Hash value of input, it will contain 16 hex decimal numbers, return type is CV_8U Image hash based on block mean. cv::Ptr<T> Create BlockMeanHash object Releases managed resources Computes block mean hash of the input image Input image for which to compute the hash value; the type should be CV_8UC4, CV_8UC3 or CV_8UC1. Hash value of input, it will contain 16 hex decimal numbers, return type is CV_8U Image hash based on color moments. cv::Ptr<T> Constructor Releases managed resources Computes the color moment hash of the input; the algorithm comes from the paper "Perceptual Hashing for Color Images Using Invariant Moments" Input image for which to compute the hash value; the type should be CV_8UC4, CV_8UC3 or CV_8UC1. 42 hash values with type CV_64F (double) use fewer blocks and generate a 16*16/8 uchar hash value use overlapping blocks (step size is half the block size), generating a 31*31/8 + 1 uchar hash value The base class for image hash algorithms Computes hash of the input image Input image for which to compute the hash value Hash of the image Compare the hash value between inOne and inTwo Hash value one Hash value two Value indicating the similarity between inOne and inTwo; the meaning of the value varies from algorithm to algorithm Marr-Hildreth Operator Based Hash, slowest but more discriminative. cv::Ptr<T> Create MarrHildrethHash object int scale factor for marr wavelet (default=2). int level of scale factor (default = 1) Releases managed resources int scale factor for marr wavelet (default=2). int level of scale factor (default = 1) int scale factor for marr wavelet (default=2). int level of scale factor (default = 1) Computes the Marr-Hildreth hash value of the input image Input image for which to compute the hash value; the type should be CV_8UC4, CV_8UC3, CV_8UC1. Hash value of input, it will contain 16 hex decimal numbers, return type is CV_8U pHash: Slower than average_hash, but tolerant of minor modifications. This algorithm can combat more variation than averageHash, for more details please refer to @cite lookslikeit cv::Ptr<T> Constructor Releases managed resources Computes pHash value of the input image Input image for which to compute the hash value; the type should be CV_8UC4, CV_8UC3, CV_8UC1. Hash value of input, it will contain 8 uchar values Image hash based on Radon transform.
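The image-hash classes above share the Compute/Compare pattern; here is a hedged pHash sketch (it assumes the OpenCvSharp.ImgHash namespace and placeholder file names):

```csharp
using System;
using OpenCvSharp;
using OpenCvSharp.ImgHash;

class HashSketch
{
    static void Main()
    {
        using var img1 = Cv2.ImRead("photo.jpg");
        using var img2 = Cv2.ImRead("photo_resized.jpg");

        using var pHash = PHash.Create();
        using var hash1 = new Mat();
        using var hash2 = new Mat();
        pHash.Compute(img1, hash1);   // 8-uchar hash, per the description above
        pHash.Compute(img2, hash2);

        // For pHash, smaller Compare values mean more similar images.
        double distance = pHash.Compare(hash1, hash2);
        Console.WriteLine($"pHash distance: {distance}");
    }
}
```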
cv::Ptr<T> Create RadialVarianceHash object Gaussian kernel standard deviation The number of angles to consider Releases managed resources Gaussian kernel standard deviation The number of angles to consider Computes the radial variance hash value of the input image Input image for which to compute the hash value; the type should be CV_8UC4, CV_8UC3, CV_8UC1. Hash value of input Artificial Neural Networks - Multi-Layer Perceptrons. Creates instance by raw pointer cv::ml::ANN_MLP* Creates the empty model. Loads and creates a serialized ANN from a file. Use ANN::save to serialize and store an ANN to disk. Load the ANN from this file again, by calling this function with the path to the file. Loads algorithm from a String. The string variable containing the model you want to load. Releases managed resources Termination criteria of the training algorithm. Strength of the weight gradient term. The recommended value is about 0.1. Default value is 0.1. Strength of the momentum term (the difference between weights on the 2 previous iterations). This parameter provides some inertia to smooth the random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond. The value 0.1 or so is good enough. Default value is 0.1. Initial value Delta_0 of update-values Delta_{ij}. Default value is 0.1. Increase factor eta^+. It must be >1. Default value is 1.2. Decrease factor eta^-. It must be <1. Default value is 0.5. Update-values lower limit Delta_{min}. It must be positive. Default value is FLT_EPSILON. Update-values upper limit Delta_{max}. It must be >1. Default value is 50. Integer vector specifying the number of neurons in each layer including the input and output layers. The very first element specifies the number of elements in the input layer. The last element - number of elements in the output layer. Default value is empty Mat. Integer vector specifying the number of neurons in each layer including the input and output layers. The very first element specifies the number of elements in the input layer. The last element - number of elements in the output layer. possible activation functions Identity function: f(x)=x Symmetrical sigmoid: f(x)=\beta*(1-e^{-\alpha x})/(1+e^{-\alpha x}) Gaussian function: f(x)=\beta e^{-\alpha x*x} Train options Update the network weights, rather than compute them from scratch. In the latter case the weights are initialized using the Nguyen-Widrow algorithm. Do not normalize the input vectors. If this flag is not set, the training algorithm normalizes each input feature independently, shifting its mean value to 0 and making the standard deviation equal to 1. If the network is assumed to be updated frequently, the new training data could be much different from the original one. In this case, you should take care of proper normalization. Do not normalize the output vectors. If the flag is not set, the training algorithm normalizes each output feature independently, by transforming it to the certain range depending on the used activation function. Available training methods The back-propagation algorithm. The RPROP algorithm. See @cite RPROP93 for details. Boosted tree classifier derived from DTrees Creates instance by raw pointer cv::ml::Boost* Creates the empty model. Loads and creates a serialized model from a file. Loads algorithm from a String. The string variable containing the model you want to load. Releases managed resources Type of the boosting algorithm. See Boost::Types. Default value is Boost::REAL. The number of weak classifiers. Default value is 100.
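A hedged sketch of wiring up the ANN_MLP members listed above on a tiny XOR-style dataset; the layer sizes, activation parameters, and data are illustrative assumptions:

```csharp
using OpenCvSharp;
using OpenCvSharp.ML;

class AnnSketch
{
    static void Main()
    {
        // 4 samples x 2 features (rows are samples), one regression target per sample.
        using var samples = new Mat(4, 2, MatType.CV_32FC1,
            new float[] { 0, 0, 0, 1, 1, 0, 1, 1 });
        using var responses = new Mat(4, 1, MatType.CV_32FC1, new float[] { 0, 1, 1, 0 });

        using var ann = ANN_MLP.Create();
        using var layers = new Mat(3, 1, MatType.CV_32SC1, new int[] { 2, 4, 1 });
        ann.SetLayerSizes(layers);   // input = 2, hidden = 4, output = 1 neurons
        ann.SetActivationFunction(ANN_MLP.ActivationFunctions.SigmoidSym, 1, 1);
        ann.Train(samples, SampleTypes.RowSample, responses);

        using var output = new Mat();
        ann.Predict(samples, output);   // one predicted value per input row
    }
}
```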
A threshold between 0 and 1 used to save computational time. Samples with summary weight \f$\leq 1 - weight_trim_rate\f$ do not participate in the *next* iteration of training. Set this parameter to 0 to turn off this functionality. Default value is 0.95. Boosting type. Gentle AdaBoost and Real AdaBoost are often the preferable choices. Discrete AdaBoost. Real AdaBoost. It is a technique that utilizes confidence-rated predictions and works well with categorical data. LogitBoost. It can produce good regression fits. Gentle AdaBoost. It puts less weight on outlier data points and for that reason is often good with regression data. Decision tree Creates instance by raw pointer cv::ml::DTrees* Creates the empty model. Loads and creates a serialized model from a file. Loads algorithm from a String. The string variable containing the model you want to load. Releases managed resources Cluster possible values of a categorical variable into K <= maxCategories clusters to find a suboptimal split. The maximum possible depth of the tree. If the number of samples in a node is less than this parameter then the node will not be split. Default value is 10. If CVFolds > 1 then the algorithm prunes the built decision tree using the K-fold cross-validation procedure where K is equal to CVFolds. Default value is 10. If true then surrogate splits will be built. These splits allow working with missing data and computing variable importance correctly. Default value is false. If true then a pruning will be harsher. This will make a tree more compact and more resistant to the training data noise but a bit less accurate. Default value is true. If true then pruned branches are physically removed from the tree. Otherwise they are retained and it is possible to get results from the original unpruned (or pruned less aggressively) tree. Default value is true. Termination criteria for regression trees. If all absolute differences between an estimated value in a node and values of train samples in this node are less than this parameter then the node will not be split further. Default value is 0.01f. The array of a priori class probabilities, sorted by the class label value. Returns indices of root nodes Returns all the nodes. all the node indices are indices in the returned vector Returns all the splits. all the split indices are indices in the returned vector Returns all the bitsets for categorical splits. Split::subsetOfs is an offset in the returned vector The class represents a decision tree node. Value at the node: a class label in case of classification or estimated function value in case of regression. Class index normalized to the 0..class_count-1 range and assigned to the node. It is used internally in classification trees and tree ensembles. Index of the parent node Index of the left child node Index of the right child node Default direction where to go (-1: left or +1: right). It helps in the case of missing values. Index of the first split The class represents a split in a decision tree. Index of variable on which the split is created. If not 0, then the inverse split rule is used (i.e. left and right branches are exchanged in the rule expressions below). The split quality, a positive number. It is used to choose the best split. Index of the next split in the list of splits for the node The threshold value in case of split on an ordered variable. Offset of the bitset used by the split on a categorical variable.
Sample types: each training sample is a row of samples; each training sample occupies a column of samples. K nearest neighbors classifier Creates instance by raw pointer cv::ml::KNearest* Creates the empty model Loads and creates a serialized model from a file. Loads algorithm from a String. The string variable containing the model you want to load. Releases managed resources Default number of neighbors to use in the predict method. Whether a classification or regression model should be trained. Parameter for the KDTree implementation Algorithm type, one of KNearest::Types. Finds the neighbors and predicts responses for input vectors. Input samples stored by rows. It is a single-precision floating-point matrix of `[number_of_samples] * k` size. Number of used nearest neighbors. Should be greater than 1. Vector with results of prediction (regression or classification) for each input sample. It is a single-precision floating-point vector with `[number_of_samples]` elements. neighborResponses: Optional output values for corresponding neighbors. It is a single-precision floating-point matrix of `[number_of_samples] * k` size. Optional output distances from the input vectors to the corresponding neighbors. It is a single-precision floating-point matrix of `[number_of_samples] * k` size. Implementations of the KNearest algorithm Implements the Logistic Regression classifier. Creates instance by raw pointer cv::ml::LogisticRegression* Creates the empty model. Loads and creates a serialized model from a file. Loads algorithm from a String. The string variable containing the model you want to load. Releases managed resources Learning rate Number of iterations. Kind of regularization to be applied. See LogisticRegression::RegKinds. Kind of training method used. See LogisticRegression::Methods. Specifies the number of training samples taken in each step of Mini-Batch Gradient Descent. Will only be used if using the LogisticRegression::MINI_BATCH training algorithm. It has to take values less than the total number of training samples. Termination criteria of the training algorithm. Predicts responses for input samples and returns a float type. The input data for the prediction algorithm. Matrix [m x n], where each row contains variables (features) of one object being classified. Should have data type CV_32F. Predicted labels as a column matrix of type CV_32S. Not used. This function returns the trained parameters arranged across rows. For a two-class classification problem, it returns a row matrix. It returns learnt parameters of the Logistic Regression as a matrix of type CV_32F. Regularization kinds Regularization disabled L1 norm L2 norm Training methods Set MiniBatchSize to a positive integer when using this method. Bayes classifier for normally distributed data Creates instance by raw pointer cv::ml::NormalBayesClassifier* Creates empty model. Use StatModel::train to train the model after creation. Loads and creates a serialized model from a file. Loads algorithm from a String. The string variable containing the model you want to load. Releases managed resources Predicts the response for sample(s). The method estimates the most probable classes for input vectors. Input vectors (one or more) are stored as rows of the matrix inputs. In case of multiple input vectors, there should be one output vector outputs. The predicted class for a single input vector is returned by the method. The vector outputProbs contains the output probabilities corresponding to each element of result.
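A minimal sketch of the KNearest.FindNearest call described above, assuming the wrapper exposes Create, Train and FindNearest with the parameter layout documented here; the 1-D samples, labels and the IsClassifier property name are illustrative assumptions.

```csharp
using OpenCvSharp;
using OpenCvSharp.ML;

// Hypothetical 1-D samples with two classes.
using var trainData   = new Mat(6, 1, MatType.CV_32F, new float[] { 1, 2, 3, 10, 11, 12 });
using var trainLabels = new Mat(6, 1, MatType.CV_32S, new int[] { 0, 0, 0, 1, 1, 1 });

using var knn = KNearest.Create();
knn.IsClassifier = true;                 // classification rather than regression (property name assumed)
knn.Train(trainData, SampleTypes.RowSample, trainLabels);

using var query             = new Mat(1, 1, MatType.CV_32F, new float[] { 9.5f });
using var results           = new Mat();
using var neighborResponses = new Mat();
using var dists             = new Mat();
knn.FindNearest(query, 3, results, neighborResponses, dists);
// results holds the predicted label; neighborResponses and dists are 1 x k matrices.
```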
The structure represents the logarithmic grid range of statmodel parameters. Minimum value of the statmodel parameter. Default value is 0. Maximum value of the statmodel parameter. Default value is 0. Logarithmic step for iterating the statmodel parameter. The grid determines the following iteration sequence of the statmodel parameter values: (minVal, minVal*logStep, minVal*logStep^2, ..., minVal*logStep^n), where n is the maximal index satisfying minVal*logStep^n < maxVal. The grid is logarithmic, so logStep must always be greater than 1. Default value is 1. Constructor with parameters The class implements the random forest predictor. Creates instance by raw pointer cv::ml::RTrees* Creates the empty model. Loads and creates a serialized model from a file. Loads algorithm from a String. The string variable containing the model you want to load. Releases managed resources If true then variable importance will be calculated and can then be retrieved by RTrees::getVarImportance. Default value is false. The size of the randomly selected subset of features at each tree node that is used to find the best split(s). The termination criteria that specify when the training algorithm stops. Returns the variable importance array. The method returns the variable importance vector, computed at the training stage when CalculateVarImportance is set to true. If this flag was set to false, the empty matrix is returned. Base class for statistical models in ML Default constructor Returns the number of variables in training samples Returns true if the model is trained Returns true if the model is a classifier Trains the statistical model training data that can be loaded from a file using TrainData::loadFromCSV or created with TrainData::create. optional flags, depending on the model. Some of the models can be updated with the new training samples, not completely overwritten (such as NormalBayesClassifier or ANN_MLP). Trains the statistical model training samples SampleTypes value vector of responses associated with the training samples. Computes error on the training or test dataset the training data if true, the error is computed over the test subset of the data, otherwise it is computed over the training subset of the data. Please note that if you loaded a completely different dataset to evaluate an already trained classifier, you will probably want not to set the test subset at all with TrainData::setTrainTestSplitRatio and to specify test=false, so that the error is computed for the whole new set. Yes, this sounds a bit confusing. the optional output responses. Predicts response(s) for the provided sample(s) The input samples, floating-point matrix The optional output matrix of results. The optional flags, model-dependent. Predict options makes the method return the raw results (the sum), not the class label Support Vector Machines Creates instance by raw pointer cv::ml::SVM* Creates empty model. Use StatModel::Train to train the model. Since SVM has several parameters, you may want to find the best parameters for your problem; this can be done with SVM::TrainAuto. Loads and creates a serialized svm from a file. Use SVM::save to serialize and store an SVM to disk. Load the SVM from this file again, by calling this function with the path to the file. Loads algorithm from a String. The string variable containing the model you want to load. Releases managed resources Type of an SVM formulation. Default value is SVM::C_SVC. Parameter gamma of a kernel function.
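The ParamGrid iteration rule above can be made concrete with a small worked loop; the values below (minVal = 0.1, maxVal = 500, logStep = 10) are arbitrary illustration, not library defaults.

```csharp
using System;

// Enumerates the logarithmic grid (minVal, minVal*logStep, minVal*logStep^2, ...)
// while minVal * logStep^n < maxVal, as described for ParamGrid.
double minVal = 0.1, maxVal = 500, logStep = 10;
for (double v = minVal; v < maxVal; v *= logStep)
    Console.WriteLine(v);   // prints 0.1, 1, 10, 100 (up to floating-point rounding)
```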
For SVM::POLY, SVM::RBF, SVM::SIGMOID or SVM::CHI2. Default value is 1. Parameter coef0 of a kernel function. For SVM::POLY or SVM::SIGMOID. Default value is 0. Parameter degree of a kernel function. For SVM::POLY. Default value is 0. Parameter C of a %SVM optimization problem. For SVM::C_SVC, SVM::EPS_SVR or SVM::NU_SVR. Default value is 0. Parameter nu of a %SVM optimization problem. For SVM::NU_SVC, SVM::ONE_CLASS or SVM::NU_SVR. Default value is 0. Parameter epsilon of a %SVM optimization problem. For SVM::EPS_SVR. Default value is 0. Optional weights in the SVM::C_SVC problem, assigned to particular classes. They are multiplied by _C_ so the parameter _C_ of class _i_ becomes `classWeights(i) * C`. Thus these weights affect the misclassification penalty for different classes. The larger weight, the larger penalty on misclassification of data from the corresponding class. Default value is empty Mat. Termination criteria of the iterative SVM training procedure which solves a partial case of constrained quadratic optimization problem. You can specify tolerance and/or the maximum number of iterations. Default value is `TermCriteria( TermCriteria::MAX_ITER + TermCriteria::EPS, 1000, FLT_EPSILON )`; Type of a %SVM kernel. See SVM::KernelTypes. Default value is SVM::RBF. Initialize with custom kernel. Trains an %SVM with optimal parameters. the training data that can be constructed using TrainData::create or TrainData::loadFromCSV. Cross-validation parameter. The training set is divided into kFold subsets. One subset is used to test the model, the others form the train set. So, the %SVM algorithm is executed kFold times. grid for C grid for gamma grid for p grid for nu grid for coeff grid for degree If true and the problem is 2-class classification then the method creates more balanced cross-validation subsets that is proportions between classes in subsets are close to such proportion in the whole train dataset. Retrieves all the support vectors Retrieves the decision function i the index of the decision function. If the problem solved is regression, 1-class or 2-class classification, then there will be just one decision function and the index should always be 0. Otherwise, in the case of N-class classification, there will be N(N-1)/2 decision functions. alpha the optional output vector for weights, corresponding to different support vectors. In the case of linear %SVM all the alpha's will be 1's. the optional output vector of indices of support vectors within the matrix of support vectors (which can be retrieved by SVM::getSupportVectors). In the case of linear %SVM each decision function consists of a single "compressed" support vector. Generates a grid for SVM parameters. SVM parameters IDs that must be one of the SVM::ParamTypes. The grid is generated for the parameter with this ID. SVM type C-Support Vector Classification. n-class classification (n \f$\geq\f$ 2), allows imperfect separation of classes with penalty multiplier C for outliers. nu-Support Vector Classification. n-class classification with possible imperfect separation. Parameter \f$\nu\f$ (in the range 0..1, the larger the value, the smoother the decision boundary) is used instead of C. Distribution Estimation (One-class %SVM). All the training data are from the same class, %SVM builds a boundary that separates the class from the rest of the feature space. epsilon-Support Vector Regression. The distance between feature vectors from the training set and the fitting hyper-plane must be less than p. 
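A hedged sketch of the SVM workflow described above, including the TrainAuto cross-validation path; it assumes TrainData.Create, SVM.TrainAuto and the nested Types/KernelTypes enums are wrapped as in cv::ml, and the toy data and kFold value are invented for illustration.

```csharp
using OpenCvSharp;
using OpenCvSharp.ML;

// Hypothetical 2-class, 2-feature data.
using var samples = new Mat(4, 2, MatType.CV_32F, new float[] { 1, 1, 2, 2, 8, 8, 9, 9 });
using var labels  = new Mat(4, 1, MatType.CV_32S, new int[] { 0, 0, 1, 1 });

using var svm = SVM.Create();
svm.Type         = SVM.Types.CSvc;        // C-Support Vector Classification
svm.KernelType   = SVM.KernelTypes.Rbf;   // RBF kernel, a good choice in most cases
svm.TermCriteria = new TermCriteria(CriteriaTypes.MaxIter, 1000, 1e-6);

// Either set C/Gamma manually and call Train, or let TrainAuto cross-validate them.
using var data = TrainData.Create(samples, SampleTypes.RowSample, labels);
svm.TrainAuto(data, 2);                   // kFold = 2 for this tiny illustrative set

using var probe = new Mat(1, 2, MatType.CV_32F, new float[] { 8.5f, 8.5f });
float response = svm.Predict(probe);
```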
For outliers the penalty multiplier C is used. nu-Support Vector Regression. nu is used instead of p. See @cite LibSVM for details. SVM kernel type Returned by SVM::getKernelType when a custom kernel has been set Linear kernel. No mapping is done; linear discrimination (or regression) is done in the original feature space. It is the fastest option. K(x_i, x_j) = x_i^T x_j. Polynomial kernel: K(x_i, x_j) = (\gamma x_i^T x_j + coef0)^{degree}, \gamma > 0. Radial basis function (RBF), a good choice in most cases. K(x_i, x_j) = e^{-\gamma ||x_i - x_j||^2}, \gamma > 0. Sigmoid kernel: K(x_i, x_j) = \tanh(\gamma x_i^T x_j + coef0). Exponential Chi2 kernel, similar to the RBF kernel: K(x_i, x_j) = e^{-\gamma \chi^2(x_i, x_j)}, \chi^2(x_i, x_j) = (x_i - x_j)^2 / (x_i + x_j), \gamma > 0. Histogram intersection kernel. A fast kernel. K(x_i, x_j) = min(x_i, x_j). SVM params type The class implements the Expectation Maximization algorithm. Creates instance by pointer cv::Ptr<EM> Creates empty EM model. Loads and creates a serialized model from a file. Loads algorithm from a String. The string variable containing the model you want to load. Releases managed resources The number of mixture components in the Gaussian mixture model. Default value of the parameter is EM::DEFAULT_NCLUSTERS=5. Some EM implementations could determine the optimal number of mixtures within a specified value range, but that is not the case in ML yet. Constraint on covariance matrices which defines the type of matrices. The termination criteria of the EM algorithm. The EM algorithm can be terminated by the number of iterations termCrit.maxCount (number of M-steps) or when the relative change of the likelihood logarithm is less than termCrit.epsilon. Default maximum number of iterations is EM::DEFAULT_MAX_ITERS=100. Returns weights of the mixtures. Returns a vector with the number of elements equal to the number of mixtures. Returns the cluster centers (means of the Gaussian mixture). Returns a matrix with the number of rows equal to the number of mixtures and the number of columns equal to the space dimensionality. Returns covariance matrices. Returns a vector of covariance matrices. The number of matrices is the number of Gaussian mixtures; each matrix is a square floating-point matrix NxN, where N is the space dimensionality. Estimates Gaussian mixture parameters from the sample set Estimates Gaussian mixture parameters from the sample set Estimates Gaussian mixture parameters from the sample set Predicts the response for sample Type of covariance matrices A scaled identity matrix \mu_k * I. There is only one parameter \mu_k to be estimated for each matrix. The option may be used in special cases, when the constraint is relevant, or as a first step in the optimization (for example when the data is preprocessed with PCA). The results of such preliminary estimation may be passed again to the optimization procedure, this time with covMatType=EM::COV_MAT_DIAGONAL. A diagonal matrix with positive diagonal elements. The number of free parameters is d for each matrix. This is the most commonly used option, yielding good estimation results. A symmetric positive definite matrix. The number of free parameters in each matrix is about d^2/2. It is not recommended to use this option, unless there is a fairly accurate initial estimation of the parameters and/or a huge number of training samples. The initial step the algorithm starts from. The algorithm starts with the E-step.
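To show how the EM properties above are typically used for Gaussian-mixture clustering, a hedged sketch follows; it assumes the wrapper exposes TrainEM and the GetMeans/GetWeights accessors corresponding to the getters described above, and the 1-D sample values are invented.

```csharp
using OpenCvSharp;
using OpenCvSharp.ML;

// Two well-separated 1-D clusters (made-up values).
using var samples = new Mat(6, 1, MatType.CV_32F, new float[] { 1, 2, 1.5f, 10, 11, 10.5f });

using var em = EM.Create();
em.ClustersNumber = 2;                      // number of mixture components

using var logLikelihoods = new Mat();
using var labels         = new Mat();
em.TrainEM(samples, logLikelihoods, labels); // labels: per-sample mixture index (CV_32S)

using var means   = em.GetMeans();           // 2 x 1 matrix of cluster centers (accessor name assumed)
using var weights = em.GetWeights();         // mixing weights of the two components (accessor name assumed)
```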
At least, the initial values of mean vectors, CvEMParams.Means must be passed. Optionally, the user may also provide initial values for weights (CvEMParams.Weights) and/or covariation matrices (CvEMParams.Covs). [CvEM::START_E_STEP] The algorithm starts with M-step. The initial probabilities p_i,k must be provided. [CvEM::START_M_STEP] No values are required from the user, k-means algorithm is used to estimate initial mixtures parameters. [CvEM::START_AUTO_STEP] Cascade classifier class for object detection. Default constructor Loads a classifier from a file. Name of the file from which the classifier is loaded. Releases unmanaged resources Checks whether the classifier has been loaded. Loads a classifier from a file. Name of the file from which the classifier is loaded. The file may contain an old HAAR classifier trained by the haartraining application or a new cascade classifier trained by the traincascade application. Detects objects of different sizes in the input image. The detected objects are returned as a list of rectangles. Matrix of the type CV_8U containing an image where objects are detected. Parameter specifying how much the image size is reduced at each image scale. Parameter specifying how many neighbors each candidate rectangle should have to retain it. Parameter with the same meaning for an old cascade as in the function cvHaarDetectObjects. It is not used for a new cascade. Minimum possible object size. Objects smaller than that are ignored. Maximum possible object size. Objects larger than that are ignored. Vector of rectangles where each rectangle contains the detected object. Detects objects of different sizes in the input image. The detected objects are returned as a list of rectangles. Matrix of the type CV_8U containing an image where objects are detected. Parameter specifying how much the image size is reduced at each image scale. Parameter specifying how many neighbors each candidate rectangle should have to retain it. Parameter with the same meaning for an old cascade as in the function cvHaarDetectObjects. It is not used for a new cascade. Minimum possible object size. Objects smaller than that are ignored. Maximum possible object size. Objects larger than that are ignored. Vector of rectangles where each rectangle contains the detected object. Modes of operation for cvHaarDetectObjects If it is set, the function uses Canny edge detector to reject some image regions that contain too few or too much edges and thus can not contain the searched object. The particular threshold values are tuned for face detection and in this case the pruning speeds up the processing. [CV_HAAR_DO_CANNY_PRUNING] For each scale factor used the function will downscale the image rather than "zoom" the feature coordinates in the classifier cascade. Currently, the option can only be used alone, i.e. the flag can not be set together with the others. [CV_HAAR_SCALE_IMAGE] If it is set, the function finds the largest object (if any) in the image. That is, the output sequence will contain one (or zero) element(s). [CV_HAAR_FIND_BIGGEST_OBJECT] It should be used only when FindBiggestObject is set and min_neighbors > 0. If the flag is set, the function does not look for candidates of a smaller size as soon as it has found the object (with enough neighbor candidates) at the current scale. 
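A minimal sketch of the CascadeClassifier.DetectMultiScale workflow described above; the cascade file name and input image are assumptions, and any trained cascade can be substituted.

```csharp
using OpenCvSharp;

// Hypothetical paths; any trained cascade / image can be substituted.
using var cascade = new CascadeClassifier("haarcascade_frontalface_alt.xml");
using var src  = Cv2.ImRead("people.jpg");
using var gray = new Mat();
Cv2.CvtColor(src, gray, ColorConversionCodes.BGR2GRAY);

// scaleFactor = 1.1, minNeighbors = 3 as described above.
Rect[] faces = cascade.DetectMultiScale(gray, 1.1, 3);
foreach (var face in faces)
    Cv2.Rectangle(src, face, Scalar.Red, 2);
```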
Typically, when min_neighbors is fixed, the mode yields less accurate (a bit larger) object rectangle than the regular single-object mode (flags=FindBiggestObject), but it is much faster, up to an order of magnitude. A greater value of min_neighbors may be specified to improve the accuracy. [CV_HAAR_DO_ROUGH_SEARCH] [HOGDescriptor::L2Hys] HOG (Histogram-of-Oriented-Gradients) Descriptor and Object Detector sizeof(HOGDescriptor) Returns coefficients of the classifier trained for people detection (for default window size). This field returns 1981 SVM coeffs obtained from daimler's base. To use these coeffs the detection window size should be (48,96) Default constructor Creates the HOG descriptor and detector. Detection window size. Align to block size and block stride. Block size in pixels. Align to cell size. Only (16,16) is supported for now. Block stride. It must be a multiple of cell size. Cell size. Only (8, 8) is supported for now. Number of bins. Only 9 bins per cell are supported for now. Gaussian smoothing window parameter. L2-Hys normalization method shrinkage. Flag to specify whether the gamma correction preprocessing is required or not. Maximum number of detection window increases. Initializes from pointer class HOGDescriptor* Releases unmanaged resources Returns coefficients of the classifier trained for people detection (for default window size). This method returns 1981 SVM coeffs obtained from daimler's base. To use these coeffs the detection window size should be (48,96) Performs object detection without a multi-scale window. Source image. CV_8UC1 and CV_8UC4 types are supported for now. Threshold for the distance between features and SVM classifying plane. Usually it is 0 and should be specfied in the detector coefficients (as the last free coefficient). But if the free coefficient is omitted (which is allowed), you can specify it manually here. Window stride. It must be a multiple of block stride. Mock parameter to keep the CPU interface compatibility. It must be (0,0). Left-top corner points of detected objects boundaries. Performs object detection without a multi-scale window. Source image. CV_8UC1 and CV_8UC4 types are supported for now. Threshold for the distance between features and SVM classifying plane. Usually it is 0 and should be specfied in the detector coefficients (as the last free coefficient). But if the free coefficient is omitted (which is allowed), you can specify it manually here. Window stride. It must be a multiple of block stride. Mock parameter to keep the CPU interface compatibility. It must be (0,0). Left-top corner points of detected objects boundaries. Performs object detection with a multi-scale window. Source image. CV_8UC1 and CV_8UC4 types are supported for now. Threshold for the distance between features and SVM classifying plane. Window stride. It must be a multiple of block stride. Mock parameter to keep the CPU interface compatibility. It must be (0,0). Coefficient of the detection window increase. Coefficient to regulate the similarity threshold. When detected, some objects can be covered by many rectangles. 0 means not to perform grouping. Detected objects boundaries. Performs object detection with a multi-scale window. Source image. CV_8UC1 and CV_8UC4 types are supported for now. Threshold for the distance between features and SVM classifying plane. Window stride. It must be a multiple of block stride. Mock parameter to keep the CPU interface compatibility. It must be (0,0). Coefficient of the detection window increase. 
Coefficient to regulate the similarity threshold. When detected, some objects can be covered by many rectangles. 0 means not to perform grouping. Detected objects boundaries. evaluate specified ROI and return confidence value for each location evaluate specified ROI and return confidence value for each location in multiple scales Groups the object candidate rectangles. Input/output vector of rectangles. Output vector includes retained and grouped rectangles. (The Python list is not modified in place.) Input/output vector of weights of rectangles. Output vector includes weights of retained and grouped rectangles. (The Python list is not modified in place.) Minimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it. Relative difference between sides of the rectangles to merge them into a group. struct for detection region of interest (ROI) scale(size) of the bounding box set of requrested locations to be evaluated vector that will contain confidence values for each location Find rectangular regions in the given image that are likely to contain objects and corresponding confidence levels Structure contains the detection information. bounding box for a detected object confidence level class (model or detector) ID that detect an object Default constructor Creates the HOG descriptor and detector. A set of filenames storing the trained detectors (models). Each file contains one model. See examples of such files here /opencv_extra/testdata/cv/latentsvmdetector/models_VOC2007/. A set of trained models names. If it’s empty then the name of each model will be constructed from the name of file containing the model. E.g. the model stored in "/home/user/cat.xml" will get the name "cat". Releases unmanaged resources Clear all trained models and their names stored in an class object. A set of filenames storing the trained detectors (models). Each file contains one model. See examples of such files here /opencv_extra/testdata/cv/latentsvmdetector/models_VOC2007/. A set of trained models names. If it’s empty then the name of each model will be constructed from the name of file containing the model. E.g. the model stored in "/home/user/cat.xml" will get the name "cat". Find rectangular regions in the given image that are likely to contain objects of loaded classes (models) and corresponding confidence levels. An image. Threshold for the non-maximum suppression algorithm. Number of threads used in parallel version of the algorithm. The detections: rectangulars, scores and class IDs. Return the class (model) names that were passed in constructor or method load or extracted from models filenames in those methods. Return a count of loaded models (classes). Releases unmanaged resources sets the epsilon used during the horizontal scan of QR code stop marker detection. Epsilon neighborhood, which allows you to determine the horizontal pattern of the scheme 1:1:3:1:1 according to QR code standard. sets the epsilon used during the vertical scan of QR code stop marker detection. Epsilon neighborhood, which allows you to determine the vertical pattern of the scheme 1:1:3:1:1 according to QR code standard. Detects QR code in image and returns the quadrangle containing the code. grayscale or color (BGR) image containing (or not) QR code. Output vector of vertices of the minimum-area quadrangle containing the code. Decodes QR code in image once it's found by the detect() method. Returns UTF8-encoded output string or empty string if the code cannot be decoded. 
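Returning to the HOGDescriptor people detector described a bit earlier, here is a hedged sketch of the usual setup with the default people-detection coefficients; the input image is an assumption, and the SetSVMDetector/GetDefaultPeopleDetector names are taken from the descriptions above.

```csharp
using OpenCvSharp;

using var img = Cv2.ImRead("street.jpg");   // hypothetical input image
using var hog = new HOGDescriptor();
hog.SetSVMDetector(HOGDescriptor.GetDefaultPeopleDetector());

// Defaults for hitThreshold, winStride, padding and scale are used here.
Rect[] people = hog.DetectMultiScale(img);
foreach (var r in people)
    Cv2.Rectangle(img, r, Scalar.Lime, 2);
```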
grayscale or color (BGR) image containing QR code. Quadrangle vertices found by detect() method (or some other algorithm). The optional output image containing rectified and binarized QR code Both detects and decodes QR code grayscale or color (BGR) image containing QR code. opiotnal output array of vertices of the found QR code quadrangle. Will be empty if not found. The optional output image containing rectified and binarized QR code Class for grouping object candidates, detected by Cascade Classifier, HOG etc. instance of the class is to be passed to cv::partition (see cxoperations.hpp) cv::optflow functions Updates motion history image using the current silhouette Silhouette mask that has non-zero pixels where the motion occurs. Motion history image that is updated by the function (single-channel, 32-bit floating-point). Current time in milliseconds or other units. Maximal duration of the motion track in the same units as timestamp . Computes the motion gradient orientation image from the motion history image Motion history single-channel floating-point image. Output mask image that has the type CV_8UC1 and the same size as mhi. Its non-zero elements mark pixels where the motion gradient data is correct. Output motion gradient orientation image that has the same type and the same size as mhi. Each pixel of the image is a motion orientation, from 0 to 360 degrees. Minimal (or maximal) allowed difference between mhi values within a pixel neighborhood. Maximal (or minimal) allowed difference between mhi values within a pixel neighborhood. That is, the function finds the minimum ( m(x,y) ) and maximum ( M(x,y) ) mhi values over 3x3 neighborhood of each pixel and marks the motion orientation at (x, y) as valid only if: min(delta1, delta2) <= M(x,y)-m(x,y) <= max(delta1, delta2). Computes the global orientation of the selected motion history image part Motion gradient orientation image calculated by the function CalcMotionGradient() . Mask image. It may be a conjunction of a valid gradient mask, also calculated by CalcMotionGradient() , and the mask of a region whose direction needs to be calculated. Motion history image calculated by UpdateMotionHistory() . Timestamp passed to UpdateMotionHistory() . Maximum duration of a motion track in milliseconds, passed to UpdateMotionHistory() . Splits a motion history image into a few parts corresponding to separate independent motions (for example, left hand, right hand). Motion history image. Image where the found mask should be stored, single-channel, 32-bit floating-point. Vector containing ROIs of motion connected components. Current time in milliseconds or other units. Segmentation threshold that is recommended to be equal to the interval between motion history “steps” or greater. computes dense optical flow using Simple Flow algorithm First 8-bit 3-channel image. Second 8-bit 3-channel image Estimated flow Number of layers Size of block through which we sum up when calculate cost function for pixel maximal flow that we search at each level computes dense optical flow using Simple Flow algorithm First 8-bit 3-channel image. 
Second 8-bit 3-channel image Estimated flow Number of layers Size of block through which we sum up when calculate cost function for pixel maximal flow that we search at each level vector smooth spatial sigma parameter vector smooth color sigma parameter window size for postprocess cross bilateral filter spatial sigma for postprocess cross bilateralf filter color sigma for postprocess cross bilateral filter threshold for detecting occlusions window size for bilateral upscale operation spatial sigma for bilateral upscale operation color sigma for bilateral upscale operation threshold to detect point with irregular flow - where flow should be recalculated after upscale The base class for camera response calibration algorithms. Recovers inverse camera response. vector of input images 256x1 matrix with inverse camera response function vector of exposure time values for each image The base class for camera response calibration algorithms. Creates instance by raw pointer cv::ml::Boost* Creates the empty model. number of pixel locations to use smoothness term weight. Greater values produce smoother results, but can alter the response. if true sample pixel locations are chosen at random, otherwise the form a rectangular grid. Releases managed resources Edge preserving filters The inpainting method Navier-Stokes based method. [CV_INPAINT_NS] The method by Alexandru Telea [CV_INPAINT_TELEA] SeamlessClone method The power of the method is fully expressed when inserting objects with complex outlines into a new background. The classic method, color-based selection and alpha masking might be time consuming and often leaves an undesirable halo. Seamless cloning, even averaged with the original image, is not effective. Mixed seamless cloning based on a loose selection proves effective. Feature exchange allows the user to easily replace certain features of one object by alternative features. A simple Hausdorff distance measure between shapes defined by contours according to the paper "Comparing Images using the Hausdorff distance." by D.P. Huttenlocher, G.A. Klanderman, and W.J. Rucklidge. (PAMI 1993). : Complete constructor Flag indicating which norm is used to compute the Hausdorff distance (NORM_L1, NORM_L2). fractional value (between 0 and 1). Releases managed resources Flag indicating which norm is used to compute the Hausdorff distance (NORM_L1, NORM_L2). fractional value (between 0 and 1). Implementation of the Shape Context descriptor and matching algorithm proposed by Belongie et al. in "Shape Matching and Object Recognition Using Shape Contexts" (PAMI2002). This implementation is packaged in a generic scheme, in order to allow you the implementation of the common variations of the original pipeline. Complete constructor The number of angular bins in the shape context descriptor. The number of radial bins in the shape context descriptor. The value of the inner radius. The value of the outer radius. Releases managed resources The number of angular bins in the shape context descriptor. The number of radial bins in the shape context descriptor. The value of the inner radius. The value of the outer radius. The weight of the shape context distance in the final distance value. The weight of the appearance cost in the final distance value. The weight of the Bending Energy in the final distance value. The value of the standard deviation for the Gaussian window for the image appearance cost. Set the images that correspond to each shape. This images are used in the calculation of the Image Appearance cost. 
Image corresponding to the shape defined by contours1. Image corresponding to the shape defined by contours2. Get the images that correspond to each shape. These images are used in the calculation of the Image Appearance cost. Image corresponding to the shape defined by contours1. Image corresponding to the shape defined by contours2. Abstract base class for shape distance algorithms. Compute the shape distance between two shapes defined by their contours. Contour defining the first shape. Contour defining the second shape. High level image stitcher. It's possible to use this class without being aware of the entire stitching pipeline. However, to achieve higher stitching stability and quality of the final images, at least being familiar with the theory is recommended. Status code Mode for creating photo panoramas. Expects images under perspective transformation and projects the resulting pano to a sphere. Mode for composing scans. Expects images under affine transformation; does not compensate exposure by default. Constructor cv::Stitcher* Creates a Stitcher configured in one of the stitching modes. Scenario for stitcher operation. This is usually determined by the source of images to stitch and their transformation. Default parameters will be chosen for operation in the given scenario. Releases managed resources Try to stitch the given images. Input images. Final pano. Status code. Try to stitch the given images. Input images. Final pano. Status code. Try to stitch the given images. Input images. Region of interest rectangles. Final pano. Status code. Try to stitch the given images. Input images. Region of interest rectangles. Final pano. Status code. Clear all inner buffers. Creates instance from cv::Ptr<T>. ptr is disposed when the wrapper disposes. Creates instance from raw pointer T* Releases managed resources Creates instance from cv::Ptr<T>. ptr is disposed when the wrapper disposes. Creates instance from raw pointer T* Releases managed resources Base class for Super Resolution algorithms. Create Bilateral TV-L1 Super Resolution. Create Bilateral TV-L1 Super Resolution. Create Bilateral TV-L1 Super Resolution. Set input frame source for the Super Resolution algorithm. Input frame source Process the next frame from the input and return the output result. Output result Clear all inner buffers. Class for a defined Super Resolution algorithm. Creates instance from cv::Ptr<T>. ptr is disposed when the wrapper disposes. Creates instance from raw pointer T* Releases managed resources Base class BaseOCR declares a common API that would be used in a typical text recognition scenario Recognize text using the tesseract-ocr API. Takes an image as input and returns recognized text in the output_text parameter. Optionally provides also the Rects for individual text elements found (e.g. words), and the list of those text elements with their confidence values. Constructor Creates an instance of the OCRTesseract class. Initializes Tesseract. datapath: the name of the parent directory of tessdata, ending with "/", or null to use the system's default directory. An ISO 639-3 code, or NULL to default to "eng". Specifies the list of characters used for recognition. null defaults to "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ". tesseract-ocr offers different OCR Engine Modes (OEM); by default tesseract::OEM_DEFAULT is used. See the tesseract-ocr API documentation for other possible values. tesseract-ocr offers different Page Segmentation Modes (PSM); by default tesseract::PSM_AUTO (fully automatic layout analysis) is used.
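A hedged sketch of the Stitcher class described above, assuming the wrapper exposes Stitcher.Create with a Mode argument and a Stitch overload returning a Status code; the input file names are illustrative assumptions.

```csharp
using OpenCvSharp;

// Hypothetical overlapping input photos.
using var img1 = Cv2.ImRead("pano1.jpg");
using var img2 = Cv2.ImRead("pano2.jpg");

using var stitcher = Stitcher.Create(Stitcher.Mode.Panorama);
using var pano = new Mat();
var status = stitcher.Stitch(new[] { img1, img2 }, pano);
if (status == Stitcher.Status.OK)       // status value name assumed
    Cv2.ImWrite("pano.jpg", pano);
```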
See the tesseract-ocr API documentation for other possible values. Releases managed resources Recognize text using the tesseract-ocr API. Takes an image as input and returns recognized text in the output_text parameter. Optionally provides also the Rects for individual text elements found (e.g. words), and the list of those text elements with their confidence values. Input image CV_8UC1 or CV_8UC3 Output text of the tesseract-ocr. If provided, the method will output a list of Rects for the individual text elements found (e.g. words or text lines). If provided, the method will output a list of text strings for the recognition of individual text elements found (e.g. words or text lines). If provided, the method will output a list of confidence values for the recognition of individual text elements found (e.g. words or text lines). OCR_LEVEL_WORD (by default), or OCR_LEVEL_TEXT_LINE. Recognize text using the tesseract-ocr API. Takes an image as input and returns recognized text in the output_text parameter. Optionally provides also the Rects for individual text elements found (e.g. words), and the list of those text elements with their confidence values. Input image CV_8UC1 or CV_8UC3 Output text of the tesseract-ocr. If provided, the method will output a list of Rects for the individual text elements found (e.g. words or text lines). If provided, the method will output a list of text strings for the recognition of individual text elements found (e.g. words or text lines). If provided, the method will output a list of confidence values for the recognition of individual text elements found (e.g. words or text lines). OCR_LEVEL_WORD (by default), or OCR_LEVEL_TEXT_LINE. This class is used to track multiple objects using the specified tracker algorithm. The MultiTracker is a naive implementation of multiple object tracking. It processes the tracked objects independently without any optimization across the tracked objects. cv::Ptr<T> Constructor Releases managed resources Add a new object to be tracked. tracking algorithm to be used input image a rectangle representing the ROI of the tracked object Add a set of objects to be tracked. list of tracking algorithms to be used input image list of the tracked objects Update the current tracking status. The result will be saved in the internal storage. input image Update the current tracking status. input image the tracking result, representing a list of ROIs of the tracked objects. Returns a reference to a storage for the tracked objects; each object corresponds to one tracker algorithm Base abstract class for the long-term tracker Releases managed resources Initialize the tracker with a known bounding box surrounding the target The initial frame The initial bounding box Update the tracker, find the new most likely bounding box for the target The current frame The bounding box that represents the new target location if true was returned; not modified otherwise True means that the target was located and false means that the tracker cannot locate the target in the current frame. Note that the latter *does not* imply that the tracker has failed; maybe the target is indeed missing from the frame (say, out of sight) This is a real-time object tracker based on a novel on-line version of the AdaBoost algorithm. The classifier uses the surrounding background as negative examples in the update step to avoid the drifting problem. The implementation is based on @cite OLB.
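To illustrate the Init/Update cycle of the Tracker base class described above, here is a hedged sketch using TrackerKCF (covered below) as one concrete implementation; it assumes the contrib trackers live in OpenCvSharp.Tracking and that Init/Update take a Rect2d as in older OpenCvSharp releases, and the video path and initial box are invented.

```csharp
using OpenCvSharp;
using OpenCvSharp.Tracking;   // namespace assumed for the contrib trackers

using var capture = new VideoCapture("input.mp4");   // hypothetical video
using var frame   = new Mat();
capture.Read(frame);

// Initial bounding box of the target, chosen by hand here.
var bbox = new Rect2d(100, 100, 80, 120);
using var tracker = TrackerKCF.Create();
tracker.Init(frame, bbox);

while (capture.Read(frame) && !frame.Empty())
{
    // Update returns false when the target could not be located in this frame.
    bool found = tracker.Update(frame, ref bbox);
    if (found)
        Cv2.Rectangle(frame,
            new Rect((int)bbox.X, (int)bbox.Y, (int)bbox.Width, (int)bbox.Height),
            Scalar.Yellow, 2);
}
```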
Constructor Constructor BOOSTING parameters the number of classifiers to use in an OnlineBoosting algorithm search region parameters to use in an OnlineBoosting algorithm search region parameters to use in an OnlineBoosting algorithm the initial iterations # features GOTURN (@cite GOTURN) is a kind of tracker based on Convolutional Neural Networks (CNN). While taking all advantages of CNN trackers, GOTURN is much faster due to its offline training without online fine-tuning. The GOTURN tracker addresses the problem of single target tracking: given a bounding box label of an object in the first frame of the video, we track that object through the rest of the video. NOTE: The current method of GOTURN does not handle occlusions; however, it is fairly robust to viewpoint changes, lighting changes, and deformations. Inputs of GOTURN are two RGB patches representing Target and Search patches resized to 227x227. Outputs of GOTURN are predicted bounding box coordinates, relative to the Search patch coordinate system, in the format X1, Y1, X2, Y2. The original paper is here: [http://davheld.github.io/GOTURN/GOTURN.pdf], as well as the original authors' implementation: [https://github.com/davheld/GOTURN#train-the-tracker]. The implementation of the training algorithm is placed separately here due to third-party dependencies: [https://github.com/Auron-X/GOTURN_Training_Toolkit]. The GOTURN architecture goturn.prototxt and trained model goturn.caffemodel are accessible in the opencv_extra GitHub repository. Constructor Constructor GOTURN parameters KCF is a novel tracking framework that utilizes properties of the circulant matrix to enhance the processing speed. This tracking method is an implementation of @cite KCF_ECCV which is extended to KCF with color-names features (@cite KCF_CN). The original paper of KCF is available at [http://www.robots.ox.ac.uk/~joao/publications/henriques_tpami2015.pdf], as well as the matlab implementation. For more information about KCF with color-names features, please refer to [http://www.cvl.isy.liu.se/research/objrec/visualtracking/colvistrack/index.html]. Constructor Constructor KCF parameters TrackerKCF::Params Releases managed resources detection confidence threshold Gaussian kernel bandwidth regularization linear interpolation factor for adaptation spatial bandwidth (proportional to target) compression learning rate activate the resize feature to improve the processing speed split the training coefficients into two matrices wrap around the kernel values activate the PCA method to compress the features threshold for the ROI size feature size after compression compressed descriptors of TrackerKCF::MODE non-compressed descriptors of TrackerKCF::MODE Median Flow tracker implementation. The tracker is suitable for very smooth and predictable movements when the object is visible throughout the whole sequence. It's quite accurate for this type of problem (in particular, it was shown by the authors to outperform MIL). During the implementation period the code at [http://www.aonsquared.co.uk/node/5], courtesy of the author Arthur Amarra, was used for reference purposes.
Constructor Constructor MedianFlow parameters square root of the number of keypoints used; increase it to trade accuracy for speed window size parameter for Lucas-Kanade optical flow maximal pyramid level number for Lucas-Kanade optical flow termination criteria for Lucas-Kanade optical flow window size around a point for the normalized cross-correlation check criterion for losing the tracked object The MIL algorithm trains a classifier in an online manner to separate the object from the background. Multiple Instance Learning avoids the drift problem for robust tracking. The implementation is based on @cite MIL. Original code can be found here [http://vision.ucsd.edu/~bbabenko/project_miltrack.shtml] Constructor Constructor MIL parameters radius for gathering positive instances during init # negative samples to use during init size of search window radius for gathering positive instances during tracking # positive samples to use during tracking # negative samples to use during tracking # features MOSSE tracker. This tracker works with grayscale images; if passed BGR ones, they will get converted internally. Constructor TLD is a novel tracking framework that explicitly decomposes the long-term tracking task into tracking, learning and detection. The tracker follows the object from frame to frame. The detector localizes all appearances that have been observed so far and corrects the tracker if necessary. The learning estimates the detector's errors and updates it to avoid these errors in the future. The implementation is based on @cite TLD. The Median Flow algorithm (see cv::TrackerMedianFlow) was chosen as a tracking component in this implementation, following the authors. The tracker is supposed to be able to handle rapid motions, partial occlusions, object absence etc. Constructor Constructor TLD parameters channel indices for multi-head camera live streams Depth values in mm (CV_16UC1) XYZ in meters (CV_32FC3) Disparity in pixels (CV_8UC1) Disparity in pixels (CV_32FC1) CV_8UC1 Camera device types autodetect platform native platform native platform native IEEE 1394 drivers IEEE 1394 drivers IEEE 1394 drivers IEEE 1394 drivers IEEE 1394 drivers QuickTime Unicap drivers DirectShow (via videoInput) PvAPI, Prosilica GigE SDK OpenNI (for Kinect) OpenNI (for Asus Xtion) Android XIMEA Camera API AVFoundation framework for iOS (OS X Lion will have the same API) Smartek Giganetix GigEVisionSDK Microsoft Media Foundation (via videoInput) Microsoft Windows Runtime using Media Foundation Intel Perceptual Computing SDK OpenNI2 (for Kinect) OpenNI2 (for Asus Xtion and Occipital Structure sensors) gPhoto2 connection GStreamer Open and record video file or stream using the FFMPEG library OpenCV Image Sequence (e.g. img_%02d.jpg) Aravis SDK Position in relative units Start of the file End of the file Property identifiers for CvCapture Position in milliseconds from the file beginning Position in frames (only for video files) Position in relative units (0 - start of the file, 1 - end of the file) Width of frames in the video stream (only for cameras) Height of frames in the video stream (only for cameras) Frame rate (only for cameras) 4-character code of codec (only for cameras).
Number of frames in the video stream The format of the Mat objects returned by retrieve() A backend-specific value indicating the current capture mode Brightness of image (only for cameras) contrast of image (only for cameras) Saturation of image (only for cameras) hue of image (only for cameras) Gain of the image (only for cameras) Exposure (only for cameras) Boolean flags indicating whether images should be converted to RGB TOWRITE (note: only supported by DC1394 v 2.x backend currently) exposure control done by camera, user can adjust refernce level using this feature Pop up video/camera filter dialog (note: only supported by DSHOW backend currently. Property value is ignored) in mm in mm in pixels flag that synchronizes the remapping depth map to image map by changing depth generator's view point (if the flag is "on") or sets this view point to its normal one (if the flag is "off"). default is 1 ip for anable multicast master mode. 0 for disable multicast Determines how a frame is initiated Horizontal sub-sampling of the image Vertical sub-sampling of the image Horizontal binning factor Vertical binning factor Pixel format Change image resolution by binning or skipping. Output data format. Horizontal offset from the origin to the area of interest (in pixels). Vertical offset from the origin to the area of interest (in pixels). Defines source of trigger. Generates an internal trigger. PRM_TRG_SOURCE must be set to TRG_SOFTWARE. Selects general purpose input Set general purpose input mode Get general purpose level Selects general purpose output Set general purpose output mode Selects camera signalling LED Define camera signalling LED functionality Calculates White Balance(must be called during acquisition) Automatic white balance Automatic exposure/gain Exposure priority (0.5 - exposure 50%, gain 50%). Maximum limit of exposure in AEAG procedure Maximum limit of gain in AEAG procedure Average intensity of output signal AEAG should achieve(in %) Image capture timeout in milliseconds Capture only preview from liveview mode. Readonly, returns (const char *). Trigger, only by set. Reload camera settings. Reload all settings on set. Collect messages with details. Readonly, returns (const char *). Exposure speed. Can be readonly, depends on camera program. Aperture. Can be readonly, depends on camera program. Camera exposure program. Enter liveview mode. Capture type of CvCapture (Camera or AVI file) Captures from an AVI file Captures from digital camera 4-character code of codec used to compress the frames. Video capturing class Capture type (File or Camera) Initializes empty capture. To use this, you should call Open. Allocates and initialized the CvCapture structure for reading a video stream from the camera. Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL); and two on Linux: V4L and FireWire (IEEE1394). Index of the camera to be used. If there is only one camera or it does not matter what camera to use -1 may be passed. Allocates and initialized the CvCapture structure for reading a video stream from the camera. Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL); and two on Linux: V4L and FireWire (IEEE1394). Device type Allocates and initialized the CvCapture structure for reading a video stream from the camera. 
Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL); and two on Linux: V4L and FireWire (IEEE1394). Device type Index of the camera to be used. If there is only one camera or it does not matter what camera to use -1 may be passed. Allocates and initialized the CvCapture structure for reading a video stream from the camera. Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL); and two on Linux: V4L and FireWire (IEEE1394). Index of the camera to be used. If there is only one camera or it does not matter what camera to use -1 may be passed. Allocates and initialized the CvCapture structure for reading a video stream from the camera. Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL); and two on Linux: V4L and FireWire (IEEE1394). Device type Allocates and initialized the CvCapture structure for reading a video stream from the camera. Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL); and two on Linux: V4L and FireWire (IEEE1394). Device type Index of the camera to be used. If there is only one camera or it does not matter what camera to use -1 may be passed. Allocates and initialized the CvCapture structure for reading the video stream from the specified file. After the allocated structure is not used any more it should be released by cvReleaseCapture function. Name of the video file. Allocates and initialized the CvCapture structure for reading the video stream from the specified file. After the allocated structure is not used any more it should be released by cvReleaseCapture function. Name of the video file. Initializes from native pointer CvCapture* Releases unmanaged resources Gets the capture type (File or Camera) Gets or sets film current position in milliseconds or video capture timestamp Gets or sets 0-based index of the frame to be decoded/captured next Gets or sets relative position of video file Gets or sets width of frames in the video stream Gets or sets height of frames in the video stream Gets or sets frame rate Gets or sets 4-character code of codec Gets number of frames in video file Gets or sets brightness of image (only for cameras) Gets or sets contrast of image (only for cameras) Gets or sets saturation of image (only for cameras) Gets or sets hue of image (only for cameras) The format of the Mat objects returned by retrieve() A backend-specific value indicating the current capture mode Gain of the image (only for cameras) Exposure (only for cameras) Boolean flags indicating whether images should be converted to RGB TOWRITE (note: only supported by DC1394 v 2.x backend currently) exposure control done by camera, user can adjust refernce level using this feature [CV_CAP_PROP_AUTO_EXPOSURE] [CV_CAP_PROP_TEMPERATURE] [CV_CAP_PROP_OPENNI_OUTPUT_MODE] in mm [CV_CAP_PROP_OPENNI_FRAME_MAX_DEPTH] in mm [CV_CAP_PROP_OPENNI_BASELINE] in pixels [CV_CAP_PROP_OPENNI_FOCAL_LENGTH] flag that synchronizes the remapping depth map to image map by changing depth generator's view point (if the flag is "on") or sets this view point to its normal one (if the flag is "off"). 
[CV_CAP_PROP_OPENNI_REGISTRATION] [CV_CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE] [CV_CAP_OPENNI_DEPTH_GENERATOR_BASELINE] [CV_CAP_OPENNI_DEPTH_GENERATOR_FOCAL_LENGTH] [CV_CAP_OPENNI_DEPTH_GENERATOR_REGISTRATION_ON] default is 1 [CV_CAP_GSTREAMER_QUEUE_LENGTH] ip for anable multicast master mode. 0 for disable multicast [CV_CAP_PROP_PVAPI_MULTICASTIP] Change image resolution by binning or skipping. [CV_CAP_PROP_XI_DOWNSAMPLING] Output data format. [CV_CAP_PROP_XI_DATA_FORMAT] Horizontal offset from the origin to the area of interest (in pixels). [CV_CAP_PROP_XI_OFFSET_X] Vertical offset from the origin to the area of interest (in pixels). [CV_CAP_PROP_XI_OFFSET_Y] Defines source of trigger. [CV_CAP_PROP_XI_TRG_SOURCE] Generates an internal trigger. PRM_TRG_SOURCE must be set to TRG_SOFTWARE. [CV_CAP_PROP_XI_TRG_SOFTWARE] Selects general purpose input [CV_CAP_PROP_XI_GPI_SELECTOR] Set general purpose input mode [CV_CAP_PROP_XI_GPI_MODE] Get general purpose level [CV_CAP_PROP_XI_GPI_LEVEL] Selects general purpose output [CV_CAP_PROP_XI_GPO_SELECTOR] Set general purpose output mode [CV_CAP_PROP_XI_GPO_MODE] Selects camera signalling LED [CV_CAP_PROP_XI_LED_SELECTOR] Define camera signalling LED functionality [CV_CAP_PROP_XI_LED_MODE] Calculates White Balance(must be called during acquisition) [CV_CAP_PROP_XI_MANUAL_WB] Automatic white balance [CV_CAP_PROP_XI_AUTO_WB] Automatic exposure/gain [CV_CAP_PROP_XI_AEAG] Exposure priority (0.5 - exposure 50%, gain 50%). [CV_CAP_PROP_XI_EXP_PRIORITY] Maximum limit of exposure in AEAG procedure [CV_CAP_PROP_XI_AE_MAX_LIMIT] Maximum limit of gain in AEAG procedure [CV_CAP_PROP_XI_AG_MAX_LIMIT] default is 1 [CV_CAP_PROP_XI_AEAG_LEVEL] default is 1 [CV_CAP_PROP_XI_TIMEOUT] Retrieves the specified property of camera or video file. property identifier. property value Retrieves the specified property of camera or video file. property identifier. property value Grabs the frame from camera or file. The grabbed frame is stored internally. The purpose of this function is to grab frame fast that is important for syncronization in case of reading from several cameras simultaneously. The grabbed frames are not exposed because they may be stored in compressed format (as defined by camera/driver). To retrieve the grabbed frame, cvRetrieveFrame should be used. Decodes and returns the grabbed video frame. non-zero streamIdx is only valid for multi-head camera live streams Returns the pointer to the image grabbed with cvGrabFrame function. The returned image should not be released or modified by user. non-zero streamIdx is only valid for multi-head camera live streams Decodes and returns the grabbed video frame. Grabs a frame from camera or video file, decompresses and returns it. This function is just a combination of cvGrabFrame and cvRetrieveFrame in one call. The returned image should not be released or modified by user. Sets the specified property of video capturing. property identifier. value of the property. Sets the specified property of video capturing. property identifier. value of the property. Opens the specified video file Allocates and initialized the CvCapture structure for reading a video stream from the camera. Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL); and two on Linux: V4L and FireWire (IEEE1394). Index of the camera to be used. If there is only one camera or it does not matter what camera to use -1 may be passed. 
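A minimal sketch of the VideoCapture grab/retrieve loop described above in its usual combined Read form; camera index 0 and the property values are assumptions for illustration.

```csharp
using OpenCvSharp;

// Open the default camera (index 0); a file path could be passed instead.
using var capture = new VideoCapture(0);
capture.FrameWidth  = 640;
capture.FrameHeight = 480;

using var frame = new Mat();
while (capture.IsOpened())
{
    // Read combines Grab + Retrieve; it returns false when no more frames are available.
    if (!capture.Read(frame) || frame.Empty())
        break;
    Cv2.ImShow("camera", frame);
    if (Cv2.WaitKey(1) == 'q')
        break;
}
```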
Allocates and initialized the CvCapture structure for reading a video stream from the camera. Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL); and two on Linux: V4L and FireWire (IEEE1394). Device type Allocates and initialized the CvCapture structure for reading a video stream from the camera. Currently two camera interfaces can be used on Windows: Video for Windows (VFW) and Matrox Imaging Library (MIL); and two on Linux: V4L and FireWire (IEEE1394). Device type Index of the camera to be used. If there is only one camera or it does not matter what camera to use -1 may be passed. Closes video file or capturing device. Returns true if video capturing has been initialized already. For accessing each byte of Int32 value AVI Video File Writer Creates video writer structure. Name of the output video file. 4-character code of codec used to compress the frames. For example, "PIM1" is MPEG-1 codec, "MJPG" is motion-jpeg codec etc. Under Win32 it is possible to pass null in order to choose compression method and additional compression parameters from dialog. Framerate of the created video stream. Size of video frames. If it is true, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only). Creates video writer structure. Name of the output video file. 4-character code of codec used to compress the frames. For example, "PIM1" is MPEG-1 codec, "MJPG" is motion-jpeg codec etc. Under Win32 it is possible to pass null in order to choose compression method and additional compression parameters from dialog. Framerate of the created video stream. Size of video frames. If it is true, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only). Creates video writer structure. Name of the output video file. 4-character code of codec used to compress the frames. For example, "PIM1" is MPEG-1 codec, "MJPG" is motion-jpeg codec etc. Under Win32 it is possible to pass null in order to choose compression method and additional compression parameters from dialog. Framerate of the created video stream. Size of video frames. If it is true, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only). Initializes from native pointer CvVideoWriter* Releases unmanaged resources Get output video file name Frames per second of the output vide Get size of frame image Get whether output frames is color or not Creates video writer structure. Name of the output video file. 4-character code of codec used to compress the frames. For example, "PIM1" is MPEG-1 codec, "MJPG" is motion-jpeg codec etc. Under Win32 it is possible to pass null in order to choose compression method and additional compression parameters from dialog. Framerate of the created video stream. Size of video frames. If it is true, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only). Creates video writer structure. Name of the output video file. 4-character code of codec used to compress the frames. For example, "PIM1" is MPEG-1 codec, "MJPG" is motion-jpeg codec etc. Under Win32 it is possible to pass null in order to choose compression method and additional compression parameters from dialog. Framerate of the created video stream. Size of video frames. 
If it is true, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only). Creates video writer structure. Name of the output video file. 4-character code of codec used to compress the frames. For example, "PIM1" is MPEG-1 codec, "MJPG" is motion-jpeg codec etc. Under Win32 it is possible to pass null in order to choose compression method and additional compression parameters from dialog. Framerate of the created video stream. Size of video frames. If it is true, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only). Returns true if video writer has been successfully initialized. Writes/appends one frame to video file. the written frame. Concatenates 4 chars to a fourcc code. This static method constructs the fourcc code of the codec to be used in the constructor VideoWriter::VideoWriter or VideoWriter::open. Concatenates 4 chars to a fourcc code. This static method constructs the fourcc code of the codec to be used in the constructor VideoWriter::VideoWriter or VideoWriter::open. The Base Class for Background/Foreground Segmentation. The class is only used to define the common interface for the whole family of background/foreground segmentation algorithms. the update operator that takes the next video frame and returns the current foreground mask as an 8-bit binary image. computes a background image K nearest neighbours algorithm cv::Ptr<T> Releases managed resources The Base Class for Background/Foreground Segmentation. The class is only used to define the common interface for the whole family of background/foreground segmentation algorithms. cv::Ptr<T> Releases managed resources Clear all inner buffers. Creates instance from cv::Ptr<T>. ptr is disposed when the wrapper disposes. Creates instance from raw pointer T* Releases managed resources Kalman filter. The class implements the standard Kalman filter (http://en.wikipedia.org/wiki/Kalman_filter). However, you can modify KalmanFilter::transitionMatrix, KalmanFilter::controlMatrix and KalmanFilter::measurementMatrix to get the extended Kalman filter functionality. the default constructor the full constructor taking the dimensionality of the state, of the measurement and of the control vector Releases unmanaged resources predicted state (x'(k)): x'(k)=A*x(k-1)+B*u(k) corrected state (x(k)): x(k)=x'(k)+K(k)*(z(k)-H*x'(k)) state transition matrix (A) control matrix (B) (not used if there is no control) measurement matrix (H) process noise covariance matrix (Q) measurement noise covariance matrix (R) priori error estimate covariance matrix (P'(k)): P'(k)=A*P(k-1)*At + Q Kalman gain matrix (K(k)): K(k)=P'(k)*Ht*inv(H*P'(k)*Ht+R) posteriori error estimate covariance matrix (P(k)): P(k)=(I-K(k)*H)*P'(k) re-initializes Kalman filter. The previous content is destroyed. computes predicted state updates the predicted state from the measurement cv::calcOpticalFlowPyrLK flags BRIEF Descriptor cv::Ptr<T> Constructor bytes is the length of the descriptor in bytes. It can be 16, 32 or 64 bytes. Releases managed resources FREAK implementation Constructor enable orientation normalization enable scale normalization scaling of the description pattern number of octaves covered by the detected keypoints (optional) user defined selected pairs Releases managed resources LATCH Descriptor. latch Class for computing the LATCH descriptor.
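Returning to the KalmanFilter class documented above: its members map one-to-one onto the textbook predict/correct equations, so a constant-velocity 2D tracker takes only a few lines of setup. This is a minimal sketch (not from the original docs); the C# property names (TransitionMatrix, MeasurementMatrix, ProcessNoiseCov, ...) are assumed to mirror the C++ fields listed above:

```csharp
using OpenCvSharp;

static class KalmanSketch
{
    static void Main()
    {
        // State [x, y, vx, vy], measurement [x, y], no control vector.
        using (var kf = new KalmanFilter(4, 2, 0))
        {
            // A: constant-velocity transition matrix (dt = 1).
            kf.TransitionMatrix = new Mat(4, 4, MatType.CV_32FC1, new float[]
            {
                1, 0, 1, 0,
                0, 1, 0, 1,
                0, 0, 1, 0,
                0, 0, 0, 1,
            });
            // H: we observe position only.
            kf.MeasurementMatrix = new Mat(2, 4, MatType.CV_32FC1, new float[]
            {
                1, 0, 0, 0,
                0, 1, 0, 0,
            });
            Cv2.SetIdentity(kf.ProcessNoiseCov, Scalar.All(1e-4));     // Q
            Cv2.SetIdentity(kf.MeasurementNoiseCov, Scalar.All(1e-1)); // R
            Cv2.SetIdentity(kf.ErrorCovPost, Scalar.All(1));           // P(k)

            // One predict/correct cycle per observation z(k) (values here are hypothetical).
            using (var measurement = new Mat(2, 1, MatType.CV_32FC1, new float[] { 12f, 34f }))
            {
                Mat predicted = kf.Predict();            // x'(k) = A*x(k-1) + B*u(k)
                Mat corrected = kf.Correct(measurement); // x(k)  = x'(k) + K(k)*(z(k) - H*x'(k))
                System.Console.WriteLine($"x={corrected.At<float>(0)}, y={corrected.At<float>(1)}");
            }
        }
    }
}
```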
If you find this code useful, please add a reference to the following paper in your work: Gil Levi and Tal Hassner, "LATCH: Learned Arrangements of Three Patch Codes", arXiv preprint arXiv:1501.03719, 15 Jan. 2015. Note: a complete example can be found under /samples/cpp/tutorial_code/xfeatures2D/latch_match.cpp Constructor the size of the descriptor - can be 64, 32, 16, 8, 4, 2 or 1 whether or not the descriptor should compensate for orientation changes. half the size of the mini-patches. For example, if we would like to compare triplets of patches of size 7x7, then the half_ssd_size should be (7-1)/2 = 3. sigma value for GaussianBlur smoothing of the source image. The source image will be used without smoothing if the sigma value is 0. Note: the descriptor can be coupled with any keypoint extractor. The only demand is that if you set rotationInvariance = true, then you will have to use an extractor which estimates the patch orientation (in degrees). Examples of such extractors are ORB and SIFT. Releases managed resources Class implementing the locally uniform comparison image descriptor, described in @cite LUCID. An image descriptor that can be computed very fast, while being about as robust as, for example, SURF or BRIEF. @note It requires a color image as input. Constructor kernel for descriptor construction, where 1=3x3, 2=5x5, 3=7x7 and so forth kernel for blurring image prior to descriptor construction, where 1=3x3, 2=5x5, 3=7x7 and so forth Releases managed resources SIFT implementation. Creates instance by raw pointer cv::SIFT* The SIFT constructor. The number of best features to retain. The features are ranked by their scores (measured in the SIFT algorithm as the local contrast) The number of layers in each octave. 3 is the value used in the D. Lowe paper. The number of octaves is computed automatically from the image resolution. The contrast threshold used to filter out weak features in semi-uniform (low-contrast) regions. The larger the threshold, the fewer features are produced by the detector. The threshold used to filter out edge-like features. Note that its meaning is different from the contrastThreshold, i.e. the larger the edgeThreshold, the fewer features are filtered out (more features are retained). The sigma of the Gaussian applied to the input image at octave #0. If your image is captured with a weak camera with soft lenses, you might want to reduce the number. Releases managed resources The "Star" Detector Constructor Releases managed resources Class for extracting Speeded Up Robust Features from an image. Creates instance by raw pointer cv::SURF* The SURF constructor. Only features with keypoint.hessian larger than that are extracted. The number of gaussian pyramid octaves that the detector uses. It is set to 4 by default. If you want to get very large features, use the larger value. If you want just small features, decrease it. The number of images within each octave of a gaussian pyramid. It is set to 2 by default. false means basic descriptors (64 elements each), true means extended descriptors (128 elements each) false means that the detector computes the orientation of each feature. true means that the orientation is not computed (which is much, much faster). Releases managed resources Threshold for the keypoint detector. Only features whose hessian is larger than hessianThreshold are retained by the detector. Therefore, the larger the value, the fewer keypoints you will get. A good default value could be from 300 to 500, depending on the image contrast.
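Before the remaining SURF properties, a minimal sketch (not part of the original reference) tying the SIFT constructor parameters above to actual detection and matching. It assumes the Feature2D-style API (SIFT.Create, DetectAndCompute, BFMatcher) of recent OpenCvSharp releases; older versions expose a constructor instead:

```csharp
using OpenCvSharp;
using OpenCvSharp.Features2D; // assumed namespace; SIFT lives here in recent versions

static class SiftMatchSketch
{
    static void Main()
    {
        using (var img1 = Cv2.ImRead("scene1.png", ImreadModes.Grayscale))
        using (var img2 = Cv2.ImRead("scene2.png", ImreadModes.Grayscale))
        // nFeatures, nOctaveLayers, contrastThreshold, edgeThreshold, sigma
        // (same meaning as in the constructor documentation above).
        using (var sift = SIFT.Create(500, 3, 0.04, 10, 1.6))
        using (var desc1 = new Mat())
        using (var desc2 = new Mat())
        {
            sift.DetectAndCompute(img1, null, out KeyPoint[] kp1, desc1);
            sift.DetectAndCompute(img2, null, out KeyPoint[] kp2, desc2);

            // Brute-force matching with the L2 norm, which suits SIFT/SURF float descriptors.
            using (var matcher = new BFMatcher(NormTypes.L2, true))
            {
                DMatch[] matches = matcher.Match(desc1, desc2);
                System.Console.WriteLine($"{kp1.Length}/{kp2.Length} keypoints, {matches.Length} matches");
            }
        }
    }
}
```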
The number of gaussian pyramid octaves that the detector uses. It is set to 4 by default. If you want to get very large features, use the larger value. If you want just small features, decrease it. The number of images within each octave of a gaussian pyramid. It is set to 2 by default. false means that the basic descriptors (64 elements each) shall be computed. true means that the extended descriptors (128 elements each) shall be computed false means that the detector computes the orientation of each feature. true means that the orientation is not computed (which is much, much faster). For example, if you match images from a stereo pair, or do image stitching, the matched features likely have very similar angles, and you can speed up feature extraction by setting upright=true. cv::ximgproc functions Strategy for the selective search segmentation algorithm. Create a new color-based strategy Create a new size-based strategy Create a new texture-based strategy Create a new fill-based strategy Create a new multiple strategy Create a new multiple strategy and set one substrategy The first strategy Create a new multiple strategy and set one substrategy The first strategy The second strategy Create a new multiple strategy and set one substrategy The first strategy The second strategy The third strategy Create a new multiple strategy and set one substrategy The first strategy The second strategy The third strategy The fourth strategy Applies Niblack thresholding to the input image. The function transforms a grayscale image into a binary image according to the formulae: THRESH_BINARY: dst(x, y) = maxValue if src(x, y) > T(x, y), otherwise 0; THRESH_BINARY_INV: dst(x, y) = 0 if src(x, y) > T(x, y), otherwise maxValue; where T(x, y) is a threshold calculated individually for each pixel. The threshold value T(x, y) is the mean minus delta times the standard deviation of the blockSize x blockSize neighborhood of (x, y). The function can't process the image in-place. Source 8-bit single-channel image. Destination image of the same size and the same type as src. Non-zero value assigned to the pixels for which the condition is satisfied, used with the THRESH_BINARY and THRESH_BINARY_INV thresholding types. Thresholding type, see cv::ThresholdTypes. Size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, and so on. Constant multiplied with the standard deviation and subtracted from the mean. Normally, it is taken to be a real number between 0 and 1. Applies a binary blob thinning operation to achieve a skeletonization of the input image. The function transforms a binary blob image into a skeletonized form using the technique of Zhang-Suen. Source 8-bit single-channel image, containing binary blobs, with blobs having 255 pixel values. Destination image of the same size and the same type as src. The function can work in-place. Value that defines which thinning algorithm should be used. Performs anisotropic diffusion on an image. The function applies Perona-Malik anisotropic diffusion to an image. Grayscale source image. Destination image of the same size and the same number of channels as src. The amount of time to step forward by on each iteration (normally, it's between 0 and 1).
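A minimal sketch (not in the original docs) combining the niBlackThreshold and thinning functions described above; the remaining anisotropicDiffusion parameters continue below. The wrapper names used here (the OpenCvSharp.XImgProc namespace, CvXImgProc.NiblackThreshold, CvXImgProc.Thinning, ThinningTypes.ZHANGSUEN) are assumptions about how OpenCvSharp exposes cv::ximgproc and may differ by version:

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc; // assumed namespace for the cv::ximgproc wrappers

static class NiblackThinningSketch
{
    static void Main()
    {
        using (var src = Cv2.ImRead("text.png", ImreadModes.Grayscale))
        using (var binary = new Mat())
        using (var skeleton = new Mat())
        {
            // T(x, y) = mean - k * stddev over a blockSize x blockSize neighborhood,
            // as in the formula above; k (delta) is typically between 0 and 1.
            // Arguments: src, dst, maxValue, thresholding type, blockSize, k.
            CvXImgProc.NiblackThreshold(src, binary, 255, ThresholdTypes.Binary, 25, 0.5);

            // Zhang-Suen thinning reduces the binary blobs to a 1-pixel-wide skeleton.
            CvXImgProc.Thinning(binary, skeleton, ThinningTypes.ZHANGSUEN);

            Cv2.ImWrite("skeleton.png", skeleton);
        }
    }
}
```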
sensitivity to the edges The number of iterations Creates a smart pointer to a FastLineDetector object and initializes it Segments shorter than this will be discarded A point placed farther from a hypothesis line segment than this will be regarded as an outlier First threshold for hysteresis procedure in Canny() Second threshold for hysteresis procedure in Canny() Aperture size for the Sobel operator in Canny() If true, incremental merging of segments will be performed Creates an EdgeBoxes step size of sliding window search. nms threshold for object proposals. adaptation rate for nms threshold. min score of boxes to detect. max number of boxes to detect. edge min magnitude. Increase to trade off accuracy for speed. edge merge threshold. Increase to trade off accuracy for speed. cluster min magnitude. Increase to trade off accuracy for speed. max aspect ratio of boxes. minimum area of boxes. affinity sensitivity. scale sensitivity. Creates an RFFeatureGetter Creates a StructuredEdgeDetection name of the file where the model is stored optional object inheriting from RFFeatureGetter. You need it only if you would like to train your own forest, pass null otherwise Calculates the 2D Fast Hough transform of an image. The source (input) image. The destination image, result of transformation. The depth of destination image The part of Hough space to calculate, see cv::AngleRangeOption The operation to be applied, see cv::HoughOp Specifies to do or not to do image skewing, see cv::HoughDeskewOption Calculates coordinates of the line segment corresponding to a point in Hough space. If the rules parameter is set to RO_STRICT, the returned line is cut along the border of the source image. If the rules parameter is set to RO_WEAK, then for a point which belongs to the incorrect part of the Hough image, the returned line will not intersect the source image. Point in Hough space. The source (input) image of Hough transform. The part of Hough space where the point is situated, see cv::AngleRangeOption Specifies to do or not to do image skewing, see cv::HoughDeskewOption Specifies strictness of line segment calculating, see cv::RulesOption Coordinates of the line segment corresponding to the point in Hough space. Applies weighted median filter to an image. For more details about this implementation, please see @cite zhang2014100+ Joint 8-bit, 1-channel or 3-channel image. Source 8-bit or floating-point, 1-channel or 3-channel image. Destination image. Radius of filtering kernel, should be a positive integer. Filter range standard deviation for the joint image. The type of weight definition, see WMFWeightType A 0-1 mask that has the same size as I. This mask is used to ignore the effect of some pixels. If the pixel value on mask is 0, the pixel will be ignored when maintaining the joint-histogram. This is useful for applications like optical flow occlusion handling. Computes the estimated covariance matrix of an image using the sliding window formulation. The window size parameters control the accuracy of the estimation. The sliding window moves over the entire image from the top-left corner to the bottom right corner. Each location of the window represents a sample. If the window is the size of the image, then this gives the exact covariance matrix. For all other cases, the sizes of the window will impact the number of samples and the number of elements in the estimated covariance matrix. The source image. Input image must be of a complex type. The destination estimated covariance matrix. Output matrix will be size (windowRows*windowCols, windowRows*windowCols).
The number of rows in the window. The number of cols in the window. Class implementing EdgeBoxes algorithm from @cite ZitnickECCV14edgeBoxes Creates instance by raw pointer Releases managed resources Creates a EdgeBoxes step size of sliding window search. nms threshold for object proposals. adaptation rate for nms threshold. min score of boxes to detect. max number of boxes to detect. edge min magnitude. Increase to trade off accuracy for speed. edge merge threshold. Increase to trade off accuracy for speed. cluster min magnitude. Increase to trade off accuracy for speed. max aspect ratio of boxes. minimum area of boxes. affinity sensitivity. scale sensitivity. Gets or sets the step size of sliding window search. Gets or sets the nms threshold for object proposals. Gets or sets adaptation rate for nms threshold. Gets or sets the min score of boxes to detect. Gets or sets the max number of boxes to detect. Gets or sets the edge min magnitude. Gets or sets the edge merge threshold. Gets or sets the cluster min magnitude. Gets or sets the max aspect ratio of boxes. Gets or sets the minimum area of boxes. Gets or sets the affinity sensitivity. Gets or sets the scale sensitivity. Returns array containing proposal boxes. edge image. orientation map. proposal boxes. Specifies the part of Hough space to calculate The enum specifies the part of Hough space to calculate. Each member specifies primarily direction of lines(horizontal or vertical) and the direction of angle changes. Direction of angle changes is from multiples of 90 to odd multiples of 45. The image considered to be written top-down and left-to-right. Angles are started from vertical line and go clockwise. Separate quarters and halves are written in orientation they should be in full Hough space. Vertical primarily direction and clockwise angle changes Horizontal primarily direction and counterclockwise angle changes Horizontal primarily direction and clockwise angle changes Vertical primarily direction and counterclockwise angle changes Vertical primarily direction Horizontal primarily direction Full set of directions 90 +/- atan(0.5), interval approximately from 64.5 to 116.5 degrees. It is used for calculating Fast Hough Transform for images skewed by atan(0.5). +/- atan(0.5), interval approximately from 333.5(-26.5) to 26.5 degrees It is used for calculating Fast Hough Transform for images skewed by atan(0.5). Specifies to do or not to do skewing of Hough transform image The enum specifies to do or not to do skewing of Hough transform image so it would be no cycling in Hough transform image through borders of image. Use raw cyclic image Prepare deskewed image Specifies binary operations. The enum specifies binary operations, that is such ones which involve two operands. Formally, a binary operation @f$ f @f$ on a set @f$ S @f$ is a binary relation that maps elements of the Cartesian product @f$ S \times S @f$ to @f$ S @f$: @f[ f: S \times S \to S @f] Binary minimum operation. The constant specifies the binary minimum operation @f$ f @f$ that is defined as follows: @f[ f(x, y) = \min(x, y) @f] Binary maximum operation. The constant specifies the binary maximum operation @f$ f @f$ that is defined as follows: @f[ f(x, y) = \max(x, y) @f] Binary addition operation. The constant specifies the binary addition operation @f$ f @f$ that is defined as follows: @f[ f(x, y) = x + y @f] Binary average operation. 
The constant specifies the binary average operation @f$ f @f$ that is defined as follows: @f[ f(x, y) = \frac{x + y}{2} @f] Specifies the degree of rules validation. The enum specifies the degree of rules validation. This can be used, for example, to choose a proper way of validating input arguments. Validate each rule in a proper way. Skip validations of image borders. thinning algorithm Thinning technique of Zhang-Suen Thinning technique of Guo-Hall Specifies weight types of weighted median filter. \f$exp(-|I1-I2|^2/(2*sigma^2))\f$ \f$(|I1-I2|+sigma)^-1\f$ \f$(|I1-I2|^2+sigma^2)^-1\f$ \f$dot(I1,I2)/(|I1|*|I2|)\f$ \f$(min(r1,r2)+min(g1,g2)+min(b1,b2))/(max(r1,r2)+max(g1,g2)+max(b1,b2))\f$ unweighted Class implementing the FLD (Fast Line Detector) algorithm described in @cite Lee14. Creates instance by raw pointer Releases managed resources Creates a smart pointer to a FastLineDetector object and initializes it Segments shorter than this will be discarded A point placed farther from a hypothesis line segment than this will be regarded as an outlier First threshold for hysteresis procedure in Canny() Second threshold for hysteresis procedure in Canny() Aperture size for the Sobel operator in Canny() If true, incremental merging of segments will be performed Finds lines in the input image. This is the output of the default parameters of the algorithm on the above shown image. A grayscale (CV_8UC1) input image. If only a roi needs to be selected, use: `fld_ptr->detect(image(roi), lines, ...); lines += Scalar(roi.x, roi.y, roi.x, roi.y);` A vector of Vec4f elements specifying the beginning and ending point of a line. Where Vec4f is (x1, y1, x2, y2), point 1 is the start and point 2 the end. Returned lines are directed so that the brighter side is on their left. Finds lines in the input image. This is the output of the default parameters of the algorithm on the above shown image. A grayscale (CV_8UC1) input image. If only a roi needs to be selected, use: `fld_ptr->detect(image(roi), lines, ...); lines += Scalar(roi.x, roi.y, roi.x, roi.y);` A vector of Vec4f elements specifying the beginning and ending point of a line. Where Vec4f is (x1, y1, x2, y2), point 1 is the start and point 2 the end. Returned lines are directed so that the brighter side is on their left. Draws the line segments on a given image. The image on which the lines will be drawn. Should be bigger than or equal to the image where the lines were found. A vector of the lines that need to be drawn. If true, arrow heads will be drawn. Draws the line segments on a given image. The image on which the lines will be drawn. Should be bigger than or equal to the image where the lines were found. A vector of the lines that need to be drawn. If true, arrow heads will be drawn. Helper class for the training part of [P. Dollar and C. L. Zitnick. Structured Forests for Fast Edge Detection, 2013]. Creates instance by raw pointer Releases managed resources Creates an RFFeatureGetter Extracts feature channels from src. StructuredEdgeDetection then uses this feature space to detect edges. source image to extract features output n-channel floating point feature matrix. gradientNormalizationRadius gradientSmoothingRadius shrinkNumber numberOfOutputChannels numberOfGradientOrientations Graph Based Segmentation Algorithm. The class implements the algorithm described in @cite PFF2004.
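Returning to the FastLineDetector documented above, a minimal detect-and-draw sketch (not from the original docs). The CvXImgProc.CreateFastLineDetector factory and the Detect/DrawSegments members are assumptions about the wrapper surface; the arguments follow the parameter order documented above:

```csharp
using OpenCvSharp;
using OpenCvSharp.XImgProc; // assumed namespace for the cv::ximgproc wrappers

static class FldSketch
{
    static void Main()
    {
        using (var gray = Cv2.ImRead("building.jpg", ImreadModes.Grayscale))
        // Arguments: length threshold, distance threshold, Canny threshold 1,
        // Canny threshold 2, Canny aperture size, incremental merge flag.
        using (var fld = CvXImgProc.CreateFastLineDetector(10, 1.414f, 50, 50, 3, false))
        using (var vis = new Mat())
        {
            // Each detected line is a Vec4f (x1, y1, x2, y2); the brighter side is on its left.
            Vec4f[] lines = fld.Detect(gray);

            Cv2.CvtColor(gray, vis, ColorConversionCodes.GRAY2BGR);
            fld.DrawSegments(vis, lines, false); // false: no arrow heads
            Cv2.ImWrite("lines.png", vis);
        }
    }
}
```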
Creates instance by raw pointer Releases managed resources Creates a graph based segmentor The sigma parameter, used to smooth the image The k parameter of the algorithm The minimum size of segments Segment an image and store output in dst The input image. Any number of channels (1 (e.g. Gray), 3 (e.g. RGB), 4 (e.g. RGB-D)) can be provided The output segmentation. It's a CV_32SC1 Mat with the same number of cols and rows as the input image, with a unique, sequential id for each pixel. Selective search segmentation algorithm. The class implements the algorithm described in @cite uijlings2013selective. Creates instance by raw pointer Releases managed resources Create a new SelectiveSearchSegmentation class. Set an image used by switch* functions to initialize the class The image Initialize the class with the 'Single strategy' parameters described in @cite uijlings2013selective. The k parameter for the graph segmentation The sigma parameter for the graph segmentation Initialize the class with the 'Selective search fast' parameters described in @cite uijlings2013selective. The k parameter for the first graph segmentation The increment of the k parameter for all graph segmentations The sigma parameter for the graph segmentation Initialize the class with the 'Selective search fast' parameters described in @cite uijlings2013selective. The k parameter for the first graph segmentation The increment of the k parameter for all graph segmentations The sigma parameter for the graph segmentation Add a new image to the list of images to process. The image Clear the list of images to process Add a new graph segmentation to the list of graph segmentations to process. The graph segmentation Clear the list of graph segmentations to process Add a new strategy to the list of strategies to process. The strategy Clear the list of strategies to process. Based on all images, graph segmentations and strategies, computes all possible rects and returns them The list of rects. The first ones are more relevant than the last ones. Strategy for the selective search segmentation algorithm. The class implements a generic strategy for the algorithm described in @cite uijlings2013selective. Creates instance by raw pointer Releases managed resources Set an initial image, with a segmentation. The input image. Any number of channels can be provided A segmentation of the image. The parameter must be the same size as img. The sizes of different regions If not set to -1, try to cache pre-computations. If the same set of (img, regions, size) is used, the image_id needs to be the same. Return the score between two regions (between 0 and 1) The first region The second region Inform the strategy that two regions will be merged The first region The second region Color-based strategy for the selective search segmentation algorithm. The class is implemented from the algorithm described in @cite uijlings2013selective. Creates instance by raw pointer Create a new color-based strategy Size-based strategy for the selective search segmentation algorithm. The class is implemented from the algorithm described in @cite uijlings2013selective. Creates instance by raw pointer Create a new size-based strategy Texture-based strategy for the selective search segmentation algorithm. The class is implemented from the algorithm described in @cite uijlings2013selective. Creates instance by raw pointer Create a new texture-based strategy Fill-based strategy for the selective search segmentation algorithm.
The class is implemented from the algorithm described in @cite uijlings2013selective. Creates instance by raw pointer Create a new fill-based strategy Regroup multiple strategies for the selective search segmentation algorithm Creates instance by raw pointer Set an initial image, with a segmentation. The input image. Any number of channels can be provided A segmentation of the image. The parameter must be the same size as img. The sizes of different regions If not set to -1, try to cache pre-computations. If the same set of (img, regions, size) is used, the image_id needs to be the same. Return the score between two regions (between 0 and 1) The first region The second region Inform the strategy that two regions will be merged The first region The second region Create a new multiple strategy Create a new multiple strategy and set one substrategy The first strategy Create a new multiple strategy and set one substrategy The first strategy The second strategy Create a new multiple strategy and set one substrategy The first strategy The second strategy The third strategy Create a new multiple strategy and set one substrategy The first strategy The second strategy The third strategy The fourth strategy Class implementing edge detection algorithm from @cite Dollar2013 : Creates instance by raw pointer Releases managed resources Creates a StructuredEdgeDetection name of the file where the model is stored optional object inheriting from RFFeatureGetter. You need it only if you would like to train your own forest, pass null otherwise Returns array containing proposal boxes. edge image. orientation map. proposal boxes. The function detects edges in src and draws them to dst. The algorithm underlying this function is much more robust to texture presence than common approaches, e.g. Sobel source image (RGB, float, in [0;1]) to detect edges destination image (grayscale, float, in [0;1]) where edges are drawn The function computes orientation from the edge image. edge image. orientation image. The function performs non-maximum suppression (edgenms) in the edge image and suppresses edges where the edge is stronger in the orthogonal direction. edge image from detectEdges function. orientation image from computeOrientation function. suppressed image (grayscale, float, in [0;1]) radius for NMS suppression. radius for boundary suppression. multiplier for conservative suppression. enables/disables parallel computing. cv::xphoto functions The function implements different single-image inpainting algorithms. Source image; it could be of any type and any number of channels from 1 to 4. In case of 3- and 4-channel images the function expects them in CIELab colorspace or a similar one, where the first color component shows intensity, while the second and third show colors. Nonetheless you can try any colorspace. mask (CV_8UC1), where non-zero pixels indicate valid image area, while zero pixels indicate area to be inpainted destination image see OpenCvSharp.XPhoto.InpaintTypes Implements an efficient fixed-point approximation for applying channel gains, which is the last step of multiple white balance algorithms. Input three-channel image in the BGR color space (either CV_8UC3 or CV_16UC3) Output image of the same size and type as src. gain for the B channel gain for the G channel gain for the R channel Creates an instance of GrayworldWB Creates an instance of LearningBasedWB Path to a .yml file with the model.
If not specified, the default model is used Creates an instance of SimpleWB The function implements simple dct-based denoising http://www.ipol.im/pub/art/2011/ys-dct/ source image destination image expected noise standard deviation size of block side where dct is computed Performs image denoising using the Block-Matching and 3D-filtering algorithm (http://www.cs.tut.fi/~foi/GCF-BM3D/BM3D_TIP_2007.pdf) with several computational optimizations.Noise expected to be a gaussian white noise. Input 8-bit or 16-bit 1-channel image. Output image of the first step of BM3D with the same size and type as src. Output image of the second step of BM3D with the same size and type as src. Parameter regulating filter strength. Big h value perfectly removes noise but also removes image details, smaller h value preserves details but also preserves some noise. Size in pixels of the template patch that is used for block-matching. Should be power of 2. Size in pixels of the window that is used to perform block-matching. Affect performance linearly: greater searchWindowsSize - greater denoising time. Must be larger than templateWindowSize. Block matching threshold for the first step of BM3D (hard thresholding), i.e.maximum distance for which two blocks are considered similar.Value expressed in euclidean distance. Block matching threshold for the second step of BM3D (Wiener filtering), i.e.maximum distance for which two blocks are considered similar. Value expressed in euclidean distance. Maximum size of the 3D group for collaborative filtering. Sliding step to process every next reference block. Kaiser window parameter that affects the sidelobe attenuation of the transform of the window.Kaiser window is used in order to reduce border effects.To prevent usage of the window, set beta to zero. Norm used to calculate distance between blocks. L2 is slower than L1 but yields more accurate results. Step of BM3D to be executed. Allowed are only BM3D_STEP1 and BM3D_STEPALL. BM3D_STEP2 is not allowed as it requires basic estimate to be present. Type of the orthogonal transform used in collaborative filtering step. Currently only Haar transform is supported. Performs image denoising using the Block-Matching and 3D-filtering algorithm (http://www.cs.tut.fi/~foi/GCF-BM3D/BM3D_TIP_2007.pdf) with several computational optimizations.Noise expected to be a gaussian white noise. Input 8-bit or 16-bit 1-channel image. Output image with the same size and type as src. Parameter regulating filter strength. Big h value perfectly removes noise but also removes image details, smaller h value preserves details but also preserves some noise. Size in pixels of the template patch that is used for block-matching. Should be power of 2. Size in pixels of the window that is used to perform block-matching. Affect performance linearly: greater searchWindowsSize - greater denoising time. Must be larger than templateWindowSize. Block matching threshold for the first step of BM3D (hard thresholding), i.e.maximum distance for which two blocks are considered similar.Value expressed in euclidean distance. Block matching threshold for the second step of BM3D (Wiener filtering), i.e.maximum distance for which two blocks are considered similar. Value expressed in euclidean distance. Maximum size of the 3D group for collaborative filtering. Sliding step to process every next reference block. 
Kaiser window parameter that affects the sidelobe attenuation of the transform of the window. Kaiser window is used in order to reduce border effects. To prevent usage of the window, set beta to zero. Norm used to calculate distance between blocks. L2 is slower than L1 but yields more accurate results. Step of BM3D to be executed. Allowed are only BM3D_STEP1 and BM3D_STEPALL. BM3D_STEP2 is not allowed as it requires basic estimate to be present. Type of the orthogonal transform used in collaborative filtering step. Currently only Haar transform is supported. BM3D algorithm steps Execute all steps of the algorithm Execute only the first step of the algorithm Execute only the second step of the algorithm various inpainting algorithms This algorithm searches for dominant correspondences (transformations) of image patches and tries to seamlessly fill in the area to be inpainted using these transformations BM3D transform types Un-normalized Haar transform Gray-world white balance algorithm. Creates an instance of GrayworldWB Maximum saturation for a pixel to be included in the gray-world assumption. Applies white balancing to the input image. Input image White balancing result More sophisticated learning-based automatic white balance algorithm. Creates an instance of LearningBasedWB Path to a .yml file with the model. If not specified, the default model is used Defines the size of one dimension of a three-dimensional RGB histogram that is used internally by the algorithm. It often makes sense to increase the number of bins for images with higher bit depth (e.g. 256 bins for a 12 bit image). Maximum possible value of the input image (e.g. 255 for 8 bit images, 4095 for 12 bit images) Threshold that is used to determine saturated pixels, i.e. pixels where at least one of the channels exceeds Applies white balancing to the input image. Input image White balancing result Implements the feature extraction part of the algorithm. Input three-channel image (BGR color space is assumed). An array of four (r,g) chromaticity tuples corresponding to the features listed above. A simple white balance algorithm that works by independently stretching each of the input image channels to the specified range. For increased robustness it ignores the top and bottom p% of pixel values. Creates an instance of SimpleWB Input image range maximum value. Input image range minimum value. Output image range maximum value. Output image range minimum value. Percent of top/bottom values to ignore. Applies white balancing to the input image. Input image White balancing result The base class for auto white balance algorithms. Applies white balancing to the input image. Input image White balancing result P/Invoke methods of OpenCV 2.x C++ interface Whether P/Invoke has already been attempted once Static constructor Load DLL files dynamically using Win32 LoadLibrary Checks whether PInvoke functions can be called Returns whether the OS is Windows or not Returns whether the OS is *nix or not Returns whether the runtime is Mono or not Custom error handler to be thrown by OpenCV Custom error handler to ignore all OpenCV errors Default error handler C++ std::string Releases unmanaged resources string.size() Converts std::string to managed string Win32API Wrapper Handles loading embedded dlls into memory, based on http://stackoverflow.com/questions/666799/embedding-unmanaged-dll-into-a-managed-c-sharp-dll. This code is based on https://github.com/charlesw/tesseract The default base directory name to copy the assemblies to.
Map processor Used as a sanity check for the returned processor architecture to double check the returned value. Additional user-defined DLL paths constructor Gets the current process architecture while keeping track of any assumptions or possible errors. Determines if the dynamic link library file name requires a suffix and adds it if necessary. Given the processor architecture, returns the name of the platform. Releases managed resources Releases unmanaged resources Class to get address of specified jagged array Releases unmanaged resources Name of library to be loaded Name of function to be called Pointer retrieved by LoadLibrary Pointer retrieved by GetProcAddress Delegate which is converted from the function pointer Constructor Name of library Name of function Releases unmanaged resources IEnumerable<T> extension methods for .NET Framework 2.0 Enumerable.Select Enumerable.Select -> ToArray Enumerable.Select -> ToArray Enumerable.Select -> ToArray Enumerable.Select -> ToArray Enumerable.Where Enumerable.Where -> ToArray Enumerable.ToArray Enumerable.Any Enumerable.Any Enumerable.All Enumerable.Count Enumerable.Count Checks whether PInvoke functions can be called Method called when a DllNotFoundException or BadImageFormatException occurs during DllImport. Displays an error message and shows the user a possible solution. Provides information for the platform which the user is using OS type Runtime type Original GCHandle that implements IDisposable Destructor Class to get address of string array Class that converts a structure into a pointer and cleans up resources automatically (generic version) Pointer Structure Size of allocated memory Substitute of System.Action Represents std::vector vector.size() &vector[0] Convert std::vector<T> to managed array T[] Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Converts std::vector to managed array Increments the reference count of each element by 1. Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Converts std::vector to managed array structure that has two float members (ex.
CvLineSegmentPolar, CvPoint2D32f, PointF) Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Converts std::vector to managed array structure that has two float members (ex. CvLineSegmentPolar, CvPoint2D32f, PointF) Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Converts std::vector to managed array structure that has four int members (ex. CvLineSegmentPoint, CvRect) Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Converts std::vector to managed array structure that has four int members (ex. CvLineSegmentPoint, CvRect) Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Converts std::vector to managed array structure that has four int members (ex. CvLineSegmentPoint, CvRect) Releases unmanaged resources vector.size() &vector[0] Converts std::vector to managed array Converts std::vector to managed array structure that has four int members (ex. CvLineSegmentPoint, CvRect) Releases unmanaged resources vector.size() vector[i].size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() vector[i].size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() vector[i].size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() vector[i].size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() vector[i].size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() vector.size() &vector[0] Converts std::vector to managed array Releases unmanaged resources vector.size() vector[i].size() &vector[0] Converts std::vector to managed array