How to undistort points in camera shot coordinates and obtain corresponding undistorted image coordinates?


I use OpenCV to undistort a set of points after camera calibration. The code follows.

const int npoints = 2; // number of points specified

// Points initialization.
// Only 2 points in this example; in real code they are read from a file.
float input_points[npoints][2] = {{0,0}, {2560, 1920}};

CvMat * src = cvCreateMat(1, npoints, CV_32FC2);
CvMat * dst = cvCreateMat(1, npoints, CV_32FC2);

// fill src matrix
float * src_ptr = (float*)src->data.ptr;
for (int pi = 0; pi < npoints; ++pi) {
    for (int ci = 0; ci < 2; ++ci) {
        *(src_ptr + pi * 2 + ci) = input_points[pi][ci];
    }
}

cvUndistortPoints(src, dst, &camera1, &distCoeffs1);

After the code above, dst contains the following numbers:

-8.82689655e-001 -7.05507338e-001 4.16228324e-001 3.04863811e-001

which are far too small compared with the numbers in src.

At the same time, if I undistort the whole image via the call:

cvUndistort2( srcImage, dstImage, &camera1, &distCoeffs1 );

I receive a good undistorted image, which means that the pixel coordinates are not modified nearly as drastically as the separate points are.

How can I obtain the same undistortion for specific points as for images? Thanks.

The points should be "unnormalized" using the camera matrix.

More specifically, after the call to cvUndistortPoints the following transformation should also be applied:

double fx = CV_MAT_ELEM(camera1, double, 0, 0);
double fy = CV_MAT_ELEM(camera1, double, 1, 1);
double cx = CV_MAT_ELEM(camera1, double, 0, 2);
double cy = CV_MAT_ELEM(camera1, double, 1, 2);

float * dst_ptr = (float*)dst->data.ptr;
for (int pi = 0; pi < npoints; ++pi) {
    float& px = *(dst_ptr + pi * 2);
    float& py = *(dst_ptr + pi * 2 + 1);
    // Perform the transformation.
    // This is equivalent to multiplying by the camera matrix.
    px = px * fx + cx;
    py = py * fy + cy;
}

More info on the camera matrix can be found in the OpenCV 'Camera Calibration and 3D Reconstruction' documentation.

UPDATE:

The following C++ function call should work as well:

std::vector<cv::Point2f> inputDistortedPoints = ...
std::vector<cv::Point2f> outputUndistortedPoints;
cv::Mat cameraMatrix = ...
cv::Mat distCoeffs = ...

cv::undistortPoints(inputDistortedPoints, outputUndistortedPoints, cameraMatrix, distCoeffs, cv::noArray(), cameraMatrix);

I also ran into this problem, and it took me some time to research it and finally understand.

<img src="https://i.stack.imgur.com/nmR5P.jpg"></img>

In fact, (x, y, z) are the coordinates of a 3D point in the camera coordinate space. Before explaining how I use it, I'd like to give you the pinhole camera model with distortion taken into account, as in the equations above. In my application, I use it to do back-projection of a 2D image point to a 3D model point, and then I calculate the world coordinates of the undistorted corresponding points.

Comments
  • are you sure camera1 and camera are the same?
  • Yes, the same; I have fixed the question. Thank you for pointing it out. Initially I observed this behaviour in EmguCV and then tried to figure out where it was introduced: in OpenCV or in EmguCV itself. It turned out to be OpenCV.
  • What do you mean when you say 1 channel? I use the constant CV_32FC2 in cvCreateMat, which means 2 channels. Thanks.
  • I have looked at the source code of the undistortPoints() function, and the documentation is wrong. Please look at the edit above for details.