
Remove And Measure A Line Opencv

Links to all images at the bottom. I have drawn a line over an arrow which captures the angle of that arrow. I would like to then remove the arrow, keep only the line, and use cv2.m…

Solution 1:

Here's a possible solution. The main idea is to identify the "tip" and the "tail" of the arrow by approximating some key points. Once you have identified both ends, you can draw a line joining the two points. It is also an advantage to know which of the endpoints is the tip, because that way you can measure the angle from a constant reference point.

There's more than one way to achieve this. I chose something I have applied in the past: I will use this approach to identify the endpoints of the overall shape. My assumption is that the tip will yield more endpoints than the tail. After that, I'll cluster all the endpoints into two groups: tip and tail. K-Means is a good fit for that, as it returns the mean centers of both clusters. Once we have the tip and tail points, they can be joined easily with a line. These are the steps:

  1. Convert the image to grayscale
  2. Get the skeleton of the image, to normalize the shape to a width of 1 pixel
  3. Apply the method described in the link to get the arrow's endpoints
  4. Divide the endpoints in two clusters and use K-Means to get their centers
  5. Join both endpoints with a line

Let's see the code:

# imports:
import cv2
import numpy as np

# image path
path = "D://opencvImages//"
fileName = "CoXeb.png"# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)

# Grayscale conversion:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)

# Invert, so the arrow is white on a black background
# (thinning expects a white foreground):
grayscaleImage = 255 - grayscaleImage

# Extend the borders for the skeleton:
extendedImg = cv2.copyMakeBorder(grayscaleImage, 5, 5, 5, 5, cv2.BORDER_CONSTANT)

# Store a color (BGR) copy of the extended image to draw results on:
grayscaleImageCopy = cv2.cvtColor(extendedImg, cv2.COLOR_GRAY2BGR)

# Compute the skeleton:
skeleton = cv2.ximgproc.thinning(extendedImg, None, 1)

The first step is to get the skeleton of the arrow. As I said, this step is needed prior to the convolution-based method that identifies the endpoints of a shape. Computing the skeleton normalizes the shape to a one pixel width. However, sometimes, if the shape is too close to the "canvas" borders, the skeleton could show some artifacts. This is avoided with a border extension. The skeleton of the arrow is this:
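A side note (my addition, not part of the original answer): cv2.ximgproc.thinning comes from the opencv-contrib-python package. If that module is not available in your environment, a rough substitute is scikit-image's skeletonize, applied to the same border-extended image:

# Sketch of a fallback, assuming scikit-image is installed. skeletonize
# expects a boolean mask where the shape is True (white):
from skimage.morphology import skeletonize

boolMask = extendedImg > 128
skeleton = (skeletonize(boolMask) * 255).astype(np.uint8)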

Check that image out. If we identify the endpoints, the tip will exhibit at least 3 points, while the tail at least 1. That's handy - the tip will always have more points than the tail. If only we could detect those points... Luckily, we can:

# Threshold the skeleton so that the (white) skeleton pixels get a
# value of 10 and the background pixels a value of 0:
_, binaryImage = cv2.threshold(skeleton, 128, 10, cv2.THRESH_BINARY)

# Set the end-points kernel:
h = np.array([[1, 1, 1],
              [1, 10, 1],
              [1, 1, 1]])

# Convolve the image with the kernel:
imgFiltered = cv2.filter2D(binaryImage, -1, h)

# Extract only the end-point pixels, those with
# an intensity value of 110:
binaryImage = np.where(imgFiltered == 110, 255, 0)

# np.where promotes the result to a wider integer type,
# convert back to 8-bit uint for OpenCV:
binaryImage = binaryImage.astype(np.uint8)

This endpoint detecting method convolves the skeleton with a special kernel that identifies endpoints. It returns an image where all the endpoint pixels have the value 110. After keeping only the 110-valued pixels of this intermediate result, we get this image, which represents the arrow endpoints:
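A quick sanity check (my addition, not part of the original answer) makes the 110 value easy to see: on a skeleton valued 10 over a 0 background, an endpoint contributes 10 × 10 for its own pixel plus 10 for its single neighbor, while an interior line pixel has two neighbors and scores 120:

# Minimal sketch: a 3-pixel horizontal segment, values 10 on a 0 background.
import cv2
import numpy as np

toy = np.zeros((5, 7), np.uint8)
toy[2, 2:5] = 10                          # three "skeleton" pixels in a row

h = np.array([[1, 1, 1],
              [1, 10, 1],
              [1, 1, 1]], dtype=np.float32)

response = cv2.filter2D(toy, -1, h)
print(response[2, 2:5])                   # -> [110 120 110]: only the ends hit 110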

Nice, as you see, we can group the points in two clusters and get their cluster centers. Sounds like a job for K-Means, because that's exactly what it does. We first need to prepare our data, though, because K-Means operates on float arrays with a defined shape:

# Find the (X, Y) locations of all the
# end-point pixels:
Y, X = binaryImage.nonzero()

# Reshape the arrays for K-means
Y = Y.reshape(-1,1)
X = X.reshape(-1,1)
Z = np.hstack((X, Y))

# K-means operates on 32-bit float data:
floatPoints = np.float32(Z)

# Set the convergence criteria and call K-means:
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
ret, label, center = cv2.kmeans(floatPoints, 2, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

# Count the points belonging
# to cluster 0 and cluster 1:
cluster1Count = np.count_nonzero(label)
cluster0Count = np.shape(label)[0] - cluster1Count

print("Elements of Cluster 0: "+str(cluster0Count))
print("Elements of Cluster 1: " + str(cluster1Count))

The last two lines print the number of endpoints assigned to Cluster 0 and Cluster 1, respectively. That outputs this:

Elements of Cluster 0: 3
Elements of Cluster 1: 2

Just as expected - well, kinda. Seems that Cluster 0 is the tip and Cluster 1 the tail! But the tail actually got 2 points. If you look at the image of the skeleton closely, you'll see there's a small bifurcation at the tail. That's why we, in reality, got two points instead of just one. Alright, let's get the center points and draw them on the original input:

# Look for the cluster with the max number of points.
# That cluster will be the tip of the arrow:
maxCluster = 0
if cluster1Count > cluster0Count:
    maxCluster = 1

# Check out the centers of each cluster:
matRows, matCols = center.shape

# Store the ordered end-points here:
orderedPoints = [None] * 2

# Let's identify and draw the two end-points
# of the arrow:
for b in range(matRows):
    # Get cluster center:
    pointX = int(center[b][0])
    pointY = int(center[b][1])
    # Get the "tip"
    if b == maxCluster:
        color = (0, 0, 255)
        orderedPoints[0] = (pointX, pointY)
    # Get the "tail"
    else:
        color = (255, 0, 0)
        orderedPoints[1] = (pointX, pointY)
    # Draw it:
    cv2.circle(grayscaleImageCopy, (pointX, pointY), 3, color, -1)
    cv2.imshow("End-Points", grayscaleImageCopy)
    cv2.waitKey(0)

This is the resulting image:

The tip always gets drawn in red while the tail is drawn in blue. Very cool, now let's take these points from the orderedPoints list and draw the final line on a new "canvas" with the same dimensions as the original image:

# Retrieve the tip and tail points:
p1x, p1y = orderedPoints[0]   # tip
p0x, p0y = orderedPoints[1]   # tail

# Create a new "canvas" (image) using the input dimensions:
imageHeight, imageWidth = binaryImage.shape[:2]
newImage = np.zeros((imageHeight, imageWidth), np.uint8)
newImage = 255 - newImage

# Draw a line using the detected points:
lineColor = (0, 0, 0)
cv2.line(newImage, (p1x, p1y), (p0x, p0y), lineColor, thickness=2)

cv2.imshow("Detected Line", newImage)
cv2.waitKey(0)

The line overlaid on the original image and the new image containing only the line:
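As mentioned at the start, knowing which endpoint is the tip lets you measure the angle from a constant reference point. The original question text is cut off above, but as a possible final step (my addition, not part of the original answer) the angle could be computed directly from the two cluster centers, without any further image processing:

# Sketch: angle of the tail -> tip vector, in degrees. The Y difference is
# negated because image coordinates grow downwards:
import math

(tipX, tipY) = orderedPoints[0]       # tip (drawn in red)
(tailX, tailY) = orderedPoints[1]     # tail (drawn in blue)

arrowAngle = math.degrees(math.atan2(-(tipY - tailY), tipX - tailX))
print("Arrow angle (degrees): " + str(arrowAngle))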

Solution 2:

It sounds like you want to measure the angle of the line, but because you are measuring a line you drew onto the original image, you must now filter the original image to get an accurate measure of that line... a line which you drew with coordinates whose endpoints you already know?

I guess:

  • make a better filter?
  • draw the line in a blank image and detect angle there?
  • determine the angle from the known coordinates?

Since you were asking for just a line, I tried that...just made a blank image, drew your detected line on it and then used that downstream...

# Assumes height, width, channels, cols, and the "leftish"/"rightish"
# line coordinates come from the asker's original code:
blankIm = np.ones((height, width, channels), dtype=np.uint8)
blankIm.fill(255)
line = cv2.line(blankIm, (cols - 1, rightish), (0, leftish), (0, 255, 0), 10)
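If you do want to measure the angle downstream from that blank image, one possible way (a sketch under my own assumptions, not something spelled out in this answer) is to isolate the drawn line and fit a rotated rectangle around its pixels:

# Sketch, assuming blankIm from above: threshold the green line out of the
# white canvas, then fit a rotated rectangle to the line pixels.
gray = cv2.cvtColor(blankIm, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)   # line -> white
linePixels = cv2.findNonZero(mask)
(center, size, angle) = cv2.minAreaRect(linePixels)

# Note: this is OpenCV's rotated-rect angle convention (which changed between
# OpenCV versions), not a plain trigonometric angle:
print("minAreaRect angle: " + str(angle))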

