project01:W12022G3P4
We start with a photograph of our pieces on a sheet of black A3 paper, then use a Python script to measure the sizes and centre points of all the pieces. The Google Colab notebook is linked at the bottom of this page.
Revision as of 16:58, 4 April 2022

FInal-banner-13.png

Group 3: Fabio Sala - Thomas Kaasschieter - Yiyin Yu - Yu Chen - Jakob Norén





Computer Vision

In the Computer Vision session, we explored using scripts to solve image-related problems. By calling different libraries, a script can perform a range of operations, such as displaying, resizing, and filtering an image. We also gained a better understanding of how a computer reads an image, which is very different from how humans perceive one.
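As a small illustration of the idea that an image is just an array of pixel values, here is a minimal NumPy-only sketch (not the session's actual script) of one of the simplest filters, a 3×3 mean (box) blur:

```python
# Illustration only: an image is a 2-D array, so "filtering" is arithmetic
# on pixels. A 3x3 mean (box) filter is about the simplest example.
import numpy as np

def box_filter(img: np.ndarray) -> np.ndarray:
    """Blur a 2-D grayscale image with a 3x3 mean filter (edges cropped)."""
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2), dtype=float)
    # Sum the nine shifted copies of the image, then average.
    for dy in range(3):
        for dx in range(3):
            out += img[dy:dy + out.shape[0], dx:dx + out.shape[1]]
    return out / 9.0

img = np.zeros((5, 5))
img[2, 2] = 9.0         # one bright pixel on a dark background
print(box_filter(img))  # the brightness spreads over the 3x3 neighbourhood
```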

The Python script developed in the CV session can be used to create the visual link between the node members and the robotic arm, which is crucial for the later HRI part. A photo of the node and beams, placed without overlapping each other, is taken as input. The script detects the boundary of the background table and the boundary of each member. Given the measured dimensions of the actual table, a pixels-per-metric transformation tells the computer the real size of each member. The target member can then be selected based on its size, or on the coordinates of its centroid.



20220324 165629.jpg


Canvas size

CV canvas size.png


Measure the borders of the objects

CV borders.png


Measure the sizes of each object

CV piece sizes.png


Measure the size of one object

CV one piece size.png


Measure the centre of one object

CV one piece centroid.png
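The centre point of a piece follows from its image moments: the centroid is (M10/M00, M01/M00) over the piece's binary mask. This NumPy version mirrors what OpenCV's `cv2.moments()` computes, as an illustration only:

```python
# Centroid of a binary mask via image moments:
# cx = M10 / M00, cy = M01 / M00, where M00 is the foreground area.
import numpy as np

def centroid(mask: np.ndarray) -> tuple[float, float]:
    """Return (cx, cy) of the foreground pixels in a binary mask."""
    ys, xs = np.nonzero(mask)
    m00 = xs.size  # zeroth moment: number of foreground pixels
    return xs.sum() / m00, ys.sum() / m00

# A 9 x 9 mask with a 3 x 3 square centred at (4, 4).
mask = np.zeros((9, 9), dtype=np.uint8)
mask[3:6, 3:6] = 1
print(centroid(mask))  # the centre of the square, (4.0, 4.0)
```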


Python script for CV: Google Colab Notebook