Method and device for a practical 3D vision system

Publication number: DE112010002174B4
Authority: DE (Germany)
Prior art keywords: interest, camera, double, area, cameras
Legal status: Active
Application number: DE112010002174.0T
Other languages: German (de)
Other versions: DE112010002174T5 (en)
Inventors: Aaron Wallack, David Michael
Current assignee: Cognex Technology and Investment LLC
Original assignee: Cognex Technology and Investment LLC
Events:
Priority to US 12/474,778 (published as US 9,533,418 B2)
Application filed by Cognex Technology and Investment LLC
Priority to PCT/US2010/036171 (published as WO 2010/138565 A2)
Publication of DE112010002174T5
Application granted
Publication of DE112010002174B4
Status: Active (current)
Anticipated expiration


Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B25 — HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J — MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 — Program-controlled manipulators
    • B25J 9/16 — Program controls
    • B25J 9/1694 — Program controls characterized by use of sensors other than normal servo feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 — Vision controlled systems
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/0002 — Inspection of images, e.g. flaw detection
    • G06T 7/0004 — Industrial image inspection
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 — Special algorithmic details
    • G06T 2207/20092 — Interactive image processing based on input by user
    • G06T 2207/20104 — Interactive definition of region of interest [ROI]

Description

  • The invention relates to machine vision, and more particularly to three-dimensional (3D) machine vision. The invention finds application in manufacturing, quality control and robotics, to mention only a few areas.
  • Machine vision refers to the automated analysis of images to determine the properties of objects shown in them. It is often used on automated production lines, where images of components are analyzed to facilitate part picking and to determine part placement and orientation for assembly. When robots are the means for automated assembly and automated image analysis is used to facilitate part picking, placement and alignment, the system is referred to as vision-guided robotics. Machine vision is also used for robot navigation, for example to ensure that scenes are recognized as robots move through their environments.
  • Although three-dimensional (3D) analysis has long been discussed in the literature, most machine vision systems today rely on two-dimensional (2D) image analysis. This usually requires that objects under inspection be "presented" to the vision system in constrained orientations and locations. A conveyor belt is commonly used for this purpose. Objects to be assembled or inspected are typically placed on the belt in a known, stable 3D configuration, but at an unknown position and orientation, and are moved into the vision system's field of view. Based on an object's 2D pose (i.e., position and orientation) in the field of view, and taking into account that it rests on the conveyor (which fixes its "height" and distance from the vision system camera), the system applies 2D geometry to determine the object's exact 3D pose and/or its conformity with an expected appearance.
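
To make the conveyor-belt case above concrete, the following is a minimal sketch, assuming an idealized camera looking straight down at the belt from a known height h, with intrinsic matrix K; the function name and this simplified geometry are illustrative assumptions, not taken from the patent.

    import numpy as np

    def pose_2d_to_3d(K, h, u, v, theta):
        """Convert a 2D pose (pixel position u, v and in-plane angle theta)
        to a 3D pose for a camera looking straight down at a conveyor belt
        from height h. By similar triangles, a pixel offset of one focal
        length corresponds to h units on the belt; z, pitch and roll are
        fixed by the belt plane, and theta carries over as yaw."""
        fx, fy = K[0, 0], K[1, 1]
        cx, cy = K[0, 2], K[1, 2]
        x = (u - cx) * h / fx                  # belt coords, origin under camera
        y = (v - cy) * h / fy
        return np.array([x, y, 0.0]), theta    # position on belt plane + yaw
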
  • Examples in which such 2D analysis is used appear in the prior work of the present assignee, including the US patents entitled "Methods and apparatus for machine vision inspection using single and multiple templates or patterns", "Machine vision methods for inspection of leaded components", "Nonfeedback-based machine vision methods for determining a calibration relationship between a camera and a moveable object", "Machine vision calibration targets and methods of determining their location and orientation in an image", "Machine vision methods using feedback to determine calibration locations of multiple cameras that image a common object", "Machine vision methods using feedback to determine an orientation, pixel width and pixel height of a field of view", "Fast high-accuracy multi-dimensional pattern localization" and "Fast high-accuracy multi-dimensional pattern inspection", to name just a few.
  • With increasing reliance on robotics, the need for practical 3D vision systems has emerged everywhere from factory floors to private homes. This is because, in many of these environments, items under inspection are not necessarily constrained in overall position and orientation, as may otherwise be the case, for example, with items presented on a conveyor belt. That is, the precise 3D configuration of the object may be unknown.
  • To accommodate the additional degrees of freedom of pose and position in a 3D scene, 3D vision tools are helpful, if not necessary. Examples of these are the US patents entitled "System and method for registering patterns transformed in six degrees of freedom using machine vision" and "System and method for determining the position of an object in three dimensions using a machine vision system with two cameras".
  • A method and a device for a practical 3D vision system are described in the prior art. Other methods and devices for vision systems are likewise known from further prior-art documents.
  • Other machine vision techniques have been suggested in the art. Some require too much processing power to be practical for real-time application. Others require that objects under inspection go through complex registration procedures and/or that, at runtime, many of the objects' features be simultaneously visible in the machine vision system's field of view.
  • Outside the realm of machine vision, the art also provides contact-based methods for determining 3D poses, such as the use of an x, y, z measuring machine with a touch sensor. However, this requires contact, is relatively slow, and may require manual intervention. Methods based on electromagnetic waves for determining 3D poses have also been offered. These do not require physical contact, but suffer from separate drawbacks, such as requiring the often impractical step of attaching transmitters to objects under inspection.
  • An object of the present invention is to provide improved methods and devices for machine vision and in particular for machine vision in 3D.
  • A related object of the present invention is to provide such methods and devices that have a range of practical applications including, but not limited to, manufacturing, quality control, and robotics.
  • Another related object of the invention is to provide such methods and devices that allow, for example, the determination of position and pose in three-dimensional space.
  • Yet another related object of the invention is to provide such methods and devices that impose reduced restrictions, for example on overall position and orientation, on items being inspected.
  • Yet another related object of the invention is to provide such methods and devices that minimize requirements for registration of items that are subject to inspection.
  • A still further related object of the invention is to provide such methods and devices that can be implemented in current and future machine vision platforms.
  • The objects set out above are among those achieved by the invention, which provides, inter alia, methods and devices for determining the pose, i.e., the position along the x, y and z axes together with pitch, roll and yaw (or one or more properties of that pose), of an object in three dimensions by triangulating data gathered from multiple images of the object.
  • Thus, for example, in one aspect the invention provides a method for 3D machine vision in which, during a calibration step, multiple cameras arranged to capture images of the object from different respective viewpoints are calibrated so as to derive a mapping function that identifies the rays in 3D space emanating from each respective camera's lens that correspond to pixel locations in that camera's field of view. In a training step, functionality associated with the cameras is trained to recognize expected patterns in images to be captured of the object. A runtime step triangulates the locations in 3D space of one or more of these patterns from the pixel-wise positions of those patterns in images of the object and from the mappings derived during the calibration step; an illustrative pixel-to-ray computation is sketched below.
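
The pixel-to-ray mapping described above can be illustrated with a minimal sketch, assuming a standard pinhole model with intrinsic matrix K and world-to-camera pose (R, t); the function name and conventions are illustrative assumptions, not taken from the patent.

    import numpy as np

    def pixel_to_ray(K, R, t, u, v):
        """Back-project pixel (u, v) to a 3D ray in world coordinates.

        Assumes a pinhole camera where a world point X images at
        x ~ K (R X + t). Returns the ray's origin (the camera center)
        and its unit direction, both in world coordinates."""
        origin = -R.T @ t                               # camera center: C = -R^T t
        d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
        direction = R.T @ d_cam                         # rotate ray into world frame
        return origin, direction / np.linalg.norm(direction)
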
  • Further aspects of the invention provide methods as described above, in which the runtime step triangulates locations from images of the object that were recorded essentially simultaneously by the multiple cameras.
  • Still further aspects of the invention provide such methods that include a recalibration step in which runtime images of the object are used to re-derive the above-mentioned mapping function, for example for a camera that has fallen out of calibration. Thus, for example, when one camera produces images in which the patterns appear to lie at locations that are inconsistent with the images from the other cameras (for example, when mapped onto that camera's 3D rays) and/or substantially fail to match them (for example, when mapped via their respective 3D rays), the pattern locations determined from those other cameras' images are used to recalibrate the one camera; one simple consistency measure is sketched below.
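
As one hedged illustration of the consistency test just described, the perpendicular distance from a point triangulated by the other cameras to the suspect camera's viewing ray can serve as a drift indicator; the function name and the idea of thresholding this distance are assumptions, not the patent's prescribed mechanism.

    import numpy as np

    def ray_point_distance(point, origin, direction):
        """Perpendicular distance from a 3D point (e.g. a pattern location
        triangulated by the other cameras) to one camera's viewing ray.
        A persistently large distance suggests that this camera has
        drifted out of calibration and should be recalibrated."""
        d = direction / np.linalg.norm(direction)
        r = point - origin
        return np.linalg.norm(r - (r @ d) * d)
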
  • Still further aspects of the invention provide methods as described above in which the calibration step involves positioning registration targets (such as center marks, crosshairs or the like, for example on calibration plates or otherwise) at known positions in 3D space and recording, or otherwise characterizing, for example algorithmically, the correlations between those positions and the pixel-wise locations of the respective targets in the cameras' fields of view; one conventional way of characterizing such correlations is sketched below. Related aspects of the invention provide such methods in which one or more of these registration targets, calibration plates, etc. are used to calibrate multiple cameras at the same time, for example by means of simultaneous imaging.
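
The sketch below fits a 3x4 projection matrix to known target positions and their observed pixel locations via a direct linear transform (DLT); the patent does not prescribe this particular method, so this is an illustrative assumption.

    import numpy as np

    def estimate_projection_dlt(world_pts, image_pts):
        """Direct linear transform: fit a 3x4 projection matrix P with
        x ~ P X from known 3D target positions (Nx3) and their observed
        pixel locations (Nx2). Needs at least 6 non-degenerate points."""
        A = []
        for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
            A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
            A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
        # The smallest right singular vector minimizes ||A p|| with ||p|| = 1
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        return Vt[-1].reshape(3, 4)
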
  • Other aspects of the invention provide methods as described above in which the calibration step includes deriving, for each camera, a mapping function that takes distortion in the field of view into account.
  • Further aspects of the invention include methods as described above in which the training step involves training the functionality associated with the cameras to recognize expected patterns such as letters, numbers, other symbols (such as registration targets), corners or other perceptible features (such as dark and light spots) of the object, for which measurement techniques and search/detection models are known in the art.
  • Further related aspects of the invention provide such methods in which the training step involves training the above-mentioned functionality with respect to "model points", i.e., the expected locations in 3D space of the patterns (for example, absolute or relative) on the objects to be inspected at runtime. In combination with the triangulated 3D locations discerned from runtime images, this information can be used during the runtime step to discern the pose of the object; one standard way of fitting such a pose is sketched below.
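
A standard least-squares fit of a rigid pose from trained model points and triangulated runtime points is the Kabsch algorithm; the sketch below is an assumption about implementation, not the patent's prescribed method.

    import numpy as np

    def fit_rigid_pose(model_pts, observed_pts):
        """Least-squares rigid transform (R, t) aligning trained model
        points to triangulated runtime points, both Nx3 in the same
        order: observed ~ R @ model + t (Kabsch algorithm)."""
        mc = model_pts.mean(axis=0)
        oc = observed_pts.mean(axis=0)
        H = (model_pts - mc).T @ (observed_pts - oc)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = oc - R @ mc
        return R, t

Applying this to the subset of model points whose patterns were found yields the object's pose: R carries pitch, roll and yaw, and t the position along the x, y and z axes.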
  • In accordance with aspects of the invention, training for the expected locations of the patterns (i.e., the model points) includes finding the 2D pose of a reference point (or "origin") of each such pattern. For patterns expected to appear in the fields of view of two or more cameras, such reference points facilitate triangulation, as described below, for purposes of determining the position of these patterns (and therefore of the object) in 3D space.
  • Related aspects of the invention provide such methods in which the training with regard to expected patterns includes using, within the functionality associated with each camera, the same models for training the same expected patterns as between different cameras. This has the advantage of ensuring that the reference points (or origins) for patterns found at runtime coincide, for example between images obtained from these different cameras.
  • Further related aspects of the invention provide such methods in which training for expected patterns includes using, within the functionality associated with each camera, different models for the same pattern as between different cameras. This makes it easier to find patterns, for example where pose, viewing angle and/or obstructions change the way in which different cameras image those patterns.
  • Related aspects of the invention provide such methods that include training the selection of reference points (or origins) of such modeled patterns. Such training can be performed, for example, by an operator using a laser pointer or otherwise, to ensure that these reference points (or origins) coincide between images obtained from the different cameras.
  • Related aspects of the invention provide such methods in which the training step includes discerning the locations of the patterns, for example using a triangulation methodology similar to that used during the runtime phase. Alternatively, the expected (relative) locations of the patterns can be entered by operators and/or discerned by other measurement methodologies.
  • Further related aspects of the invention provide such methods in which the training step includes finding an expected pattern in an image from one (or more) camera(s) based on prior identification of that pattern in an image from another camera. For example, once the operator has identified an expected pattern in an image captured by one camera, the training step can automatically include finding that same pattern in images from the other cameras; one plausible way to constrain such a cross-camera search is sketched below.
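
One plausible mechanism for the cross-camera search, assumed here purely for illustration, is to project the first camera's viewing ray into the second camera's image and search only along that trace (the epipolar line); P_b, the depth sampling and the function name are all hypothetical.

    import numpy as np

    def epipolar_search_trace(P_b, origin_a, dir_a, depths):
        """Project camera A's viewing ray into camera B's image: sample
        points along the ray at candidate depths and project each with
        camera B's 3x4 projection matrix P_b. The returned pixel trace
        (the epipolar line) bounds where the same pattern can appear."""
        trace = []
        for s in depths:
            X = origin_a + s * dir_a
            u, v, w = P_b @ np.append(X, 1.0)
            trace.append((u / w, v / w))
        return trace
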
  • Still further aspects of the invention provide methods as described above in which the training step captures multiple views of the object for each camera, preferably such that the origins of the patterns found on these objects are consistently defined. To account for possible inconsistency among the images, those that produce the highest match score for the patterns can be used. This has the advantage of making the methodology more robust to finding parts in arbitrary poses.
  • In still other aspects of the invention, the runtime step includes triangulating the position of one or more of the patterns in runtime images, for example using pattern matching or other two-dimensional vision tools, and using the mappings derived during the calibration phase to correlate the pixel-wise locations of those patterns in the respective cameras' fields of view with the above-mentioned 3D rays on which those patterns lie.
  • According to related aspects of the invention, the triangulation of a pattern's location can be "direct", such as where the location of a given pattern is determined from the point of intersection (or the point of least-squares best fit) of multiple 3D rays (from multiple cameras) on which that pattern lies; such a ray intersection is sketched below. Alternatively or in addition, the triangulation can be "indirect", such as where the location of a given pattern is determined not only from the ray (or rays) on which that pattern lies, but also from (i) the rays on which the other patterns lie and (ii) the relative locations of those patterns to one another (for example, as determined during the training phase).
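
The "direct" case reduces to finding the point of least-squares best fit to several rays. A minimal sketch, assuming rays given by origins and unit directions:

    import numpy as np

    def triangulate_rays(origins, directions):
        """Point minimizing the summed squared distance to a set of 3D
        rays, each given by an origin o_i and direction d_i (direct
        triangulation). Solves sum_i (I - d_i d_i^T) (p - o_i) = 0."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = d / np.linalg.norm(d)
            M = np.eye(3) - np.outer(d, d)   # projector orthogonal to d
            A += M
            b += M @ o
        return np.linalg.solve(A, b)

With two or more well-separated cameras, A is well conditioned; nearly parallel rays make the solve unstable, which is one reason differing viewpoints matter.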
  • Other aspects of the invention provide methods as described above in which the functionality associated with the cameras is "interrupted" if it cannot find an expected pattern in an image of an object, whether during training or at runtime, thereby avoiding undue delay in determining position, for example where such a pattern is missing, occluded or otherwise not detected.
  • Still other aspects of the invention correspond to the methods described above in which ID matrix codes (or other patterns whose appearance and/or positions are predefined or otherwise known) are used in place of the patterns discussed above. In these aspects of the invention, the training step is eliminated or reduced. Instead, the 2D positions of these codes (or other patterns) can be discerned from training-phase or runtime images, for example by vision tools designed for generic feature types, in order to map them to 3D locations.
  • Still further aspects of the invention provide machine vision systems, for example including digital processing functionality and cameras, that operate in accordance with the above methods. These and other aspects of the invention are apparent from the drawings and from the description below.
  • Yet another related aspect of the invention provides such methods and apparatus that allow an object to be inspected, e.g., to determine and validate the relative positions of its portions. Such methods and devices can be used, by way of non-limiting example, to support testing and verification, for example during assembly, quality assurance, maintenance or other operations.
  • Further related aspects of the invention provide such methods and apparatus that infer the absence or misplacement of a part (or other portion) of an object when one or more expected patterns (e.g., associated with that part/portion) are missing from runtime images, or are present in those images but at pixel locations that map to 3D locations that are not expected or desired.
  • Still further related aspects of the invention provide such methods and apparatus wherein, during the runtime step, the positions of parts or other portions of the object are determined based on subsets of the 3D locations corresponding to patterns found in runtime images, and these 3D locations are used to determine the expected locations of still further patterns. The expected locations of those further patterns can be compared with their actual 3D locations as determined, for example, from the runtime images. Where position differences identified in this comparison exceed a predetermined tolerance, the system can generate suitable notifications (for example, to the operator); a minimal version of such a check is sketched below.
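
A minimal version of the tolerance comparison described above might look as follows; the tolerance value, the millimeter units and the function name are hypothetical.

    import numpy as np

    TOLERANCE_MM = 1.5   # hypothetical acceptance threshold

    def check_part_positions(expected, actual, tol=TOLERANCE_MM):
        """Compare expected vs. triangulated 3D pattern locations (both
        Nx3, in the same order) and report any index whose deviation
        exceeds the tolerance, e.g. to notify the operator."""
        deviations = np.linalg.norm(actual - expected, axis=1)
        bad = np.flatnonzero(deviations > tol)
        for i in bad:
            print(f"pattern {i}: off by {deviations[i]:.2f} mm - notify operator")
        return bad.size == 0
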