Abstract

In this paper, the visual servoing problem is addressed by coupling nonlinear control theory with a convenient representation of the visual information used by the robot. The visual representation, which is based on a linear camera model, is extremely compact to comply with active vision requirements. The devised control law is proven to ensure global asymptotic stability in the Lyapunov sense, assuming exact model and state measurements. It is also shown that, in the presence of bounded uncertainties, the closed-loop behavior is characterized by a global attractor. The well known pose ambiguity arising from the use of linear camera models is solved at the control level by choosing a hybrid visual state vector including both image space (2D) information and 3D object parameters. A method is expounded for on-line visual state estimation that avoids camera calibration.
Simulation and real-time experiments validate the theoretical framework in terms of both system convergence and control robustness. © 1999 Elsevier Science B.V. All rights reserved.

Corresponding author. Tel.: +39-055-4796-540; fax: +39-055-4796-363. E-mail addresses: contice@sssup.it (F. Conticelli), ben@sssup.it (B. Allotta), columbus@dsi.unifi.it (C. Colombo).

1. Introduction

Vision provides a powerful set of sensory processes for a robot moving in unstructured environments, since it permits noncontact measurement of the external world and increases task accuracy [7]. Early vision-based robotic systems operated in an open-loop fashion using an approach called static look-then-move [11]. The accuracy can be largely improved by position loop closure based on visual feedback. The main objective of this approach, referred to as visual servoing [10], is to control the position of the robot with respect to a target object or a set of target features. Fig. 1 shows the general blocks of a visual servoing system. The system accepts two inputs: a description of the task to be performed (reference input), and object motion (disturbance input). Visual analysis provides a description of the visual environment as related to the current camera position; the description is processed by the controller block, which produces camera motion.

Visual servoing systems can be classified as "proper" visual servos or dynamic look-and-move systems [11]. In the first case, the joint torques are the output of the vision-based controller, while in the second case the output of the controller is a reference twist screw fed to an inner position/velocity loop.

Concerning the control aspects of visual servoing, two main paradigms can be outlined: position-based and image-based servoing. In the first one, the error is defined in the three-dimensional (3D) space based on image feature extraction and relative position estimation [15,20].
In such a way, robot tasks are easily specified in the robot workspace, but the estimated quantities used in the feedback are heavily affected by camera calibration errors. In image-based servoing instead, any visual task is described in the image plane as a desired evolution of object appearance towards a goal one, since the error is computed in terms of image features [1,8,9,17,18].

The problem of camera positioning with respect to a planar object is also solved in [21] by including 3D variables in the state space representation. In that case, the estimation procedure of the rotation matrix and the distance ratio is based on the homography between the feature points extracted from two images of the same target object. Notice that the stable control law proposed in [21] requires the estimation of the axis and angle of rotation at each iteration, and robustness analysis with respect to these estimated quantities is not formally considered.

In this paper, we propose a hybrid affine visual servoing approach, in which image space (2D) information and 3D object parameters are used together in the control design of a dynamic look-and-move system. A linear model of camera–object interaction (affine object shape transformations) is assumed, thus reducing the size of the visual representation according to active vision requirements [3]. We demonstrate that the partial reintroduction of 3D information in the visual state representation permits disambiguating image-based affine visual servoing directly at the control level. This is obtained without losing the advantages of the image-based control formulation, since we show that all control parameters can be estimated in the image plane without requiring camera calibration. Another important contribution arises from using nonlinear control theory [12,19] to synthesize a control law ensuring global asymptotic stability using the Lyapunov direct method.
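To give the flavor of the linear camera model assumed above, the following sketch shows an affine projection: image coordinates are an affine map of 3D object points, so depth enters only through the map's coefficients and parallel lines remain parallel. The matrix A and offset b are illustrative values, not parameters taken from the paper.

```python
import numpy as np

def affine_project(points_3d, A, b):
    """Project Nx3 object points to Nx2 image points via x = A p + b."""
    return points_3d @ A.T + b

# Illustrative 2x3 affine projection (e.g. weak perspective with unit scale)
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([0.0, 0.0])

# A planar square at constant depth z = 5
square = np.array([[0, 0, 5], [1, 0, 5], [1, 1, 5], [0, 1, 5]], float)
image_pts = affine_project(square, A, b)
print(image_pts.shape)  # (4, 2)
```

Under such a model the image of the square is itself an affine transform of the object shape, which is what makes the compact visual representation possible.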
We also show that the control law fulfills robustness requirements, since in the presence of bounded modeling and measurement uncertainties the tracking error is also bounded.

The paper is organized as follows. In Section 2, visual modeling issues are addressed. In Section 3.1, we derive a state space representation of robot camera–object interaction; in Section 3.2, the control system is synthesized, and stability and robustness analyses are carried out; in Section 3.3, it is shown how to estimate 3D state variables from image plane measurements.
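The Lyapunov-based stabilization idea invoked above can be illustrated with a minimal image-based servoing sketch (not the authors' hybrid law): with feature-error dynamics ė = Lv, the command v = −λ L⁺ e gives ė = −λ e when the interaction matrix L has full row rank, so V = ½‖e‖² is a Lyapunov function that decreases along trajectories. The gain λ and matrix L below are illustrative.

```python
import numpy as np

lam = 2.0                              # control gain (illustrative)
L = np.array([[1.0, 0.0,  0.2],
              [0.0, 1.0, -0.1]])       # interaction matrix (illustrative)
e = np.array([0.4, -0.3])              # initial feature error
dt = 0.01

V0 = 0.5 * e @ e                       # Lyapunov function at t = 0
for _ in range(500):                   # Euler-integrate the closed loop
    v = -lam * np.linalg.pinv(L) @ e   # camera twist command
    e = e + dt * (L @ v)               # error dynamics e_dot = L v
V1 = 0.5 * e @ e
print(V1 < V0)  # True: the Lyapunov function has decreased
```

Here the error decays exponentially; the paper's contribution is a law of this Lyapunov type for the hybrid 2D/3D state, with the additional guarantee that bounded uncertainties yield only a bounded residual error.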
