Fig. 8 2D visual servoing architecture simulation.

Predictive visual servoing implementation. In this approach the goal is again to control the relative pose of the robot with respect to the target. The model corresponds to Fig. 8, but with the controller replaced: in the first case a GPC is used and in the second an MPC. In both experiments the conditions and characteristics of the robot are the same. The goal is to control the end-effector from the image error between the current image and the desired image.

5.2 Visual servoing control results

Visual servoing using a PI controller. A PI controller was chosen to eliminate the position error (an illustrative sketch of such a control law follows the figure captions below). The point coordinates in operational coordinates are:

pi = [0.35 −0.15 0.40 π 0 π]T
pd = [0.45 −0.10 0.40 π 0 π]T

The points pi and pd correspond to the robot positions from which the images used to control the robot are obtained. Fig. 9 shows the translation of the end-effector and its rotation about the ox, oy and oz axes.

Fig. 9 Visual servoing using PI control.

Predictive GPC and MPC visual servoing control. In both experiments a 2D visual servoing architecture was used. From figures 9, 10 and 11 it can be seen that the GPC produces a more linear trajectory and is faster. The rise time is around 0.6 s for the PI controller, while for the GPC and MPC it is 0.1 s and 0.2 s, respectively. The settling time is 1 s for the PI, 0.3 s for the GPC and 0.9 s for the MPC. The results are less accurate for the translation along z and for the rotation of the end-effector.

Fig. 10 Results of a 2D visual servoing architecture using a GPC controller.

Fig. 11 Results of a 2D visual servoing architecture using an MPC controller.
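As a purely illustrative sketch of the kind of control law described above (written in Python rather than the authors' Matlab/Simulink implementation), the snippet below shows how a PI law on the image-feature error could generate a camera velocity command through the pseudo-inverse of an interaction matrix. The gains, depth estimate and feature values are assumed for the example.

    import numpy as np

    def interaction_matrix(x, y, Z):
        # Classical image Jacobian (interaction matrix) of one normalized
        # image point (x, y) with estimated depth Z.
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
        ])

    def pi_visual_servo(s, s_star, e_int, Z, kp=0.5, ki=0.05, dt=0.05):
        # One step of a PI image-based visual servoing law:
        # camera twist v = -L^+ (kp * e + ki * integral of e).
        e = s - s_star                        # stacked feature error (2N,)
        e_int = e_int + e * dt                # accumulated (integral) error
        L = np.vstack([interaction_matrix(x, y, Z)
                       for x, y in s.reshape(-1, 2)])
        v = -np.linalg.pinv(L) @ (kp * e + ki * e_int)   # 6-dof camera velocity
        return v, e_int

    # Example call with two image points (current and desired coordinates assumed).
    s = np.array([0.10, -0.05, 0.12, 0.06])
    s_star = np.array([0.00, 0.00, 0.02, 0.11])
    v, e_int = pi_visual_servo(s, s_star, e_int=np.zeros(4), Z=0.6)
    print(v)

In the simulated architecture of Fig. 8 such a Cartesian command is further mapped to joint space through the robot Jacobian pseudo-inverse.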
Table 1 presents the computed errors for each algorithm and reveals that the GPC gives the best performance.

TABLE 1  r.m.s. values for the control algorithms

SSR     Tx      Ty      Tz      θx      θy      θz      error
PI      2.50    2.40    1.20    2.20    0.30    0.47    1.51
GPC     2.14    0.81    0.22    1.84    0.22    0.02    0.87
MPC     1.36    0.67    3.01    0.76    6.3     6.21    3.04

6. EXPERIMENTAL PROCEDURE

The experimental implementation of the proposed vision control algorithms was carried out using the complete simulation system previously developed as a platform. Although the simulation work presented was developed for an "eye-in-hand" configuration, the experimental work was performed with an "eye-to-hand" configuration. This choice is related to the need to protect the camera, which was placed outside the robot, providing a safer system in the initial phase.

The homogeneous transformation matrix relating the camera frame to the robot frame, wTc, for the configuration used is given below (a numerical sketch of how this transform is applied appears at the end of this section):

wTc = [ −1    0    0    0.452 ]
      [  0    1    0   −0.450 ]
      [  0    0   −1    0.910 ]
      [  0    0    0    1     ]        (27)

The application consisted of controlling the robot using, in the first case, a PI controller and, in the second, a generalized predictive controller (GPC).

6.1 Implemented system configuration

The experimental work relied on the capabilities offered by xPC Target and Matlab Simulink. This technology makes it possible to create an operating environment that allows real-time robot control. Two computers were used (Fig. 12): a Host PC for visual information acquisition and processing, and a Target PC which receives the processed results from the Host PC and performs the robot control.

The image acquisition system processes the images at a rate between 12 and 20 images per second and sends the processed data to the robot control system (Target PC) through RS232. This external control-loop frequency depends on the computational weight of the algorithm, the numerical capacity of the computers and the specific weight of the Simulink program. Theoretically, the Vector camera used could reach a rate of 300 images per second.

In order to create the robot control environment it was necessary to replace the original PUMA controller with an open control architecture. This procedure allows the system to be adapted to different kinds of controllers. In the present case the internal controller was replaced by a velocity controller with gravity compensation.

Fig. 12 Experimental scheme.

The target viewed by the camera is shown in figure 13. A planar target with 8 LEDs, placed at the vertices of two squares of 40 mm side and spaced 150 mm from each other, is used.

The choice of the number of points was conditioned by the sensitivity estimated from the results of the theoretical simulation study and by the limitations of the image processing system. Although this number of points provides redundant information, it led to better results, as verified in the simulation study.

Fig. 13 Target viewed by the camera.

The experimental work follows the configurations shown in figure 14.
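As a minimal numerical sketch (not part of the original experimental code), the following Python snippet shows how a transform with the structure of Eq. (27) maps a point expressed in the camera frame into the robot base frame; the point coordinates and the exact numerical entries are illustrative assumptions.

    import numpy as np

    # Camera-to-base homogeneous transform with the structure of Eq. (27);
    # the numerical entries are illustrative placeholders.
    wTc = np.array([
        [-1.0, 0.0,  0.0,  0.452],
        [ 0.0, 1.0,  0.0, -0.450],
        [ 0.0, 0.0, -1.0,  0.910],
        [ 0.0, 0.0,  0.0,  1.000],
    ])

    def camera_to_base(p_cam, T=wTc):
        # Map a 3-D point from the camera frame to the robot base frame
        # using homogeneous coordinates.
        p_h = np.append(p_cam, 1.0)
        return (T @ p_h)[:3]

    # Example: one LED of the planar target seen 0.5 m in front of the camera.
    print(camera_to_base(np.array([0.02, -0.03, 0.50])))

In the eye-to-hand configuration this mapping is what allows image measurements taken by the fixed camera to be expressed in the robot base frame used by the controller.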