Robotics and Computer-Integrated Manufacturing 40 (2016) 1–13

Full length article

Collaborative manufacturing with physical human–robot interaction

Andrea Cherubini a, Robin Passama a, André Crosnier a, Antoine Lasnier b, Philippe Fraisse a

a Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM), Université de Montpellier – CNRS, 161 Rue Ada, 34392 Montpellier, France
b Department of Manufacturing Engineering, PSA Peugeot Citroën, Route de Gisy, 78140 Vélizy Villacoublay, France

Article history: received 18 June 2015; received in revised form 13 December 2015; accepted 17 December 2015; available online 1 January 2016.

Keywords: cobots; industrial robotics; human–robot interaction; reactive and sensor-based control.

Abstract: Although the concept of industrial cobots dates back to 1999, most present-day hybrid human–machine assembly systems are merely weight compensators. Here, we present results on the development of a collaborative human–robot manufacturing cell for homokinetic joint assembly. The robot alternates active and passive behaviours during assembly, to lighten the burden on the operator in the first case, and to comply with his/her needs in the latter. Our approach can successfully manage direct physical contact between robot and human, and between robot and environment. Furthermore, it can be applied to standard position (and not torque) controlled robots, common in the industry. The approach is validated in a series of assembly experiments. The human workload is reduced, diminishing the risk of strain injuries. Besides, a complete risk analysis indicates that the proposed setup is compatible with the safety standards, and could be certified.

…parts feeding by double manipulators on a mobile base, production process information support for the operator, and safety management for cooperation between operator and robot.
The main target of [11] is safety of the shared work cell, in the absence of physical fences between human and robot. Since the safety options provided by basic infrared sensors are limited, the authors design a network architecture of these sensors, for tracking user positions while avoiding collisions. The authors of [12] propose a method for optimizing task distribution among workers and robots. The method is validated, using an ABB Dual Arm Concept Robot, in a PLC Input/Output module assembly scenario.

2.2. Research on physical human–robot collaboration

Recent robotics research focuses on the study and characterization of physical human–robot interaction (pHRI [13,14]). The goal is to enable close collaboration between human and robot in all service and industrial tasks that require the adaptability of humans to be merged with the high performance of robots in terms of precision, speed and payload [15]. In this context, it becomes indispensable to define safety and dependability metrics [16–19]. These can contribute to the definition of standards, such as the recent ISO 10218-1:2011 "Safety requirements for industrial robots" (www.iso.org/iso/catalogue_detail.htm?csnumber=51330).

In this line of research, many solutions for realizing safe collaborative tasks have been explored in recent years. Although these solutions have not yet been transferred to the industry, we hereby list some of the most relevant theoretical works. In [20], a deformation-tracking impedance control strategy is designed to enable robot interaction with environments of unknown geometrical and mechanical properties. For successful interaction with unknown environments and operators, the robot should behave in a human-like manner. This is the target of the research in [21,22]: a human-like learning controller is designed, to minimize motion error and effort during interaction tasks. Simulations show that this controller is a good model of human motor adaptation, even in the absence of direct force sensing.
A robust controller for a collaborative robot in the automotive industry is extended in [23], to manage not only the interaction between an industrial robot and a stiff environment, but also human–robot–environment and human–robot–human–environment interactions.

Other researchers have focused more on industrial applications. For example, an industrial robot controller incorporating compliance of the joints with the environment is presented in [24]. The desired pose of the tool center point is computed from the force error. Parallel control considers a reference trajectory while allowing feedforward in force controlled directions. Although the method is designed for industrial assembly tasks, it does not take into account the presence of a human in the loop. In contrast, Erden and colleagues [25–27] have thoroughly studied an industrial task that directly involves a human operator, i.e., manual welding. In [25], a physically interactive controller is developed for a manipulator robot arm: the human applies forces on the robot, to make it behave as he/she likes. Then, a manual welding assistant robot is presented in [26]: as the human controls the welding direction and speed, the robot suppresses involuntary vibrations (e.g., caused by novice welders). The results show a considerable improvement in the welders' performance when they are assisted. Finally, [27] presents a study of end-point impedance measurement at the human hand, with professional and novice welders. The results support the hypothesis that impedance measurements could be used as a skill level indicator, to differentiate the welding performance levels. Although the welding assistance application targeted by these works also falls in the shared workplace paradigm evoked in [2], it differs from the one treated here, since the robot motion is driven by the human worker. Instead, in our work, the robot is active and autonomous during various phases of the assembly cycle.
For the same reason, robot programming by demonstration/teaching is also out of scope here.

Other works similar to ours, but targeting manually guided robot operation, are presented in [28,29]. In [28], an operator teaches tasks to a robotic manipulator by manually guiding its end effector. For this, the authors design a virtual tool, whose dynamics the operator should feel when interacting with the robot. An admittance controller driven by the measurements of a force/torque sensor is designed to ensure the desired virtual dynamic behaviour. The second paper addresses the problem of controlling a robot arm executing a cooperative task with a human, who guides the robot through direct physical interaction. This problem is tackled by allowing the end effector to comply according to an impedance control law [30] defined in the Cartesian space. Redundancy ensures the stability of the coupled human–robot system, through inertial decoupling at the end effector. However, in contrast with our work, the robot is torque (not position) controlled, and an impedance (instead of admittance) controller is used.

As outlined in most of the cited works, an efficient cobot must interact easily with the worker, even if s/he is non-trained. To this end, many pHRI works rely on the physical contact (touch) between robot and operator [31]. More recently, to guarantee interaction even in the absence of direct contact, researchers have proposed the use of pointing gestures [32], as well as the integration of vision with force [33–35]. Also in our work, interaction includes both vision and force.
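An admittance controller of the kind mentioned above maps a measured wrench into a motion deformation for a position-controlled robot. The following is an illustrative sketch, not the code of [28] or of the present paper; the virtual mass/damping values and the control period are assumptions chosen only to show the mechanism.

```python
import numpy as np

# Minimal discrete-time admittance law: a measured wrench h deforms the
# commanded velocity of a position-controlled robot.  The virtual inertia M
# and damping B are illustrative values, not taken from the paper.
M = np.diag([2.0] * 3 + [0.1] * 3)   # virtual mass (translation) / inertia (rotation)
B = np.diag([20.0] * 3 + [1.0] * 3)  # virtual damping
dt = 0.01                            # control period [s]

def admittance_step(v, h):
    """Integrate M v_dot + B v = h over one control period; return new v."""
    v_dot = np.linalg.solve(M, h - B @ v)
    return v + dt * v_dot

# A constant external force along x progressively deforms the motion until
# the virtual damping balances it (steady state v = B^-1 h).
v = np.zeros(6)
h = np.array([10.0, 0.0, 0.0, 0.0, 0.0, 0.0])
for _ in range(2000):
    v = admittance_step(v, h)
# v[0] converges to 10/20 = 0.5 m/s
```

The returned velocity would then be integrated into a pose reference and sent to the position controller, which is what distinguishes admittance control from the torque-based impedance scheme of [29,30].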
In fact, vision stops/triggers robot operation in case of danger, while the operator contact forces are used to start and end assembly, and to deform the robot trajectory for collision avoidance.

3. Application: collaborative assembly of a homokinetic joint

The application that is targeted in this work is the collaborative human–robot assembly of a car homokinetic joint. Homokinetic joints allow a drive shaft to transmit power to a second shaft through a variable angle, at constant rotational speed, while limiting friction and play. They are used both in front wheel drive cars, and in rear wheel drive cars with independent rear suspension.

Specifically, the homokinetic joint that we use in this work is the Rzeppa joint, which was invented in 1926 by Alfred Hans Rzeppa (1885–1965). The joint works in a manner similar to a bevel gear, with steel balls in place of teeth. The working principle and exploded diagram of this joint are shown in Fig. 1.

The joint consists of a spherical star-shaped inner race with six grooves in it, and a similar enveloping outer shell (housing). Each groove guides one steel spherical ball. The half shaft fits in the centre of the inner race, which itself nests inside a circular cage. The cage is spherical but with open ends, and with six openings around the perimeter, one for each ball. Finally, the housing is attached to the second shaft.

In this work, we propose a collaborative strategy for the assembly of the Rzeppa joint. In fact, after thorough discussions with the PSA ergonomists, and analysis of the most difficult cells in the PSA process, it appears that this use-case generates more Musculoskeletal Disorders (MSDs) than other assembly cells. This is what motivated the choice of this cell as a case study. In particular, we focus on the insertion of the six balls in the joint. This task is currently done manually, using an insertion tool similar to the one that is shown, along with all the joint components, in the left of Fig. 2.
The ball insertion operation that is currently done in the PSA production process is outlined on the right of the same figure. The ball order is predefined, since, to limit the mechanical constraints, the first three balls must be inserted so that there is an empty groove between any two of them.

Let us hereby describe the steps of the manual assembly operation. Prior to ball insertion (i.e., in the previous work cell), the inner race, cage, and housing have been assembled, and a first ball has been inserted. This ball links the cage and inner race, leaving some backlash, which will diminish as the other balls are placed. Hence, for simplicity, in the rest of this paragraph, the term cage will be used to refer to the linked grouping of cage and inner race. At first, the operator receives the partially assembled Rzeppa joint, and places it in an ad-hoc support. Then, by tapping on the cage with the insertion tool (see Fig. 2(1)), he slightly inclines it, until the second ball can be inserted (Fig. 2(2)). Since the backlash diminishes, for inserting the third and following balls, tapping is not sufficient anymore, and the operator must use the insertion tool to increase the lever arm needed to incline the cage (Fig. 2(3)). The insertion tool is designed so that its lower part fits precisely in the cage. The cage inclination (or cage opening), followed by ball insertion, is repeated for the remaining balls (Figs. 2(4)–2(8)). Once all six balls have been placed, the cage is closed (i.e., reset to the flat position, Fig. 2(9)), so that the Rzeppa joint can be sent to the following cell in the production chain.

An experienced operator executes this entire operation very quickly, to guarantee the required production rate of 70 joints per hour. However, the cage opening and closing steps can provoke MSDs. These injuries are due to the forces required for cage positioning, opening, and closing, and to the repetition of such actions.
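The ball-order constraint described above (an empty groove must separate any two of the first three inserted balls on the six-groove ring) can be expressed compactly. The concrete insertion order below is hypothetical, chosen only to satisfy the constraint; the paper does not specify the exact groove indices.

```python
# Six grooves indexed 0-5 around the joint.  The constraint: the first
# three inserted balls occupy pairwise non-adjacent grooves, i.e., an
# empty groove lies between any two of them on the ring.
def first_three_alternating(order):
    """True if the first three grooves in `order` are pairwise
    non-adjacent on the 6-slot ring."""
    first = order[:3]
    return all((a - b) % 6 not in (1, 5)   # offsets 1 and 5 are ring-adjacent
               for a in first for b in first if a != b)

assert first_three_alternating([0, 2, 4, 1, 3, 5])      # alternating: valid
assert not first_three_alternating([0, 1, 2, 3, 4, 5])  # adjacent: invalid
```

On a six-slot ring the only way to place three pairwise non-adjacent balls is in every other groove, which is why the valid order above starts with grooves 0, 2, 4.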
Ergonomists estimate that, over an eight hour shift, the operator lifts a total weight of approximately 5 tons, and realizes 18 000 upper limb movements [36]. Therefore, this part of the operation should be automated, to relieve the operator. On the other hand, the action of ball insertion by itself requires very high precision and adaptability skills, which are not attainable by present-day industrial robots. Thus, the characteristics of the Rzeppa joint ball insertion work cell make it an ideal scenario for cobotics research.

In practice, one should automate the cell according to the following specifications:

(1) The human operator must position the five balls in the corresponding cage openings, with little effort and motion, so that the fatigue and chances of injury are minimized.
(2) The physical interaction between human and environment (specifically, between the human hand and the steel parts) must be controlled, to guarantee safety.
(3) The physical interaction between insertion tool and environment (specifically, the Rzeppa joint) must be controlled, to avoid blockage.
(4) The cobot velocity must guarantee safety, i.e., it must comply with the ISO safety standard cited in Section 2.2 (tool center point velocity limited to 0.25 m/s).

To fulfil the above requirements, we have redesigned the manufacturing cell, as shown in Fig. 3 (top). The lower part of the pre-assembled Rzeppa joint (composed of the housing, cage, inner race, and first ball) is held by a gripper placed on the end effector of a manipulator robot. The insertion tool, instead, is fixed to a support that is rigidly constrained to the robot base. In contrast with the manual insertion operation shown in Fig. 2, most of the required movements will be carried out by the robot, with the human intervening only to position the balls. The scenario is perfectly reversed, to make it human-centered: now the Rzeppa joint is displaced around the operator, instead of the opposite. To this end, we control the robot so that it opens/closes the cage – for ball insertion – by properly inclining it within the housing, and then pivots it around the vertical axis so that the operator is always facing the next desired opening. At the start (end) of the operation, the robot end effector raises (descends) to connect (disconnect) insertion tool and inner race. To realize all these motions (tool connection/disconnection, cage opening/closing and pivoting), the robot follows nominal, pre-taught trajectories. These nominal trajectories are deformed to comply with the external force/torques from the environment and operator, using an admittance controller [30]. To further enhance safety of operation, the robot is stopped in the presence of strong force/torques, and a fixed camera, mounted on the insertion tool support, monitors the ball insertion. The camera is used both to detect ball insertion, and to stop the robot if it may endanger the operator's hands. Since there is no risk of clamping fingers in the back of the joint (solid steel of the housing, without openings), the camera only monitors the front of the cage. Images from the camera are shown at the bottom of Fig. 3 (raw image on the left, and processed image on the right).

In the following, we provide more details on the control framework developed for collaborative assembly of the Rzeppa joint.

Fig. 1. Working principle (left) [https://en.wikipedia.org/wiki/Constant-velocity_joint] and exploded diagram (right) [www.aa1car.com] of the Rzeppa homokinetic joint.

Fig. 2. Manual insertion. Left: concerned components (clockwise from top left: inner race, balls, cage, housing and insertion tool). Right: nine steps of the PSA manual insertion operation; ball 1 does not appear, since, along with the inner race, cage, and housing, it has been assembled before this operation.

Fig. 3. Top: experimental setup for collaborative Rzeppa joint assembly with: insertion tool support (S), end effector (E), and robot base (B) reference frames. Bottom left: raw image from the camera. Bottom right: processed image with: locus of the tool axis in the image (A), distance between hand and tool (dh), and minimum distance tolerated for safety (dm). (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

4. Definitions and characteristics of the control framework

4.1. Definitions and assumptions

The reference frames used to describe the assembly task are shown in Fig. 3. In this, and in all other figures of the paper, the RGB convention is used: axes X, Y and Z are respectively red, green and blue. The frames used in this work are: the robot base (F_B), insertion tool support (F_S), and end effector (F_E) frames, with origins respectively B, S and E. Reference frames F_B and F_S are fixed in the world, whereas F_E moves with the robot. The pose of F_S is determined via an estimation procedure explained in Section 5. The pose of a generic frame A in frame F_B is defined as:

Bp_A = [Bt_A^T, Bθu_A^T]^T ∈ R^6,

with Bt_A the translation vector from B to A, and Bθu_A the angle/axis vector [37].

In this work, we consider a manipulator with j ≥ 6 degrees of freedom (dof), and note q ∈ R^j the robot joint values. We also assume that it is possible to estimate (either through direct, or through joint torque measurement) the wrench (force Ef and torque Em) applied to the end effector, and expressed in the end effector frame: Eh = [Ef^T, Em^T]^T. This information will be fed to an admittance controller [30], to adapt the robot motion, and avoid blockage between the Rzeppa inner race and the insertion tool.
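The pose parameterization used above (a translation vector stacked with the angle/axis vector θu [37]) can be obtained from a rotation matrix with the standard Rodrigues relations. The sketch below is illustrative, not the authors' implementation; the example frame is invented.

```python
import numpy as np

def theta_u(R):
    """Angle/axis vector (theta * u) of a rotation matrix R (Rodrigues)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)                  # identity rotation: zero vector
    # unit axis from the skew-symmetric part of R
    u = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * u

def pose(t, R):
    """6-vector pose [t, theta*u] of a frame expressed in the base frame."""
    return np.concatenate([t, theta_u(R)])

# Example: a frame translated by [0.3, 0, 0.5] m and rotated 90 deg about Z
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
p_A = pose(np.array([0.3, 0.0, 0.5]), Rz)
# p_A ≈ [0.3, 0, 0.5, 0, 0, pi/2]
```

Note that the arccos form degrades near theta = pi; production code would switch to a quaternion-based extraction there, but the minimal form suffices to illustrate the 6-vector pose.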
4.2. Inverse kinematics controller

In our framework, an inverse kinematics controller is used to map end effector poses to the joint space. This is necessary for initially generating the nominal trajectory, and then for deforming it, according to the admittance controller. We hereby recall the classic inverse kinematics formulation according to [38].

The evolution of the end effector pose can be expressed as:

Bṗ_E = J q̇, (1)

with J = ∂Bp_E/∂q the task Jacobian, of size 6 × j, that is derived from the robot measured configuration q̂, and q̇ the robot joint velocities.

The end effector pose can be regulated to a desired value Bp*_E and, if j > 6, redundancy [39] exists. Then, a secondary task can be realized, by exploiting the j − 6 extra degrees of freedom (since only 6 are required by the primary task, Bp*_E). The joint velocity q̇ for driving Bp_E to Bp*_E is generated via:

q̇ = J† Λ (Bp*_E − Bp_E) + (I − J† J) ∇g. (2)

In the above equation:

● J† is the j × 6 right pseudoinverse of J, such that J J† J = J. We assume that J is full row rank during operation, so that this pseudoinverse can be calculated. This was the case throughout the experiments, and is a common assumption in inverse kinematics control [40].
● Λ is a positive definite 6-dimensional square diagonal matrix that determines the convergence rate of Bp_E to Bp*_E.
● The term ∇g = ∂g/∂q is introduced to minimize the scalar cost function g(q), to perform the secondary task.

It is well-known that system (1), controlled by (2), is globally asymptotically stable with respect to the pose regulation task. Indeed, plugging (2) into (1) yields:

Bṗ_E = Λ (Bp*_E − Bp_E), (3)

and, since Λ is a positive definite diagonal matrix, Bp_E = Bp*_E is a stable equilibrium for the closed-loop system. Also, note that minimization of g has no effect on the task dynamics.

4.3. Control framework

Our framework for collaborative assembly is shown in Fig. 4. The framework is made up of two steps (shown respectively at the top and bottom of the figure): the offline nominal trajectory generation step, and the online controller for collaborative assembly. We hereby outline the framework, and will then detail its components in Sections 5–7.

A nominal trajectory 𝒯, corresponding to the entire assembly operation, is input to the controller. This is defined by a series of N

Fig. 4. Control framework for collaborative assemb
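The redundancy-resolution law (2) of Section 4.2 can be sketched directly with a Moore–Penrose pseudoinverse. The snippet below is an illustrative reconstruction, not the paper's controller: the random full-row-rank Jacobian stands in for the robot's true Jacobian, and Λ is taken as the identity.

```python
import numpy as np

def ik_step(J, p_des, p_cur, grad_g, gain=1.0):
    """One step of q_dot = J+ Lambda (p* - p) + (I - J+ J) grad_g, eq. (2)."""
    j = J.shape[1]
    J_pinv = np.linalg.pinv(J)            # j x 6 right pseudoinverse J+
    Lam = gain * np.eye(6)                # positive definite diagonal gain
    primary = J_pinv @ (Lam @ (p_des - p_cur))
    null_proj = np.eye(j) - J_pinv @ J    # projector onto the null space of J
    return primary + null_proj @ grad_g   # secondary task acts in the null space

# Toy example: a random full-row-rank 6 x 7 Jacobian for a 7-dof arm
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))
q_dot = ik_step(J, p_des=np.ones(6), p_cur=np.zeros(6),
                grad_g=rng.standard_normal(7))
# Since J J+ = I for a full-row-rank J, J @ q_dot equals Lambda (p* - p):
# the null-space term does not perturb the primary task, as stated for (3).
```

This reproduces numerically the stability argument of Section 4.2: substituting the computed q̇ into (1) leaves only the Λ-weighted pose error, so the secondary cost g has no effect on the task dynamics.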