
2019, 1(3): 316–329

Published Date: 2019-6-20 DOI: 10.3724/SP.J.2096-5796.2019.0019

Abstract

Background
This paper introduces a versatile edutainment platform based on a swarm robotics system that supports multiple interaction methods. We aim to create a re-usable, open-ended tangible tool for a variety of educational and entertainment scenarios by exploiting the unique advantages of swarm robots, such as flexible mobility, mutual perception, and free control over the number of robots.
Methods
Compared with the tangible user interface (TUI), the swarm user interface (SUI) offers more flexible locomotion and more controllable widgets. However, research on SUIs is still largely limited to system construction; higher-level interaction modes and concrete applications have not been sufficiently studied.
Results
This study illustrates possible interaction modes for swarm robotics and feasible application scenarios based on these fundamental interaction modes. We also discuss the implementation of swarm robotics (including software and hardware), then design several simple experiments to verify the location accuracy of the swarm robotics system.

Content

1 Introduction
The swarm robotics platform opens up the possibility of a new human-machine interface. Such a platform consists of multiple automated robots capable of handling content presentation and interaction processes. Le Goc et al. define the swarm user interface as "a human-machine interface consisting of independent self-propelled units, where each unit can move together and react to user input"[1]. Compared to platforms that consist of a single robot, or that only perform a single task, swarm robotics provides a platform that is more powerful, cheaper, simpler, and more reliable when performing specific tasks[2]. The aim of this study is to design and develop a re-usable swarm robotics platform, maximize its features and advantages, and apply it to a variety of educational and entertainment scenarios.
The main work of this study includes the following: (1) developing and designing a swarm robotics platform, and explaining the system implementation method of swarm robots; (2) designing an intelligent interaction mode based on this platform, and exploring the possible application scenarios of these interaction modes. Based on these activities, the aims of this study are as follows: (1) to supplement the interaction modes available for SUIs; (2) to explore the application of SUIs in narrative and physical game scenarios.
The rest of this paper is arranged as follows: Section 2 reviews existing intelligent interaction technology and swarm robotics research; Section 3 introduces the interaction mode design and corresponding application scenarios of the platform; Section 4 describes the specific implementation scheme and technology of the robots; Section 5 presents the experimental design used to verify positioning accuracy; and Section 6 gives the summary and future research directions.
2 Related works
2.1 Intelligent interaction
Intelligent interaction refers to the ability of a machine to understand human intention automatically, improving interaction efficiency and user experience through more natural human-human/human-machine interaction methods. From the perspective of interactive modality, intelligent interaction can be divided into five typical types: voice, sign/body language, visual image, tactile, and odor. From the perspective of interactive media, intelligent interaction involves six main device types: mobile phone, personal computer (PC), head-mounted display (HMD), touch screen, tangible user interface (TUI), and robot. Although intelligent interaction design for robots is relatively difficult, it is well worth exploring, because dozens of robotic platforms exist, and the robot itself is a complex device with independent intelligence.
There are many types of intelligent interaction for robots. For example, intelligent voice interaction based on natural language processing (NLP) can be deployed in social robotics to enhance children's understanding of scientific concepts through questions and answers (Q&A) and encouraging voice feedback[3]. Furthermore, gesture/posture interactions based on gesture/posture recognition, in combination with voice and visual interactions, can dramatically promote interaction efficiency. Currently, most research on human-robot interaction (HRI) focuses on how to improve the efficiency and accuracy of existing HRI methods, or how to better fuse multiple modalities. However, the design of robotic behavior has rarely been explored. Moreover, scant attention has been paid to interacting with multiple robots at the same time.
2.2 Swarm robot
Many laboratories are engaged in constructing unique small swarm robotics platforms such as tabletop swarm robots Zooids[1] (from Stanford University, published in UIST, 2016) and haptic Cellulo[4] (from EPFL, published in HRI 2017). The biggest difference between the two is their localization principles; Zooids relies on a high frame rate projector for position tracking, while Cellulo uses papers with special codes that can only be read by an infrared camera to achieve localization.
On the other hand, their application scenarios have different emphases. Due to localization limitation, the main purpose of Zooids is to create a novel desktop user interface (Swarm UI). Every robot can be an independent handle or a group of robots assembled to accomplish a simple mission such as message notification, information display, and object manipulation. The research on Cellulo focuses on constructing a new kind of educational tools for common use that supports function customization. Due to its delicate localization method, Cellulo is easy to transfer and adapts easily to various environmental conditions. For example, it can be used for the dynamic presentation of abstract concepts, such as wind field simulation[5]; as teaching aids such as writing instruction[6]; and in sports game, as an aid in the rehabilitation training for upper limbs. Beyond virtual concepts or industrial applications, swarm robotics is increasingly extending into daily life.
In the field of HCI, research on small swarm robotics mainly focuses on three aspects: the swarm user interface (SUI), user recognition of robot behaviors, and educational tools. The SUI is a further evolution of the TUI. Compared with TUIs, widgets in an SUI are not only tools for changing digital contents, they are also presenters. For instance, in Zooids[1] and Reactile[8], multiple entities proactively display information in a self-organized manner and in accordance with established rules. The research on behavior recognition of swarm robotics explores users' emotional responses when robots move in different modes, at different speeds, and demonstrate diverse motion relations; this provides the theoretical basis for the interaction design of the platform. For example, if swarm robots are expected to convey an urgent message, their motion pattern should be designed to communicate tension and urgency to users, improving the transmission efficiency of the message. Educational tools aim to maximize the excellent characteristics of swarm robots, such as autonomous mobility, self-organization, tangible interaction, and immediate feedback, to change traditional teaching methods and better assist children in learning and understanding complex scientific concepts. Cellulo is a typical example of endeavors to utilize swarm robots as novel teaching aids. However, Cellulo only proposes numerous educational applications; the interaction methods of the swarm robotics system and many other application scenarios have not been elaborated upon.
3 Interaction design
3.1 Interaction style
The swarm robotics system can move automatically, support tactile feedback, and be directly manipulated; consequently, it has larger interaction space and numerous interaction modes. We divide the interaction modes of the swarm system into four categories: interaction feedback, activity-based interactions, group interactions, and interaction commands. We also illustrate how different interaction modes are applied to related scenarios. However, this does not imply that the methods of interacting with swarm robots are limited to these categories. Our aim is to clarify the basic interaction methods applicable to most scenarios. It is convenient for developers or designers to customize appropriate interaction modes for their own needs.
3.1.1 Interaction feedback
(1) Direct regroup
One of the most important characteristics of the swarm system is self-organization; when the number of individuals in a group changes, the others can quickly adapt and form new structures/communities. Therefore, when the user puts in or takes away one or more individuals in the swarm robotics, the remaining robots will quickly build a new system to ensure the stability and continuity of the platform. For example, if one robot is taken out when multiple robots are performing the same task, for instance, information presentation or pattern display, the remaining robots will re-plan the motion path, and continue the task.
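As a sketch of this re-planning, the remaining robots can greedily reassign the surviving formation targets among themselves. The robot names, coordinates, and greedy assignment strategy below are illustrative; the paper does not specify the platform's actual re-planning algorithm.

```python
import math

def assign_targets(robots, targets):
    """Greedily reassign formation targets to the remaining robots.

    robots:  dict of robot_id -> (x, y) current position
    targets: list of (x, y) formation points (one per remaining robot)
    Returns dict robot_id -> target point.
    """
    remaining = dict(robots)
    assignment = {}
    for t in targets:
        # pick the closest still-unassigned robot for each target
        rid = min(remaining, key=lambda r: math.dist(remaining[r], t))
        assignment[rid] = t
        del remaining[rid]
    return assignment

# A robot is removed from the swarm: re-plan with the survivors.
robots = {"r1": (0, 0), "r2": (10, 0), "r3": (0, 10)}
robots.pop("r2")                      # user takes one robot away
new_plan = assign_targets(robots, [(5, 0), (5, 10)])
```

A production system would likely use an optimal assignment (e.g., the Hungarian algorithm) rather than this greedy pass, but the sketch captures the idea of continuing the task with fewer robots.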
(2) Direct manipulation
Each robot in the swarm is an independent handle that can be interacted with directly. The user moves any robot to a specified location, and the individual information in the network will be updated synchronously. At the same time, modules such as gesture and voice recognition can also be added to enable real-time interaction with the moving robot.
(3) Haptic feedback
The robots provide corresponding force/motion feedback as the user directly interacts with them by controlling their direction or speed. For example, when the robot moves to the border of the map, if the user is still trying to move the small robot away from the border, the robot will generate reverse speed to hinder the user's behavior. At the same time, varying the speed will provide users with different tactile perceptions (Figure 1).
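A minimal sketch of this border feedback, assuming a rectangular paper map and a simple proportional opposing velocity. The gain k, the units, and the bounds format are illustrative values, not the platform's tuned parameters.

```python
def border_feedback(pos, push, bounds, k=2.0):
    """Oppose the user's push when it would move the robot off the map.

    pos:    (x, y) robot position
    push:   (vx, vy) velocity implied by the user's drag
    bounds: (xmin, ymin, xmax, ymax) of the paper map
    k:      gain of the opposing (haptic) velocity -- illustrative
    Returns the velocity the robot should command to resist.
    """
    xmin, ymin, xmax, ymax = bounds
    vx, vy = 0.0, 0.0
    # generate reverse speed only along axes where the push exits the map
    if (pos[0] <= xmin and push[0] < 0) or (pos[0] >= xmax and push[0] > 0):
        vx = -k * push[0]
    if (pos[1] <= ymin and push[1] < 0) or (pos[1] >= ymax and push[1] > 0):
        vy = -k * push[1]
    return (vx, vy)
```

Varying k (or making it a function of how far past the border the push reaches) would produce the different tactile perceptions the text mentions.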
3.1.2 Activity-based interactions
(1) Event trigger
Event trigger refers to the occurrence of a preset event when the robot moves into a specific scene or when multiple robots satisfy a certain relationship. For example, in a chess game, each robot plays a different role; suppose robot A is the KING. When A is checkmated on the board, A actively switches to a "defeated" state. When swarm robotics is used in multi-threaded interactive narratives, different user interactions result in different robot motion modes, steering the storyline in diverse directions (Figure 2).
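One simple way to realize event triggers is a rule table pairing a condition over the swarm's state with a preset event. The predicates, state keys, and event names below are hypothetical examples for the chess and narrative scenarios, not part of the platform's actual implementation.

```python
# Each rule pairs a predicate over the swarm's state with a preset event.
def make_rules():
    return [
        # the KING robot reaches a checkmated configuration -> "defeat" event
        (lambda s: s.get("king_checkmated", False), "king_defeat_animation"),
        # two story characters come within 30 mm -> branch the storyline
        (lambda s: s.get("char_distance", 1e9) < 30, "branch_storyline"),
    ]

def fire_events(state, rules):
    """Return every preset event whose trigger condition holds."""
    return [event for cond, event in rules if cond(state)]

events = fire_events({"king_checkmated": True, "char_distance": 120}, make_rules())
```

The central controller would evaluate the rules each time fresh position data arrives and transmit any fired events back to the relevant robots.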
(2) Close interactions
Another feature of swarms is mutual awareness and communication. Thus, each robot knows the location and activity of other individuals in the group to enable it to interact with other robots within a certain range. For example, when the swarm robots simulate a kind of chasing relationship between animals, one robot plays the role of the prey, while another is the hunter. Once the prey passes near the hunter, the hunter will follow or chase the prey, and force the prey to stop.
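The chasing behavior can be sketched as one control step of the hunter robot: move toward the prey when it is within a sensing radius. The radius, speed, and units are illustrative assumptions, not measured platform values.

```python
import math

def hunter_step(hunter, prey, sense_radius=80.0, speed=5.0):
    """One control step of the hunter robot.

    If the prey passes within sense_radius (mutual awareness), the hunter
    moves toward it by `speed` per step; otherwise it stays put.
    """
    d = math.dist(hunter, prey)
    if d > sense_radius or d == 0:
        return hunter                      # prey out of range: no chase
    step = min(speed, d)                   # do not overshoot the prey
    ux, uy = (prey[0] - hunter[0]) / d, (prey[1] - hunter[1]) / d
    return (hunter[0] + step * ux, hunter[1] + step * uy)
```

Forcing the prey to stop would be the symmetric rule on the prey's side: halt when the hunter is within some capture distance.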
3.1.3 Group interactions
By controlling one robot, the user can manipulate the motion of a group of robots without having to operate all the robots individually. Consequently, the number of tangible widgets that can interact at the same time will not be limited to the number of users. For example, in the application example of Reactile[8], an individual widget is used to adjust the value range of the independent variable, and other widgets are discrete points on the function curve. As the value range of the independent variable changes (the widget changes its position), the shape of the function curve changes accordingly (other individuals move with it to maintain consistency).
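The function-curve example can be sketched as follows: the control widget's position sets the independent-variable range, and the follower robots re-sample the curve to stay consistent. The function, range, and robot count are hypothetical; Reactile's actual constraint system is not reproduced here.

```python
def curve_targets(x_max, n_points, f):
    """Place n_points follower robots on the curve y = f(x) over [0, x_max].

    x_max is read from the position of the single control widget; the
    followers move to keep the displayed curve consistent with it.
    """
    if n_points == 1:
        return [(0.0, f(0.0))]
    xs = [i * x_max / (n_points - 1) for i in range(n_points)]
    return [(x, f(x)) for x in xs]

# user drags the range widget from x_max = 4 to x_max = 8: followers re-sample
targets = curve_targets(8.0, 5, lambda x: x * x)
```

Each returned point would then be handed to the corresponding robot's motion controller as its new destination.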
3.1.4 Interaction commands
(1) Passive orchestration
The swarm robotics system is a tangible interactive platform that can be programmed. Widgets will vary their motion according to the user commands. For example, if the swarm robots are employed for basketball tactical simulation, the user can set different moving positions for each robot on the graphical user interface (GUI), and each robot will move towards its own destination according to the specified path. In the entire process, all the trajectories of the robots are pre-set by the user (Figure 3).
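Passive orchestration amounts to playing back user-authored trajectories. A minimal sketch, assuming one waypoint per tick and hypothetical robot IDs (the GUI format and tick model are illustrative):

```python
def orchestrate(scripts, steps):
    """Play back user-authored trajectories, one waypoint per tick.

    scripts: dict robot_id -> list of (x, y) waypoints set on the GUI
    steps:   number of ticks to simulate
    Yields (tick, robot_id, waypoint) in playback order.
    """
    for t in range(steps):
        for rid, path in scripts.items():
            if t < len(path):
                yield t, rid, path[t]      # robot moves to its preset point

# basketball-tactic style playback: each robot follows its own preset path
log = list(orchestrate({"r1": [(0, 0), (1, 1)], "r2": [(5, 5)]}, 2))
```

In the real system each yielded waypoint would be transmitted to the robot, which reaches it via its closed-loop motion controller rather than teleporting per tick.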
3.2 Usage scenario
3.2.1 Storytelling
Storytelling is one of the most important abilities for children. There is a lot of research work that aims to use new technologies to create more interesting and efficient tools to help children develop storytelling skills. Among the new narrative tools, the TUI is distinguished by the excellent immersive experience and direct feedback it offers. However, the SUI is poised to become a novel interactive storytelling tool because it provides greater maneuvering space, more flexible mobility, and more comprehensive interactive feedback than the TUI (Figure 4).
(1) Swarms in storytelling
Compared to the TUI, using swarm robots for children's narrative creation offers the following advantages: 1) Each robot can become a character in the story or an element that influences its development, and the number of entities controlled at the same time can be changed flexibly. 2) The identity and behavior of each individual robot in the swarm system can be redefined repeatedly; thus, the user can create the story as desired, without being restricted by the widget itself. 3) The swarm platform's mobility and richer interaction modes make the evolution of a story more interesting and vivid, further stimulating the user's creativity.
(2) Passive performance
In the passive narrative tool based on swarm robots, the user creates a story through the GUI. A central processor, such as a computer or tablet, transforms the upper story logic into robot behavior logic. After receiving the instructions, the swarm robots begin to perform the story. This method is referred to as passive performance because the robots act entirely according to the user's instructions.
(3) Evolutive storytelling
In evolutive storytelling, the robot itself possesses certain autonomous action logic, and there is a relationship-based interaction mode between robots. For example, when A and B (two robot characters) are in a "hostile" relationship (upper-level logic definition), during the story performance A and B will always maintain a certain distance or thwart each other's movements (lower-level motion expression). Thus, an unexpected storyline may arise during creation, and the user can follow up on the new story. Because the storyline is unpredictable and constantly evolving, users can create unlimited stories.
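The "hostile" relation can be grounded in a lower-level motion rule like the following sketch, where one character is pushed directly away from the other whenever they get too close. The distance threshold and repulsion step are illustrative values.

```python
import math

def hostile_step(a, b, min_dist=60.0, repel=4.0):
    """Keep two 'hostile' story characters at least min_dist apart.

    The upper story logic marks A and B as hostile; this lower motion rule
    moves A directly away from B whenever they come too close.
    Returns A's updated position.
    """
    d = math.dist(a, b)
    if d >= min_dist or d == 0:
        return a                            # far enough: no reaction
    ux, uy = (a[0] - b[0]) / d, (a[1] - b[1]) / d
    return (a[0] + repel * ux, a[1] + repel * uy)
```

Other relations (allied, fearful, curious) would map to analogous attraction or avoidance rules, and their interplay is what lets unexpected storylines emerge.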
3.2.2 Entity games
Entity games are an application area of physical interaction technology. They combine the advantages of traditional screen displays with the flexibility of physical avatars, allowing people to share and interact with each other through direct touch. However, traditional physical game designs often lack the ability of active movement; applying the SUI to physical games can therefore open up more possibilities.
When designing a physical game using SUI, the game types can be roughly divided into four categories: single player games, multiplayer local games, multiplayer online games, and non-traditional types of games.
The design of the single-player game is based on traditional game classification. Through the sound and light effects of the robot itself and the background environment, the traditional game can be expressed in another way. While the design of the multiplayer local game is focused more on collaboration or competition, the design of the multiplayer online game is more preoccupied with using physical avatars (i.e., swarm robots) to represent the state of the player who is not present. In addition, unlike traditional games, physical games that maximize the characteristics of swarm robots are also worth exploring. More importantly, physical games that can be performed using swarm robots will introduce innovations into the design of physical games, and rejuvenate them.
4 Implementation
4.1 Overview
Our platform consists primarily of a small group of low-cost tabletop robots that move on paper maps, plus a GUI on a mobile or desktop device. The map carries a set of special microcodes for precise positioning, and the code map is printed by the general CMYK method. The robot is a simple, small, mobile, touchable interactive device with robust motion that is easy to arrange. Users arrange and perform the swarm's motions either by direct manipulation or through the accompanying software on a desktop device.
Currently, the implementation of our swarm robotics can be disassembled into three parts: drive structure with motion control algorithm, positioning system, and instruction and data transmission. The principle and implementation of each part will be explained in detail below (Figure 5).
4.2 Motion control
4.2.1 Hardware
To maintain smooth and rapid movement, we chose a three-wheel drive with spherical wheels. Each wheel is an adhesive-coated steel ball with a diameter of 22mm, driven by a miniature geared motor, an ASLONG JGA12-N20 (35mm×12mm×10mm, maximum speed 1500r/min). We fixed a permanent magnet on each motor shaft so that the motor can propel the ball wheel. The three motor units are arranged in a triangle, as shown in Figure 6.
This design enables the robot to move in any direction on the plane, and swiftly change direction. Furthermore, when the robot motion is changed from motor-driven to hand-held, this design can better circumvent wear and tear of the wheel. The hardware structure design of the robot base is based on the work of Cellulo[9,10].
The BOM of one robot

Category                 Type                        Number    Unit (RMB)    Total (RMB)
Housing                  3D Printing                 1         150           150
N20 Motor                1500 rpm                    3         15            45
Ring Magnet              10mm×3mm×3.5mm aperture     9         0.7           6.3
Friction Gear            3mm                         18        0.053         0.95
Localization Module      OID Module Sensor           1         20            20
Ball Wheel               22mm                        3         9             27
Printed Circuit Board    Customized                  1         80            80
EEPROM Plates            Customized                  1         12            12
Lithium Battery          7.4V 1000mAh                1         40            40
Total                                                                        381.25
4.2.2 Motion control
The control algorithm of the robot’s motion is divided into two parts. First, the speed decomposition algorithm of the three motors ensures that the robot can stabilize its linear motion in the open loop state. Second, combined with the global motion control of the positioning system, the robot can be stably moved to the expected position in the closed loop state.
In the open loop state, to map the total moving speed of the robot to the rotational speeds of the three motors, the total speed must first be decomposed. As shown in Figure 7, the robot coordinate system XOY is established, and the motion directions of the three wheels lie at 0°, 120°, and 240° relative to the robot's positive direction. Let v_x and v_y denote the components of the robot's velocity v along the X- and Y-axes of the XOY coordinate system; v_1, v_2, and v_3 the linear velocities of the three wheels; L the distance from the center point to each wheel; and ω the angular velocity of the robot. The speeds of the three wheels can then be expressed as:
$\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} \sin 60^\circ & \cos 60^\circ & L \\ -\sin 60^\circ & \cos 60^\circ & L \\ 0 & -1 & L \end{bmatrix} \begin{bmatrix} v_x \\ v_y \\ \omega \end{bmatrix}$
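The decomposition above is a straightforward matrix-vector product; a minimal sketch in code (the value of L here, 0.03 m, is an illustrative center-to-wheel distance, not the robot's measured dimension):

```python
import math

SIN60, COS60 = math.sin(math.radians(60)), math.cos(math.radians(60))

def wheel_speeds(vx, vy, omega, L=0.03):
    """Decompose the body velocity (vx, vy, omega) into the three wheel
    linear speeds v1, v2, v3 following the kinematic matrix above."""
    v1 = SIN60 * vx + COS60 * vy + L * omega
    v2 = -SIN60 * vx + COS60 * vy + L * omega
    v3 = -vy + L * omega
    return v1, v2, v3
```

Two sanity checks follow directly from the matrix: pure rotation (vx = vy = 0) drives all three wheels at Lω, and pure Y motion splits evenly between wheels 1 and 2 while wheel 3 runs backward at full speed.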
Global motion control is required during the actual motion of the robot to ensure smooth and precise movement to the target position. As shown in Figure 8, the positioning system first acquires the global position and angle of the robot. The global velocity vector of the robot is then calculated by comparing the current and target positions and applying a proportional-integral-derivative (PID) controller. Finally, the robot's motion velocity is obtained by global-to-local mapping and converted to the rotational speeds of the three motors by velocity decomposition.
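The closed-loop step can be sketched as one PID controller per axis acting on the position error. The gains and time step below are illustrative defaults, not the platform's tuned values.

```python
class PID:
    """Minimal PID controller for one axis of the global position error."""
    def __init__(self, kp=1.0, ki=0.0, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev = 0.0, None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev is None else (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def global_velocity(pos, target, pid_x, pid_y, dt=0.2):
    """One closed-loop step: position error -> global velocity vector.

    The result would then be rotated into the robot frame (global-to-local
    mapping) and decomposed into the three wheel speeds."""
    return (pid_x.step(target[0] - pos[0], dt),
            pid_y.step(target[1] - pos[1], dt))
```

With dt matching the ~0.2 s localization interval reported in Section 5, the controller would run once per valid position frame.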
4.3 Localization
The positioning system is composed of three main parts: the hardware module, data transmission, and path planning. The packaged positioning module collects the position data; the microcontroller unit (MCU) transmits the data to the central controller (CCU, generally a PC or tablet) through the wireless transmission module. The CCU then comprehensively processes the position information of all robots, performs global path planning according to each robot's task, and transmits destination information and motion modes back to the corresponding robot. Each small robot performs its own motion calculation and motion control independently (Figure 9).
4.3.1 Localization principle
The essential technology used in the positioning system is optical identification (OID) based on special paper codes. Combining the optical principle with special invisible printed codes, the module converts an absolute position on the paper into digital coordinates. Each OID-encoded pattern is composed of a number of subtle, inconspicuous code points arranged by specific rules, and each set of code points corresponds to a specific set of values. The biggest distinction from other OID codes is that this miniaturized base code not only offers confidentiality and low visual interference, but can also hide beneath printed color patterns. The OID localization method is a low-cost solution that can be achieved with ordinary ink printing. This technology does not limit the usage scenarios of the platform, and the code map can be customized according to the specific application.
The optical recognition technology utilized by OID is data hiding & retrieving technology (DHRT). Because general printing inks absorb different amounts of light at specific wavelengths, OID coded patterns can be hidden in various printed color graphics. For example, in the commonly used four-plate OID printing, three color plates are printed normally, one plate carries the OID base code, and the K plate is mixed into the C, M, and Y plates respectively (the ink for the codes is carbon-containing, while normal printing uses carbon-free inks). This method is easy to operate and incurs no additional expenses.
4.3.2 Localization module
The positioning module we employ is an integrated system-on-chip module that combines an image sensor and an image decoder developed for OID applications. The number of supported OID coding groups can be as high as 268435456, and the single-group size is 1.35mm×1.37mm. The recognition accuracy reaches 1/128 of one set of code points. The maximum supportable recognition range is about 20. With a visual hit rate of 98% and a modular sensor design, the reading angle can be up to ±45° from vertical, and the working illumination is 0–10000Lux. It can efficiently read OID codes even in outdoor sunlight (>5000Lux), with a reading error rate below 0.1%. The vertical distance between the sensor and the paper should exceed 7mm during operation. In terms of electrical characteristics, the module consumes only 6mA during operation and 1mA during standby, and the operating voltage is 2.8–3.6V, which greatly extends the module's operating time. The module's operating temperature range is 0–55°C.
The module supports a two-wire communication interface. By controlling the levels of the general I/O pins, the location data can be read with the transmitted command, and the processor can connect to other sensor units to support various applications.
Editing and generation of the optical codes are supported by specialized software. A background image can be loaded, and the software automatically generates a file with the optical codes overlaid. The file should be printed using a 1200 dpi laser printer (Figure 10).
4.4 Communication
The local swarm robot platform uses a centralized communication network built on the ZigBee low-power LAN communication protocol. ZigBee's biggest features are its support for self-organizing networks and dynamic routing, as well as its capacity for a large number of network nodes (up to 65000), which facilitates flexible expansion of the number of robots. In addition, when the platform is moved, the network and routing configuration does not need to be repeated; this reduces the learning cost and makes the platform easier to use.
We use the XBee wireless communication module for each robot to communicate with the central system. The robot sends its own position coordinates to the central system. The system determines the current state of the robot by comparing the coordinates with the pre-stored map information, and sends any event command that may be triggered to the robot. In addition to the comparison with map information, the central control system is also required to pay attention to the state between multiple robots. When an associated event occurs between multiple robots, the system is required to send instructions to the different robots separately (Figure 11).
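The central system's per-frame logic described above can be sketched as a dispatch function: compare each reported coordinate against pre-stored map zones, check inter-robot relationships, and collect the commands to transmit back. Zone formats, command names, and the pairwise-distance threshold are illustrative assumptions.

```python
import math

def dispatch(positions, regions, pair_event, pair_dist=50.0):
    """Central-system step: map reported positions to commands per robot.

    positions:  dict robot_id -> (x, y) reported over the radio link
    regions:    list of (xmin, ymin, xmax, ymax, command) map zones
    pair_event: command sent to both robots of any close pair
    Returns dict robot_id -> list of commands to transmit back.
    """
    cmds = {rid: [] for rid in positions}
    for rid, (x, y) in positions.items():
        for xmin, ymin, xmax, ymax, command in regions:
            if xmin <= x <= xmax and ymin <= y <= ymax:
                cmds[rid].append(command)          # map-zone event
    ids = sorted(positions)
    for i, a in enumerate(ids):                    # associated events
        for b in ids[i + 1:]:
            if math.dist(positions[a], positions[b]) < pair_dist:
                cmds[a].append(pair_event)
                cmds[b].append(pair_event)
    return cmds

cmds = dispatch({"r1": (5, 5), "r2": (20, 5)},
                [(0, 0, 10, 10, "enter_zone")], "meet")
```

In the real system each command list would be serialized into an XBee frame addressed to the corresponding robot.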
5 Platform test
To verify the stability and usability of the platform, we designed several experiments, which mainly tested the robot's motion capability and localization performance.
5.1 Maximum speed / minimum speed
To avoid the impact of the load on the speed of the robot, the tests of maximum speed and minimum speed are completed under no-load conditions.
In the maximum speed test, two of the robot's three wheels were driven at maximum speed in opposite directions. In the experiment, the speed fluctuated between 91mm/s and 400mm/s, and the median speed was 274mm/s.
The purpose of the minimum speed test is to find the critical motor speed, that is, the lowest motor speed at which the robot still moves. The experiment showed that the robot is in this critical state when the pulse width modulation (PWM) value of the motor is 44, corresponding to 17% of the maximum value.
5.2 Localization accuracy
The positioning accuracy experiment mainly verifies the number of frames of the positioning system and the frame loss/frame error rate.
5.2.1 Static accuracy
In the static test, the robot is placed stationary on the map, and the positioning system's readings are recorded to determine the frame rate and error rate. Five groups of experiments were run, each lasting 20s. The data are organized as follows:
The static recognition frame rate of the positioning system ranges from 5.08 to 5.40Hz, with an average of 5.18Hz.
As Figure 12 shows, the average accuracy of the static recognition of the positioning system is 98.2%, and two of the five data sets achieve 100% accuracy.
5.2.2 Dynamic accuracy
During movement, the positioning system may produce positioning errors caused by image blur. To verify the recognition frame rate and accuracy during motion, we designed a dynamic test. To ensure smooth running on the paper, the robot speed was set at 1/4 of the maximum speed. Six groups of tests were performed, each lasting 10s. The timestamps and coordinates during the robot's movement were recorded and organized as follows:
The time interval between two valid frames is also unstable. As shown in Figure 13, the intervals between valid frames are mostly concentrated around 0.2s, while some intervals are longer. Taking 1s as the time threshold for obtaining accurate data, the robot obtained localization information at least once per second in every set of experiments.
The robot does not move at a fixed speed. Figure 14 shows the speed of the robot between adjacent frames. For convenience of data sorting, the ordinate unit is the code-point coordinate rather than the actual distance. The figure shows that the robot's speed is concentrated in the range of 0–60 coordinates/s; many outliers are omitted from the figure. The speeds of these outliers are often very high, owing to recognition errors of the positioning system.
Taking 60 coordinates/s as the threshold of the robot's speed, any computed speed above it is regarded as a positioning-system recognition error, and the effective recognition frame rate can then be calculated. As shown in Figure 15, the average recognition frame rate of the positioning system in the dynamic test dropped from 5.18Hz (static) to 2.93Hz.
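The filtering step described above can be sketched as follows: discard any frame whose implied speed relative to the last accepted frame exceeds the threshold, then compute the frame rate over the survivors. The frame format and sample data are illustrative, not the paper's recorded measurements.

```python
import math

def effective_frame_rate(frames, max_speed=60.0):
    """Compute the effective localization frame rate.

    Frames whose implied speed exceeds max_speed (coordinate units per
    second) are treated as recognition errors and discarded.

    frames: list of (timestamp_s, x, y) in code-point coordinates
    """
    valid = [frames[0]]
    for t, x, y in frames[1:]:
        t0, x0, y0 = valid[-1]
        speed = math.dist((x, y), (x0, y0)) / (t - t0)
        if speed <= max_speed:              # keep only plausible frames
            valid.append((t, x, y))
    duration = valid[-1][0] - valid[0][0]
    return (len(valid) - 1) / duration if duration > 0 else 0.0

# one error frame (implied speed ~980 units/s) is filtered out
rate = effective_frame_rate([(0.0, 0, 0), (0.5, 10, 0), (1.0, 500, 0), (1.5, 20, 0)])
```

The same per-pair speed computation is what produces the outlier points omitted from Figure 14.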
5.3 Discussion and future research directions
The experiments suggest that the robot's maximum speed fluctuates over a wide range, making stability difficult to guarantee. The positioning system achieves high stability and accuracy under static conditions but is prone to recognition errors and frame loss under dynamic conditions. Future research needs to focus on these concerns.
6 Conclusion
It is apparent that swarm robots have many potential application scenarios in education and entertainment that have not yet been explored. This study only classifies the basic interaction modes of the swarm robotics system; this does not imply that the interaction modes for swarm robotics are limited to these seven types. Users can combine these interaction methods, or create new interaction forms according to the application scenario. The platform adopts paper-based positioning using OID optical identification codes instead of projection-based positioning, giving the swarm system strong portability. The emergence of a universal swarm robotics platform relieves users of the burden of implementing the underlying technology, leaving them free to focus on application design. However, the current performance evaluation of our swarm robotics has concentrated only on motion performance; other aspects such as system stability, communication delay, and path planning have not been verified. We will focus on these in future experiments. Finally, it is also imperative to explore more application scenarios based on swarm robots.

References

1.

le Goc M, Kim L H, Parsaei A, Fekete J D, Dragicevic P, Follmer S. Zooids: building blocks for swarm user interfaces. In: Proceedings of the 29th Annual Symposium on User Interface Software and Technology, ACM, 2016, 97–109 DOI:10.1145/2984511.2984547

2.

Gbenga D E, Ramlan E I. Understanding the limitations of particle swarm algorithm for dynamic optimization tasks. ACM Computing Surveys 2016, 49(1): 1–25 DOI:10.1145/2906150

3.

Komatsubara T, Shiomi M, Kanda T, Ishiguro H, Hagita N. Can a social robot help children's understanding of science in classrooms? In: Proceedings of the second international conference on Human-agent interaction.Tsukuba, Japan, ACM, 2014, 83–90 DOI:10.1145/2658861.2658881

4.

Özgür A, Lemaignan S, Johal W, Beltran M, Briod M, Pereyre L, Mondada F, Dillenbourg P. Cellulo. In: Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. Vienna, Austria, ACM, 2017, 119–127 DOI:10.1145/2909824.3020247

5.

Özgür, A, Johal W, Mondada F, Dillenbourg P. Windfield: demonstrating wind meteorology with handheld haptic robots. In: Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, ACM, 2017, 48–49 DOI:10.1145/3029798.3036664

6.

Asselborn T, Guneysu A, Mrini K, Yadollahi E, Ozgur A, Johal W, Dillenbourg P. Bringing letters to life: handwriting with haptic-enabled tangible robots. In: Proceedings of the 17th ACM Conference on Interaction Design and Children. Trondheim, Norway, ACM, 2018, 219–230

7.

Guneysu Ozgur A, Wessel M J, Johal W, Sharma K, Özgür A, Vuadens P, Mondada F, Hummel F C, Dillenbourg P. Iterative design of an upper limb rehabilitation game with tangible robots. In: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. Chicago, IL, USA, ACM, 2018, 241–250 DOI:10.1145/3171221.3171262

8.

Suzuki R, Kato J, Gross M D, Yeh T. Reactile: Programming Swarm User Interfaces through Direct Physical Manipulation. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Montreal QC, Canada, ACM, 2018, 1–13 DOI:10.1145/3173574.3173773

9.

Kim L H, Follmer S. UbiSwarm: Ubiquitous Robotic Interfaces and Investigation of Abstract Motion as a Display. ACM Interact. Mob. Wearable Ubiquitous Technol. 2017, 1(3): 1–20 DOI:10.1145/313093

10.

Lee S W. Automatic gesture recognition for intelligent human-robot interaction. In: 7th International Conference on Automatic Face and Gesture Recognition (FGR06). Southampton, UK, IEEE, 2006, 645–650 DOI:10.1109/fgr.2006.25