Brown University scientists use VR to control robots, and open-source the project code

January 17, 2018. Although automated robots can perform many tasks independently and capably, there are still situations that call for human intervention and control. New software developed by Brown University computer scientists lets users remotely control robots through virtual reality, immersing them in the robot's environment.

The software connects the robot's arms and grippers, as well as its onboard cameras and sensors, to virtual reality hardware. With a handheld controller, the user can set the position of the robot's arm and carry out complex manipulation tasks.
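To make that control loop concrete, here is a minimal sketch of how a handheld controller's pose might be mapped to a target pose for the robot's gripper. The class, offsets, and workspace limits are illustrative assumptions, not details taken from the Brown system.

from dataclasses import dataclass

@dataclass
class Pose:
    # Position in metres, orientation as a quaternion (x, y, z, w).
    x: float
    y: float
    z: float
    qx: float = 0.0
    qy: float = 0.0
    qz: float = 0.0
    qw: float = 1.0

def clamp(value, low, high):
    """Keep a coordinate inside the arm's reachable workspace."""
    return max(low, min(high, value))

def controller_to_arm_target(controller: Pose, scale: float = 1.0) -> Pose:
    """Map a VR controller pose (headset frame) to a gripper target pose
    (robot base frame). Offsets and limits here are illustrative values,
    not the real robot's calibration."""
    return Pose(
        x=clamp(scale * controller.x + 0.40, 0.20, 0.80),  # forward of the base
        y=clamp(scale * controller.y, -0.50, 0.50),         # left/right
        z=clamp(scale * controller.z + 0.10, 0.05, 1.20),   # height above the table
        qx=controller.qx, qy=controller.qy,                  # pass orientation through
        qz=controller.qz, qw=controller.qw,
    )

# Each tracked frame, the current controller pose becomes the next gripper goal;
# a trigger press on the controller could open or close the gripper.
goal = controller_to_arm_target(Pose(x=0.10, y=-0.05, z=0.30))
print(goal)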

The user can step into the robot's metal skin for a first-person view of the environment, or survey the scene from a third-person perspective, switching freely to find the most convenient way to complete the task. The data transmitted between the robot and the virtual reality device is compact enough to be sent over the Internet with minimal delay, allowing the user to operate the robot remotely.
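One simple way to think about the free switch between views is as a choice of which coordinate frame the virtual camera is attached to. The sketch below illustrates that idea only; the frame names and poses are assumptions rather than details from the published system.

from enum import Enum

class ViewMode(Enum):
    FIRST_PERSON = "robot_head"   # camera rides on the robot's head frame
    THIRD_PERSON = "free_orbit"   # camera floats at a user-chosen vantage point

class VirtualCamera:
    def __init__(self):
        self.mode = ViewMode.THIRD_PERSON
        self.orbit_pose = (1.5, 0.0, 1.2)  # metres; illustrative default vantage point

    def toggle(self):
        """Flip between stepping 'into' the robot and surveying the scene."""
        self.mode = (ViewMode.THIRD_PERSON
                     if self.mode is ViewMode.FIRST_PERSON
                     else ViewMode.FIRST_PERSON)

    def camera_pose(self, robot_head_pose):
        # In first person the headset tracks the robot's head; in third person
        # it stays at the free-floating vantage point the user walked to.
        return robot_head_pose if self.mode is ViewMode.FIRST_PERSON else self.orbit_pose

cam = VirtualCamera()
cam.toggle()  # step into the robot
print(cam.mode, cam.camera_pose((0.0, 0.0, 1.4)))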

David Whitney, a Brown University graduate student and co-lead developer of the system, said: "We think this could be useful in situations that require delicate manipulation but where humans should not be present. The three specific use cases we are thinking about are defusing bombs, completing missions inside a damaged nuclear facility, and operating a robotic arm on the International Space Station."

The project's other lead is Eric Rosen, an undergraduate at Brown University. Both work in the Humans to Robots lab, led by Stefanie Tellex, an assistant professor of computer science. They presented a paper describing the system and evaluating its usefulness at an international robotics symposium held in Chile this week.

Even highly complex robots are often controlled remotely with fairly simple means, typically a keyboard or a gamepad paired with a two-dimensional display. Whitney and Rosen point out that this works well for driving a wheeled robot or a drone, but may not suit more complex tasks.

Whitney said: "For machines with many degrees of freedom, like robotic arms, keyboards and gamepads are not very intuitive." They also note that mapping a three-dimensional environment onto a two-dimensional screen limits the operator's perception of the space the robot occupies.

Whitney and Rosen believe virtual reality may offer a more intuitive and immersive option. Their idea is similar to a previously reported Toyota experiment.

Extended reading: Full-body motion mapping: Toyota remotely controls a robot with the HTC Vive

The software uses the robot's sensors to create a point-cloud model of the robot and its environment, which is transmitted to a remote computer connected to the Vive. Through the headset the user can perceive the corresponding space and walk around inside it virtually. At the same time, the user can watch real-time high-definition video from a camera on the robot's wrist for a detailed view of manipulation tasks.
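One plausible way to keep such a point cloud compact enough to stream over the Internet is to downsample it onto a voxel grid before sending, keeping one representative point per occupied cell. The snippet below is a rough sketch of that idea using only the standard library; the voxel size and packing format are assumptions, not values from the researchers' implementation.

import struct
from collections import defaultdict

def voxel_downsample(points, voxel=0.02):
    """Collapse points (x, y, z) that fall in the same 2 cm voxel into their centroid."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        cells[key].append((x, y, z))
    return [tuple(sum(c) / len(c) for c in zip(*pts)) for pts in cells.values()]

def pack_cloud(points):
    """Serialise the cloud as little-endian 32-bit floats for the network link."""
    buf = bytearray(struct.pack("<I", len(points)))  # point-count header
    for p in points:
        buf += struct.pack("<3f", *p)
    return bytes(buf)

raw = [(0.011 * i, 0.005 * i, 1.0) for i in range(10_000)]  # stand-in sensor data
small = voxel_downsample(raw)
print(len(raw), "->", len(small), "points,", len(pack_cloud(small)), "bytes on the wire")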

The researchers say they can create an immersive experience for users while keeping the data load small enough to transmit over the Internet. For example, a user in Providence remotely operated a robot in Cambridge, Massachusetts (41 miles away), having it stack a series of plastic cups.
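As a rough, back-of-envelope illustration of why the data load matters (every number below is an assumption for illustration, not a figure reported by the researchers), streaming even a modest point cloud a few times per second can approach ordinary household upload bandwidth unless it is downsampled or compressed first.

def cloud_bandwidth_mbps(points, bytes_per_point=12, updates_per_sec=5):
    """Estimate the bit rate of streaming a point cloud.
    12 bytes/point = three 32-bit floats; all parameters are illustrative."""
    return points * bytes_per_point * updates_per_sec * 8 / 1e6

print(cloud_bandwidth_mbps(50_000))  # ~24 Mbps before any reduction
print(cloud_bandwidth_mbps(5_000))   # ~2.4 Mbps after aggressive downsampling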

In another study, 18 novice users completed the cup-stacking task 66 percent faster with the virtual reality interface than with a traditional keyboard-and-monitor interface. Users also said they preferred the virtual interface and found it easier to operate.

Rosen attributes the faster task execution to the intuitive nature of the virtual reality interface. "In VR, people can move the robot the way they move their own bodies, so they can operate without thinking about it. That lets them focus on the problem or task at hand without spending extra time learning how to move the robot," Rosen said.

The researchers plan to develop the system further. The first iteration focused on a fairly simple manipulation task with the robot remaining stationary; they now want to attempt more complex tasks, such as combining navigation with manipulation. They also want to experiment with mixed autonomy, in which the robot completes some tasks on its own while the user takes over others.
