This post originally appeared on VisionaryExcellence.com.
CRIIF and PrimeSense talk about SAMI, the humanoid robot set to change the world.
This summer, the Centre de Robotique Intégrée d’Ile de France (CRIIF) debuted an incredible invention: SAMI, a humanoid robot that can cook, clean and navigate your living room without running into a single piece of furniture. No, this is not a science fiction novel. SAMI is a real prototype, which took six people and $100,000 to create.
Last week, Visionary Excellence was able to get details about this groundbreaking new robot from key players at the two companies that made SAMI possible. First, Rodolphe Hasselvander, executive director of CRIIF, described SAMI’s capabilities in detail and gave us inside information about when the technology will be available for sale.
VE: When did you begin work on the SAMI project? How long did the robot take to complete?
RH: We had the idea for SAMI over a year ago, in June 2010, and the first thing we did was make a head using 3D printing. We just felt like he needed to exist quickly in some way, so we decided to start there. For over a year, we carried his interactive head around with us to conferences and exhibitions, and the head talked about what SAMI could be. It took about a year from conception to his debut at Future-en-Seine in Paris in June, where we unveiled his full form and movements. He is now a mobile robotic platform that can interact with his environment.
VE: How do the 3D sensors work in SAMI? How do users interface with the robot and teach him new tasks?
RH: The 3D sensors from PrimeSense give SAMI a real perception and understanding of the environment. We obtain a point cloud of the scene, and our proprietary algorithms allow SAMI to avoid obstacles and identify key points in the environment, including objects and people. The data collected from the sensors allows SAMI to be fully autonomous in the house or in a specific environment.
We also have another PrimeSense sensor in SAMI that can detect people in its surroundings. SAMI can see where people are physically and then go help them. This sensor provides a great way to interact through the unique movements or behaviors of a particular environment. The sensors also enable SAMI to perform specific tasks: “pick up that remote control,” “move the box,” “lift that load.” Because SAMI is an anthropomorphic robot, any human-like movement can be learned and reproduced.
Thanks to the 3D sensors and the internet, SAMI can be fully controlled remotely, and could also become a real avatar for doctors and emergency workers to assist SAMI owners instantly from their offices: bringing medicine, making the bed, helping if someone has fallen, anything a human could do.
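CRIIF’s algorithms are proprietary, but the basic idea of flagging obstacles in the robot’s path from a depth sensor’s point cloud can be sketched as follows. This is an illustrative Python/NumPy sketch; the function name, frame conventions, and thresholds are assumptions, not CRIIF’s actual code:

```python
import numpy as np

def path_is_clear(points, corridor_width=0.6, max_range=1.5,
                  floor_height=0.05, robot_height=1.4):
    """Check a depth-camera point cloud (N x 3 array in meters,
    camera frame: x right, y up, z forward) for obstacles ahead.

    All parameter values are illustrative guesses, not CRIIF's.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Keep only points inside the corridor the robot will drive through.
    in_corridor = (np.abs(x) < corridor_width / 2) & (z > 0) & (z < max_range)
    # Ignore the floor plane and anything above the robot's height.
    in_height_band = (y > floor_height) & (y < robot_height)
    obstacles = points[in_corridor & in_height_band]
    return len(obstacles) == 0

# A box-shaped obstacle 1 m ahead blocks the path:
box = np.array([[0.0, 0.5, 1.0], [0.1, 0.6, 1.0]])
print(path_is_clear(box))            # False
print(path_is_clear(np.empty((0, 3))))  # True
```

A real system would run a check like this on every depth frame (up to 60 times per second with this sensor) and feed the result into the motion planner.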
VE: When will the prototype be available for sale?
RH: SAMI is a first generation prototype and is being refined continuously. We expect SAMI to be ready for trials and sales by Q4 2013.
VE: What additional functionality do you plan to add to SAMI?
RH: We are planning to develop a brain-control system that will give handicapped people real control over their world. In the future, SAMI will be even more autonomous. The plan is for him to be able to understand, learn, and help people who live alone with mobility problems.
SAMI is poised to change the world in many ways, especially assisted living. The critical mechanisms that make SAMI autonomous are 3D cameras that “see” for the robot, keeping it from crashing into objects in its environment and allowing it to identify objects to retrieve. The cameras used in this prototype are made by the company PrimeSense, and it’s the same technology that powers Microsoft’s Kinect camera. Tal Dagan, vice president of marketing at PrimeSense, gave Visionary Excellence the inside scoop on the camera that’s going to change your life by changing the way robots navigate your living room, the way you try on clothes and, of course, the way you play video games.
TD: In both cases, the companies set out to build service robots that can maneuver autonomously and interact with people, and in both cases the PrimeSense technology was chosen to be embedded in the robots’ modules, so the bottom line is a similar experience. In such cases, PrimeSense offers an end-to-end solution that consists of the 3D sensors, software drivers, and relevant middleware, if required. Of course, we also work together with our partners and support them as much as required to embed the solution into an end-to-end product.
VE: Did the 3D camera need to be modified to work with the robot? How?
TD: In these cases, the PrimeSense sensor met the robots’ needs well, so there was no need for modifications on our part. In many other cases, where our customers have specific requirements, such as the sensor’s resolution, field of view, working distance, physical dimensions, or any other parameter, we work with our partners to adapt our system and provide the specific solution required.
VE: What are the current specs for the camera, including lag time and resolution?
TD: The sensor’s full spec can be found on our website, www.primesense.com. In general, it’s a VGA sensor with a field of view of 58° horizontal by 45° vertical, a spatial resolution of 3 mm (at 2 meters), a maximum frame rate of 60 FPS, and an average latency at VGA of 40 ms. An important note is that the systems can be configured for specific needs, so these are just the basic initial specifications.
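The fields of view quoted above are enough to project a depth pixel into a 3D point using the standard pinhole-camera model. A minimal sketch, assuming the quoted VGA resolution and FOV figures (the calibrated intrinsics PrimeSense actually ships would be slightly different):

```python
import math

# VGA resolution and the fields of view quoted above.
WIDTH, HEIGHT = 640, 480
FOV_H = math.radians(58)
FOV_V = math.radians(45)

# Focal lengths in pixels, derived from the fields of view.
FX = (WIDTH / 2) / math.tan(FOV_H / 2)
FY = (HEIGHT / 2) / math.tan(FOV_V / 2)

def pixel_to_point(u, v, depth_m):
    """Project pixel (u, v) with a depth in meters to a 3D camera-frame point."""
    x = (u - WIDTH / 2) * depth_m / FX
    y = (v - HEIGHT / 2) * depth_m / FY
    return (x, y, depth_m)

# The center pixel at 2 m lies on the optical axis:
print(pixel_to_point(320, 240, 2.0))  # (0.0, 0.0, 2.0)
```

Applying this projection to every pixel of a depth frame is what produces the point cloud that CRIIF’s navigation algorithms consume.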
VE: Obviously you have a huge foothold in entertainment and personal computing. What other industries need your technology but are not aware of its applications? Medicine? Education? Security?
TD: PrimeSense’s vision from the company’s launch was to give devices and machines the gift of sight, and by doing so enable people and devices to interact in a natural way, similar to the interaction between people. We started in the gaming and living-room space, and together with the largest partners in the industry we have helped revolutionize that space. But we see that our technology is relevant, and can really make a difference, in many other industries. In some cases it enables new applications that weren’t possible before; in others it will let existing applications reach the mass market, as it lowers the cost of a 3D solution from several thousand dollars to under $200. Some examples are robotics, interactive displays in digital signage, analytics and security applications in retail, a variety of medical applications for rehabilitation and monitoring, and many more. We already have many partners working in all these fields, and shortly we will see many new applications being launched.
VE: Any plans to make 3D projectors for holograms?
TD: I am not aware of any, but I must say that we are not aware of ALL the applications being developed with our technology. Today, whoever wants to develop 3D applications can go to the OpenNI.org website, download the open-source SDK and middleware, and start immediately. As the development process has become so simple, and with the technology available at a mass-market price, we see tens of thousands of developers downloading the SDK each month and developing numerous new applications for a variety of markets. In many cases these are markets we were not even aware of, but the implications are nothing short of amazing. Again and again we see new applications and use cases that are changing their markets.
Do you have any questions about how SAMI was developed? Have you built a humanoid robot?