The video provides good information, both on how the products function and on their commercial status. Highly recommended! More about the hair-washing robot: a nice image of the way the bed transforms into a chair (it ‘undocks’):
Here is an interesting new robotic surgery assistant: the ARTAS™ System. It was recently cleared by the FDA (read more). ARTAS apparently assists with something called ‘hair follicle harvesting’.
The ARTAS System (Source: Restoration Robotics, Inc.)
The procedure is as follows. The client first sits in the ARTAS chair and his hair is trimmed to about a millimetre in length. Then, under a doctor’s control, a robotic arm equipped with a camera makes ‘small dermal punches’ and harvests individual follicles. The follicles, which are later transplanted by hand, will start producing their own hair over the following months.
This contrasts with older techniques, like strip harvesting, in which a strip of skin with hair is transplanted to a balding area. The company expects to reach extraction rates of 750 to 1,000 follicular units per hour. In addition, it may require fewer staff (although robot support engineers should probably be on standby).
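To put those rates in perspective, here is a back-of-the-envelope session-length calculation. Note that the graft counts used below are my own assumption of a “typical session” size, not figures from the article:

```python
# Back-of-the-envelope session length at the extraction rates the company
# expects (750-1,000 follicular units per hour). The graft counts below
# are hypothetical "typical session" sizes, not figures from the article.
def session_hours(grafts, rate_per_hour):
    """Hours needed to harvest a given number of follicular units."""
    return grafts / rate_per_hour

for grafts in (1500, 2500):
    fast = session_hours(grafts, 1000)  # at the upper rate bound
    slow = session_hours(grafts, 750)   # at the lower rate bound
    print(f"{grafts} grafts: {fast:.1f} to {slow:.1f} hours")
```

So even at the slower rate, a sizeable session would stay within a few hours of robotic harvesting time.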
A very interesting ‘social robotics’ study recently gave us all some nice results. Apparently, people do not always appreciate being touched by a robot. Somewhat surprisingly, the results from one of the experiments showed that it matters why the robot touches you. If people think (because they are told) that they are being touched in order to be washed, then that is okay; but if the robot touched someone to comfort them, then they found it much less agreeable, even though the touch movement was exactly the same. Apparently, instrumental touching is more acceptable than social touching, and the perceived intention is what matters, according
to one of the researchers, Charlie Kemp. However, if you compare this result to the positive responses generally reported with the huggable robot Paro, then I think that this result may depend to a large extent on the actual appearance and exact behaviour of the robot. In this case the appearance and behaviour of the robot, Cody, may have created a mismatch with an intention to provide a comforting touch. In other words, the robot does not look or act like it is designed to provide a comforting touch; it looks like it is designed to clean people (which is exactly what it was designed for). In addition, the results showed that people did not like it if the robot announced that it was going to touch them, perhaps, as indicated by the researchers, because the voice startled them. Here, I think it is very important how exactly a robot speaks. If it speaks with a moving mouth and facial gestures, then this comes across as if the voice is coming from the robot. If a robot has a face and mouth that are able to ‘speak’ then people may actually expect a voice. But, if a robot speaks ‘out of nowhere’, for example if it merely plays a soundbite through a speaker, then this
can easily startle people. It is a disembodied voice. So, again I think that follow-up experiments should be done to provide more conclusive results (as also suggested by the researchers). In a way this resembles the previous critical remark: the robot does not look like it was designed to talk to people, so it may come across as a mismatch if it does talk. But, all in all, this sort of research is very useful and more of it is needed to support the successful introduction of healthcare robotics. The original paper, presented at the HRI 2011 conference, can be downloaded here. Science News Blog wrote a decent summary as well: Study Investigates How People Respond to Being Touched by a Robot.
for Papers is out (the CFP page, and here the Full CFP Details). The timeline is as follows:
Paper submission: June 1st
Notification of acceptance: August 1st
Final manuscript submission: September 7th
Conference: November 24-25
The International Conference on Social Robotics brings researchers
and practitioners together to report on and discuss the state-of-the-art research in the field of social robotics. The conference focuses particularly on social interaction between humans and robots, the integration of robots into our society, and the design of next generation social robot interfaces and systems. The theme of the 2011 conference is “Alive!” It expresses the vitality of the social
robotics research, paying particular attention to the development of robots that appear increasingly social, to the point that people perceive them to be alive. The conference aims to foster discussion on the development of computational models, robotic embodiments, and behavior that enable robots to act socially, and on the impact that social robots have on people and their social and physical environment.
Ben Robins, who has done a lot of work studying how robots might benefit children with autism, is quoted as saying:
“Children with autism don’t react well to people because they don’t understand facial expressions,” said Ben Robins, a senior research fellow in computer science at the University of Hertfordshire who specializes in working with autistic children. “Robots are much safer for them because there’s less for them to interpret and they are very predictable.”
The article neatly describes the current state of the science behind the idea that social robots can help autistic children to learn and train certain social skills, including the work with Kaspar (which has been ongoing since 2005). I also found a nice BBC video from 2008 about Kaspar and the work of Robins and others. And there is a long Japanese documentary about Kaspar and the work of Robins et al. For those with a mind for reading, check out papers on the work with Kaspar by Robins and colleagues, or browse Robins’ extensive publication list.
Recently, on October 25, Jeroen spoke at a workshop about healthcare robotics. It was organised by Kennisalliantie and Syntens, who wish to set things in motion, especially in the Dutch ‘Medical Delta’ (roughly Rotterdam-Delft-Leiden). Prof. Luc de Witte opened the day, followed by Boudewijn Wisse, and finally Jeroen Arendsen. In the afternoon the discussion continued in groups. The video gives a good impression of the day. For Robots that Care the initial contact
ScienceDaily (Feb. 3, 2011) — Surgeons of the future might use a system that recognizes hand gestures as commands to control a robotic scrub nurse or tell a computer to display medical images of the patient during
Purdue industrial engineering graduate student Mithun Jacob uses a prototype robotic scrub nurse with graduate student Yu-Ting Li. Researchers are developing a system that recognizes hand gestures to control the robot or tell a computer to display medical images of the patient during an operation. (Credit: Purdue University photo/Mark Simons)
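The article does not detail the software, but the core idea, mapping recognized hand gestures to sterile-field commands, can be sketched as a simple dispatch table. All gesture labels and commands below are invented for illustration; in a real system the labels would come from a vision-based gesture classifier:

```python
# Minimal sketch of gesture-to-command dispatch, as a gesture-controlled
# scrub-nurse system might use it. Gesture labels and commands are invented;
# a real system would receive labels from a vision classifier.
from typing import Callable, Dict

def pass_instrument(name: str) -> str:
    return f"robot passes {name}"

def scroll_images(direction: str) -> str:
    return f"display scrolls images {direction}"

COMMANDS: Dict[str, Callable[[], str]] = {
    "open_palm": lambda: pass_instrument("scalpel"),
    "two_fingers": lambda: pass_instrument("forceps"),
    "swipe_left": lambda: scroll_images("backward"),
    "swipe_right": lambda: scroll_images("forward"),
}

def handle_gesture(label: str) -> str:
    """Dispatch a recognized gesture label; ignore unknown gestures safely."""
    action = COMMANDS.get(label)
    return action() if action else "no-op"

print(handle_gesture("open_palm"))  # robot passes scalpel
print(handle_gesture("wave"))       # no-op
```

Ignoring unknown gestures rather than guessing is the safety-critical design choice here: in an operating room, a false positive (handing over the wrong instrument) is worse than a missed command.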
I believe, partly based on the insightful comments given at Nursing Advocacy, that it is not a good idea to replace human scrub nurses with robotic ones; it would be much better to solve the shortage of nurses. A scrub nurse simply does so much, much more than a robot will be able to do for at least the next fifty years or so.
The Dutch website Zorgvisie held a poll asking: ‘More robot helpers by the bed: an improvement for healthcare or not?’ The result: 73 percent of the 371 responding visitors to Zorgvisie.nl felt it was not an improvement. But what does this mean exactly? It pays to have a look at the occasion for the poll, which was news (on Zorgvisie) about a German ‘helper robot’, the Casero (see picture). More such robots are under development (e.g. Care-o-Bot). One could describe them as driving carts, with clever bits and pieces, that can serve drinks and food, for example.
The Casero Helper Robot (source: Zorgvisie)
According to the researchers developing it, the idea behind Casero is (source: Robotics Wire):
When the Duisburg researchers observed the care workers’ daily routines and tasks, they soon noticed that there was seldom
enough time to exchange a few kind words with patients. Staff shortages were everywhere. While the robots run errands and allow games to be played on their displays, care workers could devote more of their time to caring for the elderly.
Well, that line of reasoning is interesting, but illogical if buying and maintaining robots is as expensive as hiring people; in that case it should be seen as replacement. And Zorgvisie also reports:
The robot is hardly cheap. “Casero is as expensive as a fulltime hire”, says Volker Bessler of the care home in Stuttgart where the first service robot was tested.
In this light, it is understandable that most people rejected the idea that it constitutes an improvement. And then
At CES 2011, look who’s there: it’s Fujitsu’s Robot Teddy Bear!
For about two years Fujitsu has been displaying this robot teddy bear, sometimes named Care Bear, Motion Bear or E-Bear, at various tradeshows. They tell us it is ‘still in development’ or ‘in a concept phase’. It does seem to be responding better every time I see it. In any case it manages to win the hearts of many already.
The area of (useful) application for this robot is comparable to that of Paro. Fujitsu Labs develops this “social robot with a personality” for use in “robot therapy”, for example for patients that suffer from dementia, says Fujitsu. The bear can display basic emotions through animatronics and react to its surroundings.
Sensors enable the robot teddy bear to respond to external stimuli; it is equipped with thirteen sensors (e.g. a webcam and touch sensors) in different locations on its body. The bear has a camera in its nose and machine vision to recognize human shapes, faces and (waving) gestures. It can see a person nearby and, for example, turn in their direction and make eye contact. If you wave at it, the bear waves back. It also senses being patted or stroked in various places and can respond, for example, by waking up (from sleeping) or with sounds. It has eight touch sensors in its body, two sensors in its arms detect when someone is shaking its hands, and gyroscopes and accelerometers detect when the bear is being moved.
The bear has twelve ‘degrees of freedom’ (joints): it can move both arms and legs, tilt its head, and move its eyebrows and ears. Combining these basic motions, the robot bears are said to be capable of up to 300 movement patterns, including raising its arms, looking downwards and kicking its feet. The movements are combined with displays of “emotions” to signal happiness, sadness and anger, says Fujitsu. And since the robot can be connected to a PC, new movements can be recorded and displayed.
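That “300 movement patterns from combined basic motions” claim is easy to see combinatorially. Here is a sketch; only the joint groups (arms, legs, head, eyebrows, ears) come from Fujitsu’s description, while the individual motion names are my own invention:

```python
# Sketch: combining a few basic motions per body part quickly yields many
# composite patterns. Motion names are invented for illustration; only the
# joint groups come from the description of the bear.
from itertools import product

BASIC_MOTIONS = {
    "arms": ["rest", "raise", "lower"],
    "legs": ["rest", "kick"],
    "head": ["rest", "tilt", "look_down"],
    "eyebrows": ["rest", "raise"],
    "ears": ["rest", "wiggle"],
}

# Every composite pattern picks one basic motion per body part.
patterns = list(product(*BASIC_MOTIONS.values()))
print(len(patterns))  # 3 * 2 * 3 * 2 * 2 = 72
```

Even this toy inventory yields 72 composite patterns; adding just a couple more basic motions per joint group easily pushes the count past 300.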
The bear can apparently talk with the voice of a young boy, using a speech synthesizer and a built-in speaker, so the sound can be synchronised with the robot’s other behavior. But, so far, I have only heard it make strange giggly noises.
It would be good if Fujitsu created some more appropriate teddy bear sounds, whatever those should be (in this respect, Paro has a clear advantage, as it makes very nice, affectionate baby seal sounds).
What makes these robots interesting, says Fujitsu,
is that they are interactive and real, in a world that is full of screens. The bears can be played with physically and are likely to integrate easily into people’s lives, says the company.
Fujitsu hopes its teddy bear can help develop “robot therapy,” a way to use robots to help people overcome challenges or problems, comparable to how “animal therapy” is used today, only without the hassle of having to clean up after or deal with grumpy animals. Since 2010, Fujitsu has been testing it at several medical institutions, and it seems that the face recognition isn’t working as reliably as they want, delaying commercialization.
Well, hopefully we will be seeing more from this robot teddy bear soon, when it becomes available as a product. I think Paro could use a little competition in the market, don’t you?