An experiment by computer scientists at the University of Washington has demonstrated that robots can learn more efficient ways to complete tasks through crowdsourcing. By using machine-learning techniques to analyze large amounts of data, robots can create their own solutions based on information from hundreds or even thousands of suggestions.
Previous experiments have shown that imitating humans is an effective way to teach robots new tasks, but the approach has pitfalls. Sometimes a robot cannot complete a task the same way a human does, or cannot accurately identify the components critical to the objective. To sidestep these problems, the researchers gave a robot more than one example to analyze.
“Because our robots use machine-learning techniques, they require a lot of data to build accurate models of the task. The more data they have, the better model they can build,” says Maya Cakmak, assistant professor of computer science, in a statement. “Our solution is to get that data from crowdsourcing.”
The computer scientists presented the robot with a task: assembling a picture out of colored blocks. They would name a subject, such as a turtle, and then present one possible way to portray a turtle using the blocks. The researchers then used Amazon’s Mechanical Turk (AMT) service to generate more solutions. AMT users complete micro-tasks in exchange for small payments, usually between 1 cent and 20 cents. Because AMT is relatively well known, it yielded thousands of results, but quality control was poor: some users intentionally provided bad solutions in an attempt to derail the research.
In addition to the originally proposed representation of a turtle, the robot analyzed the thousands of solutions offered by AMT users, giving higher precedence to those rated well by other users and identifying the most important elements of the design. It then created its own model of a turtle. The robot’s solution was usually simpler than the user suggestions, so it could complete the task more easily, but it still contained all the major elements of the design.
“The end result is still a turtle, but it’s something that is manageable for the robot and similar enough to the original model, so it achieves the same goal,” Cakmak says in the statement.
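The article does not describe the researchers' actual algorithm, but the idea of scoring crowdsourced designs by user ratings and keeping only the most agreed-upon elements can be sketched roughly as follows. All names and the data model (blocks as color/position pairs, ratings as weights) are illustrative assumptions, not the team's implementation:

```python
# Hypothetical sketch: rating-weighted aggregation of crowdsourced block designs.
# Each solution is a set of (color, position) blocks plus a user rating.
from collections import Counter

def aggregate_solutions(solutions, keep_fraction=0.5):
    """Score each block by the summed ratings of the solutions containing it,
    then keep blocks whose score clears a fraction of the top score.
    The result is a simpler model that retains the most agreed-upon elements."""
    scores = Counter()
    for blocks, rating in solutions:
        for block in set(blocks):
            scores[block] += rating
    if not scores:
        return set()
    threshold = keep_fraction * max(scores.values())
    return {block for block, score in scores.items() if score >= threshold}

# Three made-up "turtle" designs with ratings; widely shared, well-rated
# blocks survive, while a block from a single low-rated design is dropped.
solutions = [
    ({("green", (0, 0)), ("green", (1, 0)), ("brown", (0, 1))}, 5),
    ({("green", (0, 0)), ("green", (1, 0))}, 4),
    ({("green", (0, 0)), ("blue", (2, 2))}, 1),
]
model = aggregate_solutions(solutions)
```

Here the block shared by all three designs scores highest, and the outlier from the single low-rated submission falls below the threshold, mirroring how the robot ends up with a simplified design that still "achieves the same goal."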
The University of Washington researchers felt that the concept held promise, but that the issue of quality control was “nontrivial.” During the experiment, some spammers used multiple accounts to provide the same solution repeatedly, and others failed to follow the instructions completely. For the method to become commercially viable, better safeguards against this type of behavior would have to be created.
The research will be presented at the Conference on Human Computation and Crowdsourcing in November.
Original Article from Techtimes