Lego Rovers head to NASA's International Space Apps Challenge

Apr 10, 2013
Video: Dr Louise Dennis developed the robots as a research tool before NASA took an interest.

A system that imitates navigation of a space rover, originally intended for use in North West schools, will become part of NASA's International Space Apps Challenge later this month.

Dr Louise Dennis, from the University of Liverpool's Department of Computer Science, designed a programme that allows users to configure commands for a Lego Rover robot based on whether the machine is on the moon, Mars or in the same room.

Artificial intelligence

Dr Dennis said: "We originally developed the robot as a research tool to investigate issues around artificial intelligence, but at the same time we were quite interested in producing some sort of activity that could be taken into schools.

"If you are controlling a planetary rover from Earth, you have to deal with the time delay. Our system allows the children to experiment with driving the rover when there is a time delay and see how that affects behaviour."

The Lego Rover was taken into schools in Manchester and proved popular, but when teachers asked if there was software available that would allow them to run their own model Lego Rovers in a similar way, Dr Dennis was unable to offer anything accessible.

She said: "We realised it was going to be very difficult for someone without a lot of expertise to install the programme, because it was built on top of a big base of research software."

Video: The two Lego Rover robots can be programmed to respond as if they were on the moon or Mars.

The system was originally developed as part of a series of EPSRC-funded projects by the University's Centre for Technology, before being identified as a potential STEM activity in schools.

The challenge of creating a more accessible version of the software was submitted to the Exeter Hackathon and subsequently picked up by NASA as one of the organisation's global challenges.

The NASA International Space Apps Challenge runs over 48 hours in 75 cities across the globe, from Abu Dhabi to Adelaide and New York City to Ho Chi Minh City. It aims to create open source solutions to a selection of problems through the combined effort of enthusiasts and experts around the world.

User interface

Dr Dennis, a Research Associate currently working on a major EPSRC-funded reconfigurable autonomy project alongside industrial partners such as BAE Systems and Network Rail, said: "It would be really nice to have something that, once it has been taken into schools, they can take and play with themselves. It would also be great to have some people get at the user interface design."

The NASA International Space Apps Challenge takes place over April 20 – 21, and includes 23 NASA challenges and 25 non-NASA challenges, of which Dr Dennis' Lego Robots Challenge is one.

She added: "It's a bit circular, because the programme code we started out with was based on code NASA produced, so to come full circle and take it back to NASA again is very exciting."




User comments (1)


Lurker2358
Apr 10, 2013
Isn't it just a matter of generalization and modularity?

You need the software and the hardware to be "plug and play", so that you could just add random components: arms, legs, wheels, sensors, etc. Have ports to operate those components' motors or convey data back to the processor.

How is this any different from any USB or other device? You'd have software to recognize each new device by port, and it would load that device's controller software. It could be on a flash drive hidden in the framework of each device to conserve total modularity. You would just need to add a handler to the A.I. which would understand what the add-on is and how to use it. The USER could configure variables, such as basic and advanced behavior options, error margin, error event handling, etc., but each component would come with its own default settings as well for people who can't understand the advanced options.

Additionally, you'd standardize a scripting language for skilled users to make custom components.
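
For illustration only, the plug-and-play registry the commenter sketches might look something like the following; every name here (Component, WheelHandler, plug_in and so on) is invented for the example and does not refer to any real Lego, USB or NASA API.

```python
# Sketch of the commenter's plug-and-play idea: each component announces
# its kind when plugged into a port, and the controller loads a matching
# handler, much like USB device enumeration. All names are illustrative.

class Component:
    def __init__(self, port, kind, defaults):
        self.port = port                # which physical port it is plugged into
        self.kind = kind                # e.g. "arm", "wheel", "sensor"
        self.settings = dict(defaults)  # shipped defaults; the user may override

HANDLERS = {}  # kind -> handler class: the "handler added to the A.I."

def register(kind):
    """Decorator that associates a handler class with a component kind."""
    def wrap(cls):
        HANDLERS[kind] = cls
        return cls
    return wrap

@register("wheel")
class WheelHandler:
    def __init__(self, component):
        self.component = component

    def act(self):
        speed = self.component.settings.get("speed", 1.0)
        print(f"port {self.component.port}: spin wheel at {speed}")

def plug_in(component):
    """Look up and instantiate the handler for a newly detected component."""
    handler_cls = HANDLERS.get(component.kind)
    if handler_cls is None:
        raise ValueError(f"no handler for component kind {component.kind!r}")
    return handler_cls(component)

# Example: a wheel on port 2 uses its shipped default, then a user override.
wheel = Component(port=2, kind="wheel", defaults={"speed": 1.0})
handler = plug_in(wheel)
wheel.settings["speed"] = 0.5  # user-configured variable
handler.act()
```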