Report provides NASA with direction for the next 10 years of space research

Apr 12, 2011

During the past 60 years, humans have built rockets, walked on the moon and explored the outer reaches of space with probes and telescopes. Throughout those missions, research has been conducted to learn more about life and the physical universe. Recently, a group of prominent researchers from across the country published a report through the National Academy of Sciences intended to guide NASA as it plans its next 10 years of research in space. Rob Duncan, the University of Missouri Vice Chancellor for Research, led the team that developed the report's blueprint for fundamental physics research in space over the coming decade.

"When Einstein developed his , no one at the time knew exactly how it could be applied. Yet that basic, opened many doors for us, including the development of technology that led to Global Positioning Systems (GPS)," Duncan said. "Many trillion-dollar technologies are based on these 'basic science' discoveries, so it is vital that we continue to explore these scientific questions that, we hope, will continue to lead to technological advancement. We must continue to develop knowledge out of our curiosity alone, since that often leads to great opportunities. If we stop exploring the unknown, then we will fail to discover things that may be of great importance to our economy in ways that may be difficult to predict."

Duncan's committee, which consisted of Nicholas Bigelow from the University of Rochester, Paul Chaikin from New York University, Ronald Larson from the University of Michigan, W. Carl Lineberger from the University of Colorado, and Ronald Walsworth from Harvard University, developed two overarching "quests" and four specific "thrusts" for fundamental physics research as part of the report, "Recapturing a Future for Space Exploration: Life and Physical Sciences Research for a New Era." The National Research Council will present the report to NASA.

The first quest is to discover and explore the physical laws governing matter, space and time. The second quest is to discover and understand how complex systems are organized. For example, ferns grow with a distinct symmetry and structure in their leaves that mirrors the overall shape of the whole plant. This is an example of "self-similarity" in nature, which could be explored in greater detail in space.
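Readers who want to play with self-similarity numerically can do so with the well-known Barnsley fern, sketched below (an illustrative toy, not taken from the report), in which the same four affine maps applied at every scale generate a fern whose leaflets repeat the shape of the whole plant:

```python
# Minimal Barnsley fern: four affine maps, chosen at random each step,
# generate a self-similar, fern-like point cloud.
import random

# (a, b, c, d, e, f, probability) for x' = a*x + b*y + e, y' = c*x + d*y + f
MAPS = [
    (0.0,   0.0,   0.0,  0.16, 0.0, 0.0,  0.01),  # stem
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.6,  0.85),  # successively smaller leaflets
    (0.2,  -0.26,  0.23, 0.22, 0.0, 1.6,  0.07),  # left leaflet
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # right leaflet
]

def barnsley_fern(n_points=50000):
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        a, b, c, d, e, f, _p = random.choices(MAPS, weights=[m[6] for m in MAPS])[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

pts = barnsley_fern()
print(f"Generated {len(pts)} points; each leaflet repeats the shape of the whole fern.")
```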

The four specific thrusts that the committee recommended NASA explore in the coming decade are:

  1. Soft Condensed Matter Physics and Complex Fluids – Some examples of this new class of materials, which are typically very strong but very light, already exist; understanding their organizing principles could dramatically advance materials science on earth.
  2. Precision Measurements of Fundamental Forces and Symmetries – This would help scientists determine what is not known about the composition and structure of the universe. For example, some cosmic rays have 100 billion times more energy than the highest energy particles ever produced in "atom smashers" on earth.
  3. Quantum Gases – Understanding quantum gases can lead to a much better understanding of how particles fundamentally interact with each other. Examples of these materials include superconductors and superfluids. Superconductors are materials that conduct electricity with no resistance while superfluids are those fluids (such as helium at very low temperatures) that have no resistance to fluid flow.
  4. Condensed Matter – As matter changes between states, such as solid, liquid and gas, it undergoes phase transitions that follow similar patterns throughout nature. By studying these changes in space, scientists can remove the complicating effects of gravity and better understand the physics affecting these changes.

"This is a fascinating time to be a scientist," Duncan said. "As NASA moves forward and develops a new space mission, we hope that this report will help guide the scientific portion of exploration. The possibilities of discovery are endless."

User comments : 25

ubavontuba
2.6 / 5 (8) Apr 12, 2011
Three words: Robotics, robotics, robotics.

More can be done to safely advance space exploration with less, and autonomous robotic development could be a boon to the economy.
Quantum_Conundrum
3.8 / 5 (4) Apr 12, 2011
Three words: Robotics, robotics, robotics.

More can be done to safely advance space exploration with less, and autonomous robotic development could be a boon to the economy.


Unfortunately, there aren't very many people in the world working on "real" robotics, and certainly not general purpose robotics.

Japan does a lot of cheap gimmicks which amount to toys.

Other than that, the U.S. military is about the only entity I'm aware of that is taking robotics seriously.

Other than that, there's some companies making experimental robots with useless features like "human-like" facial expressions, etc. Why not make a labor robot? I want to socialize with humans, not a droid.

In the past the excuse was the computers weren't fast enough, etc.

Now, we have dual core smart phones which have thousands of times more processor and memory than the most advanced NASA probes and rovers.

The only excuse is lack of creativity on the part of engineers and software developers.
TheRedComet
not rated yet Apr 12, 2011
Come on QC what about NASA's Robonaut2 they just sent to the International Space Station.
soulman
4.5 / 5 (2) Apr 12, 2011
To me that seems like a strange top x list for NASA to be doing. I would only put point 2) on such a list. All the other options seem to be outside NASA's traditional expertise and can be handled better within normal research labs where a lot of that type of work is already being performed.
soulman
4.4 / 5 (5) Apr 12, 2011
Unfortunately, there aren't very many people in the world working on "real" robotics, and certainly not general purpose robotics.

What do you call 'real' robotics? Before you can have a general purpose robot, you need to get a brain, or an artificial general intelligence (AGI), to drive the hardware, otherwise it's just narrow, task specific hardware.

In any case, you don't need a general purpose robot for space missions. You just need something that's extremely good at what it needs to do within the mission parameters.

Japan does a lot of cheap gimmicks which amount to toys.

Yes, that's because there is still no AGI to make the toys not a gimmick.

Other than that, the U.S. military is about the only entity I'm aware of that is taking robotics seriously.

Right, but in narrow, mission specific applications.

more...
soulman
5 / 5 (2) Apr 12, 2011
Other than that, there's some companies making experimental robots with useless features like "human-like" facial expressions, etc. Why not make a labor robot?

There already are a few million of those. How do you think a car gets assembled and painted? Or circuit boards populated? Or medical robotics?

Now, we have dual core smart phones which have thousands of times more processor and memory than the most advanced NASA probes and rovers.

Yes, we have faster hardware. Even so, NASA never uses bleeding edge stuff as it needs reliability and proven performance. What's more, the h/w needs to be toughened for radiation, temperatures and vibration, which rather limits their options.

more...
soulman
5 / 5 (2) Apr 12, 2011
The only excuse is lack of creativity on the part of engineers and software developers

Hardly. There are very real limitations, which I have already pointed out. And until there is a breakthrough in AGI, all so-called general robotic systems will continue to be disappointing, while focused, task-specific robots will continue to grow in capability.
Jotaf
not rated yet Apr 12, 2011
QC's view is a bit extreme but for the most part I agree, there's lots of toying around and not enough robotic research focused on specific problems.

Robonaut2 is a good example of a "toy" robot in the sense that it is radio-controlled; no outstanding autonomous behavior there. Same with the Japanese dancing robots.

There may be some research that I'm ignoring, but my impression is that there's a gap between pure AI research, which assumes a perfect symbolic world and is concerned with high-level problem solving, and low-level machine learning, where we have to deal with raw sensor data that is mostly garbage, to make some sense of the real world.

IMO we won't see any intelligent robots until we bridge this gap. And this puts the burden entirely on us machine learning scientists to deliver good quality data inference, or everyone else (ie robotics scientists) will just keep running around in circles, wondering why simple thresholds are not enough and why their assumptions don't hold...
Jotaf
not rated yet Apr 12, 2011
To give you an example, the popular Youtube videos for "Quadcopter aggressive maneuvers" show flying robots that rely on calibrated absolute-position sensors (IR cameras and markers). If they only used on-board sensors they'd crash every other flight or so.
Quantum_Conundrum
1 / 5 (1) Apr 12, 2011
There already are a few million of those. How do you think a car gets assembled and painted? Or circuit boards populated? Or medical robotics?


You are talking about automated movers, palletizers, and in some cases maybe an automated pneumatic wrench.

Those are all great, but not what I'm talking about.

BTW, I used to work in a highly automated manufacturing facility, so I know how that stuff goes.

What I'm talking about is a demi-human robot which can replace the workers on the assembly line, as well as the bottom-tier laborers in farms, retail, and convenience stores.

Many of these jobs do not require a "Mr. Data" or even "terminator" level intelligence, as they are mostly simple, repetitive labor tasks and customer inquiry tasks.
Quantum_Conundrum
1 / 5 (1) Apr 12, 2011
To give you an example, the popular Youtube videos for "Quadcopter aggressive maneuvers" show flying robots that rely on calibrated absolute-position sensors (IR cameras and markers). If they only used on-board sensors they'd crash every other flight or so.


Well, what you'd do is have more sensory input. Add additional cameras to allow perception imaging of the environment, much as humans have binocular vision. You could also have a LIDAR device on it. For robots, I'm actually a fan of orientable "trinocular" vision, since 3 points define a plane and 4 points (the viewed object) define a space. I like LIDAR as an additional sensory tool.

The robot (or the controlling computer) would map its environment and continually watch for changes. When something changes it updates its internal maps. This way it "learns" the optimal way to work in its environment through spatial analysis and by navigating that environment one time to get a full 3-d picture in multiple spectra.
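What Quantum_Conundrum is sketching here resembles occupancy-grid mapping. A minimal 2-D version, with a made-up sensor model and grid (illustrative only, not tied to any particular robot), might look like this:

```python
# Minimal 2-D occupancy grid: log-odds cells updated from range-sensor hits,
# so the map "learns" the environment and adapts when readings change.
import math

class OccupancyGrid:
    def __init__(self, width, height, p_hit=0.7, p_miss=0.4):
        self.log_odds = [[0.0] * width for _ in range(height)]
        self.l_hit = math.log(p_hit / (1 - p_hit))     # evidence an obstacle is there
        self.l_miss = math.log(p_miss / (1 - p_miss))  # evidence the cell is free

    def update(self, cell, occupied):
        r, c = cell
        self.log_odds[r][c] += self.l_hit if occupied else self.l_miss

    def probability(self, cell):
        r, c = cell
        return 1.0 / (1.0 + math.exp(-self.log_odds[r][c]))

grid = OccupancyGrid(10, 10)
for _ in range(3):                   # three consistent "obstacle" readings
    grid.update((5, 5), occupied=True)
grid.update((5, 5), occupied=False)  # a later reading says the obstacle moved
print(f"P(occupied) at (5,5): {grid.probability((5, 5)):.2f}")
```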
soulman
5 / 5 (1) Apr 12, 2011
my impression is that there's a gap between pure AI research, that assumes a perfect symbolic world and is concerned with high-level problem solving

The focus in AI has not been concerned with that approach for years, if not decades. The top-down, symbolic parsing route to AGI is very much old-school and no one in the field today would hold the view that AGI can be achieved by these methods.

Today's trends are to use the grass roots approach, by reverse engineering the brain's organization, connections and neuron function. There are several projects which attempt to simulate, down to the molecular level, the complete functioning of each type of neuron in the brain, down to ion channel operation.

Once a complete neural model is achieved (and it pretty much has been), it can be wired up with other neurons to see behavior patterns and how closely they match the real brain. Project Blue Brain and MCell (I think) are two such projects (though the former is more for medical research).
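The ion-channel-level simulations mentioned above are far richer than anything that fits here, but the basic simulate-a-neuron-then-wire-it-up idea can be illustrated with the classic leaky integrate-and-fire model, a minimal sketch not drawn from Blue Brain or MCell:

```python
# Leaky integrate-and-fire neuron: far simpler than ion-channel models,
# but shows the basic "simulate one neuron, then connect many" idea.
def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Return spike times (ms) for a list of input-current samples."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # membrane potential decays toward rest while integrating the input
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:              # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset                # then reset the membrane potential
    return spikes

spike_times = lif_neuron([1.5] * 200)  # 200 ms of constant drive
print(f"{len(spike_times)} spikes, first at {spike_times[0]:.0f} ms")
```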
soulman
5 / 5 (2) Apr 12, 2011
Elsewhere, it's been found that while the brain appears highly complex, the neocortex is built on a series of simpler, hierarchical networks which work similarly but tend to deal with inputs at increasingly higher levels of abstraction (for each successive level).

The first level deals with raw environmental data where some filtering is performed (say edge detection), data is culled and the concept of edges is passed to the next level which does its own filtering and data abstraction and moves that up the chain. Ultimately, you may have a single neuron bundle which is used to represent a dog, for example.

These types of systems need no a priori definitions or symbolic inference engines to drive them. They learn from scratch and are extremely proficient. See Hierarchical Temporal Memory.
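As a toy illustration of that bottom-up hierarchy (hand-coded here, whereas HTM-style systems learn their features), consider a two-level pipeline that turns raw samples into local "edge" features and those edges into coarser region-level abstractions:

```python
# Toy two-level hierarchy: level 1 extracts local "edge" features from raw
# samples, level 2 abstracts them into a coarser summary passed further up.
# (Illustrative only; real hierarchical models are learned, not hand-coded.)
def level1_edges(signal):
    # raw data in, simple local feature out: where does the signal jump?
    return [abs(b - a) for a, b in zip(signal, signal[1:])]

def level2_abstract(edges, window=4, threshold=0.5):
    # coarser abstraction: does each region contain a strong edge at all?
    return [max(edges[i:i + window]) > threshold
            for i in range(0, len(edges), window)]

raw = [0.0, 0.1, 0.0, 0.1, 1.0, 1.1, 1.0, 1.1, 0.1, 0.0, 0.1, 0.0, 0.0]
edges = level1_edges(raw)
regions = level2_abstract(edges)
print("edges:  ", [round(e, 1) for e in edges])
print("regions:", regions)   # True where a boundary was found
```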

913spiffy
not rated yet Apr 13, 2011
This is such a Wonderful Day to celebrate the past & look into the possible future. We seemed to have to hack & claw our way, so many years ago. Now, we almost appear to be FLYING into a much faster future. The Next 50 years are going to be Something!
OF COURSE, none of this means anything without our BELOVED SPACE CHICKEN CAMILLA SDO, GO CAM GO!!! We LOVE her on FB! 0=)
ubavontuba
1.8 / 5 (4) Apr 13, 2011
Using robotics in space does not require AI. We just need to extend the development and use of semi-autonomous programming, telerobotics, and gaming (yes, gaming).

In a practical way, the smartest AI today is gaming AI. Game characters uniquely use their environment and are adaptable to changes in the environment. If you want to colonize Mars, say, there's a game app for that. Why not simply program semi-autonomous machines to behave like game characters? They don't actually need to perceive the real environment. They only need to function as game characters on a game map with a few sensors to keep the central game program aware of their locations and movement on the map. Then, all you have to do is a detailed scan of the environment, program the topology, and identify and graphically insert movable items. Then, have at it!

Heck, even a toddler can direct Pooh to grow a garden in a game. Why should we overcomplicate a similar utilization of robotics in the real world?
farmerpat42
not rated yet Apr 13, 2011
"Game characters uniquely use their environment and are adaptable to changes in the environment."

The second statement I believe is false - any change in the environment for a game character needs to be associated with an appropriate set of data to deal with that environment. For example, specifically with movement, paths need to be established for the character to follow. The AI character in a game is moving along invisible 'rails' which are a lattice that contours with the game world. Now, it uses basic decision tools to figure out the best path - but that path also has other meta information about the environment (ie perhaps there's a small jump that can be performed to go to a higher ledge). In a perfectly flat 'parking lot' setting this isn't a big deal, but in rugged settings this could mean mission success or failure. Gaming has done a lot for computing and some high level AI research, but I don't think gaming's movement is practical for a deployed robot.
Jotaf
not rated yet Apr 13, 2011
Regarding symbolic / general AI research, I meant that it stopped there and there was no (effective) effort to move it towards real-world data.

I don't want to dismiss a whole field of research out-of-hand, of course, but I don't trust black boxes; my opinion is that you need to understand what is going on.

I know many researchers working on neural networks and for most applications there's already enough pattern recognition theory to do what they do, and better, except you actually know what is going on, instead of blindly exploring a huge space of possibilities. There's this illusion that it will magically become self-aware with little understanding of it.

Game characters deal with a symbolic world. Data from sensors, even fusing multiple cameras, is very messy. If what you're saying is right, 3D occupation maps and bayesian sensor fusion should work, but they have been around for decades and don't work outside a lab...
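The Bayesian sensor fusion mentioned here is, in its simplest one-dimensional Gaussian form, just an inverse-variance weighted average. A tiny sketch with hypothetical numbers:

```python
# Simplest form of Bayesian sensor fusion: combining two independent Gaussian
# measurements of the same quantity by inverse-variance weighting.
def fuse(mean_a, var_a, mean_b, var_b):
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    fused_mean = fused_var * (mean_a / var_a + mean_b / var_b)
    return fused_mean, fused_var

# e.g. a camera-based range estimate and a (noisier) sonar estimate
mean, var = fuse(2.10, 0.04, 2.35, 0.25)
print(f"fused estimate: {mean:.2f} m (variance {var:.3f})")
# The messiness pointed out above is that real sensors violate these tidy
# Gaussian, independent-noise assumptions.
```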
PonPeriPon
not rated yet Apr 13, 2011
Thank you soulman; very informative.

The whole is only as good as its parts; once we have sufficient materials and AI, the form will follow.
El_Nose
not rated yet Apr 13, 2011
@SOULMAN

nice exploration of AI, and accurate to boot. I disagree, however, that 1), 3) and 4) are outside of NASA's research capabilities or interests, because we are seeing most of those occurrences of superfluids in space - indeed it is suggested the cores of certain stars are superfluids - neutron stars, I think. Condensed matter is undoubtedly NASA studying phase changes without the effect of gravity.

I would like to point out that AI has changed, in the sense that anything that can be done by a computer faster or more efficiently than by a human is now AI. It has allowed us to better tackle problems and algorithm development as we have simplified our approach to many problems -- we often thought it took more thought than it really did to solve certain problems... and we have discovered that maybe we don't need to be as accurate or perfect in problem solving to achieve viable and practical answers.
soulman
5 / 5 (1) Apr 13, 2011
don't trust black boxes; my opinion is that you need to understand what is going on.

You must be an engineer :) Do you trust yourself, your family or your friends? You certainly don't understand how their (or your) 'black boxes' function at any deep level, and yet I bet you do trust them/you. Why should AGIs be any different? Ultimately, you judge them by their actions, not by which electron travels down which circuit. That's why engineers could never build an AGI, as evidenced by the previous failed top-down approach.

except you actually know what is going on, instead of blindly exploring a huge space of possibilities.

What's wrong with that? There are programs which do that sort of thing and which have (re)discovered basic laws of physics and even some previously unknown mechanisms. It's a great way to explore the solution space for novel insights. It's really cool stuff!

more...
soulman
5 / 5 (1) Apr 13, 2011
There's this illusion that it will magically become self-aware with little understanding of it.

I pretty much expect that that is how it will happen, but no magic required, just emergent properties.

If you point to a single neuron and ask, is it self-aware? No. Two neurons, ten? No. If you keep going at some point the answer will be yes. But by then, you'll likely be drowning in complexity, so you'll be none the wiser. It happens somewhere along the continuum of complexity and organization.

Game characters deal with a symbolic world. Data from sensors, even fusing multiple cameras, is very messy.

It's precisely because the real world is messy, that we need to deal with it from the bottom up. Gaming, as such, isn't that important IMO. Sure, game THEORY has been a traditional tool in AI from day one, but it can only go so far.

But that's game theory, not gaming with 3D characters in limited environments, as most PC gamers understand it, where limited AI is applied.
soulman
5 / 5 (2) Apr 13, 2011
I disagree, however, that 1), 3) and 4) are outside of NASA's research capabilities or interests, because we are seeing most of those occurrences of superfluids in space - indeed it is suggested the cores of certain stars are superfluids - neutron stars, I think. Condensed matter is undoubtedly NASA studying phase changes without the effect of gravity.

I don't wholly disagree, which is why I said that they were not within NASA's traditional expertise. I realize that sometimes experiments need to be performed in zero-G in those areas, and NASA is perfectly positioned to design, build, launch and oversee the operation of the experiment. But the analysis is better done in the external dedicated labs.

I think NASA is better placed to focus on engineering rather than on peripheral research projects, which just eats into its limited budget.
Jotaf
not rated yet Apr 16, 2011
Thanks for your reply soulman! I don't claim we shouldn't "trust" a general AI system simply because it is a black box. That is more of a moral debate that (hopefully) will only become relevant years from now.

What I meant was from the perspective of choosing a research direction - I think we need to focus more on knowing our proposed systems inside out, instead of attributing to them emergent behavior that may never happen. Why, since we're building bigger and bigger brains, you might say? Because they're not "brains". They're masses of simplistic models, initialized with mostly random conditions. The likelihood that they will "work" is minuscule.

If we keep focusing on "bigger, faster, but dumb" brains, we'll probably leave out some crucial element - be it some details of the transfer functions, necessary simulation of some esoteric neuro-transmitters or initial connections or configurations necessary for intelligence. Don't let magical thinking get in the way of the scientific method!
Jotaf
not rated yet Apr 16, 2011
Oh, and I'm not saying these guys aren't being scientific, just that maybe they got a little carried away. The study of the biological brain is also pretty advanced.

But we cannot forget that all of those are tangential to what everyone really wants to get at: proposing how to engineer, not a bunch of perceptrons, but a functioning synthetic brain. And IMO many people are dodging this issue a little bit.
WestCoast101
5 / 5 (1) Apr 17, 2011
A very long time ago, I was one of those kids who responded to the Race to the Moon and Mars [even the non-existent missile gap]. Very exciting stuff that spurred legions of us to study rather than party. I have no doubt that the four NASA goals shown here will lead to fundamental knowledge and maybe a few 'widgets' just like the Space Race did. However, none of this is particularly exciting for those of us that want to GO to other solar systems ... Couldn't the goals be just a little more STAR TREKKIE?
