'Makers' 3-D print shapes created using new design tool, bare hands

May 16, 2013 by Emil Venere
Shapes like these can be created using a new design tool that interprets hand gestures, enabling designers and artists to create and modify 3-D shapes using just their hands as a "natural user interface" instead of keyboard and mouse. The tool was created by Purdue researchers. Credit: Purdue University image / C Design Lab

(Phys.org) —A new design tool interprets hand gestures, enabling designers and artists to create and modify three-dimensional shapes using only their hands as a "natural user interface" instead of keyboard and mouse.

The tool, called Shape-It-Up, uses specialized software and a depth-sensing camera to observe and interpret hand movements and gestures. The user creates shapes in a computer by interacting with a virtual workspace while the shape is displayed on a large-screen monitor.

"You create and modify shapes using hand gestures alone, no mouse or keyboard," said Karthik Ramani, Purdue University's Donald W. Feddersen Professor of Mechanical Engineering. "By bringing hands into the with a single depth camera we are able to manipulate the 3-D artifacts as if they actually exist."

Researchers call the underlying technique shape–gesture–context interplay. The tool could have applications in areas including games, architecture and art, and it also serves the emerging "creative maker" community, Ramani said. The team will demonstrate the technology at the Maker Faire on Saturday and Sunday (May 18 and 19) at the San Mateo (Calif.) County Event Center.

"Our goal is to make the designer an integral part of the shape-modeling process during early , which isn't possible using current CAD tools," Ramani said. "The conventional tools have non-intuitive and cognitively onerous processes requiring extensive training. We conclusively demonstrate the modeling of a wide variety of asymmetric 3-D shapes within a few seconds. One can bend and deform them in various ways to explore new shapes by natural interactions. The effect is immediate."

The creations can then be produced using a 3-D printer.

Research findings appeared in the February issue of the journal Computer-Aided Design. The paper was co-authored by Ramani, graduate students Vinayak and Sundar Murugappan, and postdoctoral researcher HaiRong Liu. It is available at https://engineering.purdue.edu/cdesign/wp/?p=1571


The research, funded by the National Science Foundation (NSF), addresses the limitations of conventional computer-aided design tools for creating geometric shapes. Work to develop a model for transforming the research into market innovations was funded by the NSF's Innovation Corps, or I-Corps, program and recently by NSF's Accelerating Innovation Research (AIR).

The system harnesses hand gestures as a natural user interface to create and modify shapes.

"We are going from Windows icons, menus and pointers - or WIMPs - to a post-WIMP, natural , or NUI," Ramani said.

The tool is an advance over a previous version that was limited to creating "rotationally symmetric" objects - shapes, such as a vase, that look the same from every angle around a central axis.

The shapes are created using a 3-D printer. Credit: Mark Simons, Purdue University

"This is important because many of the things designers need to create are not symmetrical," Ramani said.

The system uses the Microsoft Kinect camera, which senses depth in three-dimensional space. The same camera is used in consumer video games and can track a person's body without handheld controllers.

Researchers created algorithms that recognize a hand gesture, determine that the hand is interacting with part of the shape, and then modify the shape in response to that interaction.
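The article does not include the algorithms themselves, but the shape-gesture-context idea can be sketched in a few lines. In this minimal Python sketch, every name, posture label and threshold is an illustrative assumption rather than the researchers' code: a classified hand posture only becomes a modeling command once it is paired with the shape component nearest the hand.

```python
import math
from dataclasses import dataclass

@dataclass
class Hand:
    position: tuple  # (x, y, z) palm position reported by the depth camera
    posture: str     # e.g. "pinch", "grab", "open" from a posture classifier

def nearest_component(hand, components):
    """Find the shape component closest to the hand: the spatial context."""
    return min(components, key=lambda c: math.dist(hand.position, c["center"]))

def interpret(hand, components, grab_radius=0.15):
    """Deduce a modeling operation from posture plus context (toy rules)."""
    target = nearest_component(hand, components)
    if math.dist(hand.position, target["center"]) > grab_radius:
        return ("ignore", None)  # the hand is gesturing in empty space
    if hand.posture == "pinch" and target["kind"] == "cross_section":
        return ("scale_section", target)  # pinching a ring resizes it
    if hand.posture == "grab" and target["kind"] == "skeleton":
        return ("bend_skeleton", target)  # grabbing the spine bends the shape
    return ("ignore", None)

# Toy usage: two components of a generalized cylinder in the workspace.
components = [
    {"kind": "cross_section", "center": (0.0, 0.0, 0.5)},
    {"kind": "skeleton", "center": (0.0, 0.5, 0.5)},
]
print(interpret(Hand((0.0, 0.05, 0.55), "pinch"), components))
```

The point of the dispatch is that the same pinch means different things depending on what the hand is near; that dependence on the shape is what "context" adds to the gesture.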

The Purdue C Design Lab in the School of Mechanical Engineering is collaborating with a startup company, ZeroUI.

"ZeroUI and Purdue are pioneering a whole new co-innovation model for university-industry collaboration where we are applying Steve Blank's and the NSF I-Corps customer-development process to academic research to ask the right questions and solve the right problems and helping to create high market impact," said Raja Jasti, ZeroUI's co-founder and CEO. "This technology is amazingly versatile with applications ranging from art, design and gaming to education."


More information: Shape-It-Up: Hand gesture based creative expression of 3D shapes using intelligent generalized cylinders

Abstract
We present a novel interaction system, "Shape-It-Up", for creative expression of 3-D shapes through the naturalistic integration of human hand gestures with a modeling scheme dubbed intelligent generalized cylinders (IGC). To achieve this naturalistic integration, we propose a novel paradigm of shape–gesture–context interplay (SGCI) wherein the interpretation of gestures in the spatial context of a 3-D shape directly deduces the designer's intent and the subsequent modeling operations. Our key contributions towards SGCI are threefold. First, we introduce a novel representation (IGC) of generalized cylinders as a function of the spatial hand gestures (postures and motion) during the creation process. This representation allows for fast creation of shapes while retaining their aesthetic features like symmetry and smoothness. Secondly, we define the spatial contexts of IGCs as proximity functions of their representational components, namely cross-sections and the skeleton with respect to the hands. Finally, we define a natural association of modification and manipulation of the IGCs by combining the hand gestures with the spatial context. Using SGCI, we implement intuitive hand-driven shape modifications through skeletal bending, sectional deformation and sectional scaling schemes. The implemented prototype involves human skeletal tracking and hand posture classification using the depth data provided by a low-cost depth sensing camera (Kinect™). With Shape-It-Up, our goal is to make the designer an integral part of the shape modeling process during early design, in contrast to the case for current CAD tools which segregate 3-D sweep geometries into procedural 2-D inputs in a non-intuitive and onerous process requiring extensive training. We conclusively demonstrate the modeling of a wide variety of 3-D shapes within a few seconds.
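The abstract's main ingredients - a skeleton curve carrying cross-sections, proximity functions to the hands, and bending and scaling operations - can be sketched as a toy data structure. This is a loose reconstruction under assumptions (the Gaussian falloff weighting and all names here are invented for illustration), not the published IGC implementation.

```python
import math

class GeneralizedCylinder:
    """A skeleton curve with one circular cross-section per sample point."""

    def __init__(self, skeleton, radii):
        self.skeleton = skeleton  # list of (x, y, z) skeleton samples
        self.radii = radii        # one cross-section radius per sample

    def nearest_section(self, hand_pos):
        """Proximity function: index of the cross-section closest to the hand."""
        dists = [math.dist(p, hand_pos) for p in self.skeleton]
        return dists.index(min(dists))

    def scale_section(self, hand_pos, factor, falloff=0.2):
        """Sectional scaling: resize radii near the hand, with smooth falloff."""
        anchor = self.skeleton[self.nearest_section(hand_pos)]
        for j, p in enumerate(self.skeleton):
            w = math.exp(-math.dist(p, anchor) ** 2 / falloff ** 2)
            self.radii[j] *= 1.0 + (factor - 1.0) * w

    def bend(self, hand_pos, offset, falloff=0.3):
        """Skeletal bending: drag nearby skeleton points along the hand's motion."""
        anchor = self.skeleton[self.nearest_section(hand_pos)]
        for j, p in enumerate(self.skeleton):
            w = math.exp(-math.dist(p, anchor) ** 2 / falloff ** 2)
            self.skeleton[j] = tuple(c + o * w for c, o in zip(p, offset))

# A straight tube; pinch its middle wider, then drag its tip sideways.
gc = GeneralizedCylinder([(0.0, 0.0, z / 10) for z in range(11)], [0.1] * 11)
gc.scale_section(hand_pos=(0.0, 0.0, 0.5), factor=1.5)
gc.bend(hand_pos=(0.0, 0.0, 1.0), offset=(0.2, 0.0, 0.0))
print(round(gc.radii[5], 3), tuple(round(c, 3) for c in gc.skeleton[10]))
```

The falloff weight is what keeps edits local and smooth: points near the grabbed section move fully, distant ones barely at all, which matches the paper's emphasis on retaining aesthetic features like smoothness.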

User comments

DrEvilBetty
1 / 5 (2) May 16, 2013
I still can't understand how people think that waving your arms about and trying to get a camera and software to understand how many fingers you have up is easier or more efficient than using a mouse, tablet or even a keyboard for interacting with computers. I just picture a room full of people jumping around in their cubicles all day and waving their arms like they have semaphore flags.
antialias_physorg
3.7 / 5 (3) May 16, 2013
> I still can't understand how people think that waving your arms about and trying to get a camera and software to understand how many fingers you have up is easier or more efficient than using a mouse, tablet or even a keyboard for interacting with computers. I just picture a room full of people jumping around in their cubicles all day

Because it opens up interaction scenarios where you aren't in a cubicle?

- Meetings with shared views (where you don't have to pass around the keyboard and mouse)
- Anything that has to be sterile (or that you don't want to touch..like interaction surfaces in shops)
-...

And last but not least: getting rid of all that stuff that clutters up your desk (and possibly getting rid of the desk itself).

Computing in the future will not look like it does now: Dedicated places with dedicated hardware where the user has to adapt to the technology rather than the other way around.
SolidRecovery
1 / 5 (3) May 16, 2013
> I still can't understand how people think that waving your arms about and trying to get a camera and software to understand how many fingers you have up is easier or more efficient than using a mouse, tablet or even a keyboard for interacting with computers. I just picture a room full of people jumping around in their cubicles all day and waving their arms like they have semaphore flags.

First step of many things to come. People talk with their bodies, and as Antialias said, computers of the future will adapt to you. They will read you and analyse what you want done. Machines that know what you want designed or programmed will be far more efficient than if you had to go in and design or program everything yourself.