Software development: speeding from sketchpad to smooth code

July 30, 2009

Creating error-free software remains time-consuming and labour-intensive. A major European research effort has developed a system that speeds software development from the drawing board to high-quality, platform-independent code.

According to Piotr Habela, technical coordinator of the VIDE (for VIsualize all moDel drivEn programming) project, software developers have many good ideas about how to visualise, develop, debug and modify software, plus standards to guide them. The problem is that the design and development process has always been fragmented.

He explains that methods for visualising or flowcharting how a program should work do not lead directly to computer code.

Software written in one programming language may be difficult to translate into another. No matter how carefully programmers work, complex software almost always includes errors that are difficult to diagnose and fix. Because of the lack of precise links between a program’s features and the software that implements them, updating or modifying a program often turns out to be time-consuming and costly.

“What we attempted that was quite distinct,” says Habela, “was to make the development of executable software a single process, a single toolchain, rather than a sequence of separate activities.”

It took two-and-a-half years of intensive effort by VIDE's ten academic and industrial research partners, funded by the European Union, but the result is a software design and development toolkit that promises to make creating well-functioning, easily modified software - for example for small businesses - significantly smoother, faster, and less expensive.

Model-driven architecture

A key part of VIDE’s approach was to build on the idea of Model Driven Architecture, a programming methodology developed by an international consortium, the Object Management Group.

The idea is that each stage of software development requires its own formal model. The VIDE team realised that by creating and linking those models in a rigorous way, they could automate many of the steps of software development.

A software developer might start by working with a domain expert - for example a business owner - to determine what a new program needs to do. Those inputs, outputs and procedures would be formalised in what is called a computation independent model (CIM), a model that does not specify what kinds of computation might be used to carry it out - it lays out what the program will do rather than how it will do it.
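As a rough illustration of that "what, not how" idea - using invented names, not VIDE's actual notation - a CIM can be thought of as a list of statements about actors, their actions, and the expected outcomes, with no hint of how any of it is computed:

```java
// Hypothetical sketch of a computation-independent model (CIM): it records
// who does what, with which inputs, and to what outcome - nothing about
// algorithms or data structures. All names here are invented for illustration.
import java.util.List;

public class CimSketch {
    // One requirement: an actor performs an action on inputs, yielding an outcome.
    record Requirement(String actor, String action, List<String> inputs, String outcome) {}

    public static void main(String[] args) {
        // A small-business ordering process, stated as plain facts.
        List<Requirement> cim = List.of(
            new Requirement("Customer", "places order",
                    List.of("item", "quantity"), "order recorded"),
            new Requirement("Owner", "issues invoice",
                    List.of("order"), "invoice sent"));
        for (Requirement r : cim) {
            System.out.println(r.actor() + " " + r.action() + " -> " + r.outcome());
        }
    }
}
```

Because such statements are structured rather than free prose, a toolchain can check them and carry them forward into the next model, which is the step VIDE automates.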

“Models are usually considered just documents,” says Habela. “Our goal was to make the models serve as production tools.”

In the case of VIDE, much of that modeling is visual, in the form of flowcharts and other diagrams that are intuitive enough for the domain expert to understand, but which are sufficiently formalised to serve as the inputs to the next stage of the software development process.

To carry out these first modeling steps, the researchers created a domain analysis tool and a programming language called VCLL, for VIDE CIM Level Language.

From CIM to PIM to program

Once developers have produced a formal CIM of the program they want to implement, it is time to move a step closer to a functioning program by translating it into a platform-independent model, or PIM.

For the VIDE team, a PIM is a model that specifies precisely what a program needs to do, but at an abstract level that does not depend on any particular programming language.

The researchers developed several software tools to produce a usable, error-free PIM. These include an executable modelling language and environment, a defect-detection tool, and finally a program that translates their final model into an executable Java program.
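The final translation step can be pictured with a toy example - not VIDE's generator, just a minimal sketch with invented names - in which a model class with typed attributes is mechanically turned into Java source text:

```java
// Hypothetical sketch: a tiny platform-independent model (PIM) element and a
// naive generator that emits Java source from it. The real VIDE toolchain is
// far richer; this only shows the model-to-code principle.
import java.util.List;

public class PimToJava {
    // A minimal PIM element: a named class with typed attributes.
    record PimAttribute(String name, String type) {}
    record PimClass(String name, List<PimAttribute> attributes) {}

    // Translate one model class into Java source text.
    static String generate(PimClass model) {
        StringBuilder src = new StringBuilder();
        src.append("public class ").append(model.name()).append(" {\n");
        for (PimAttribute a : model.attributes()) {
            src.append("    private ").append(a.type())
               .append(" ").append(a.name()).append(";\n");
        }
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        PimClass invoice = new PimClass("Invoice",
                List.of(new PimAttribute("number", "String"),
                        new PimAttribute("total", "double")));
        System.out.print(generate(invoice));
    }
}
```

The point of working this way is that the model, not the emitted text, stays the single source of truth: regenerate after a model change and the code follows.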

Luckily, the researchers did not have to build their system from the ground up. They were able to rely to a large extent on a pre-existing modeling language called UML, for Unified Modeling Language. UML provides a systematic way to visualise and describe a software system.

“We now have a kind of prototyping capability built into the development process,” says Habela. “You can design a model, specify its behavioural details, run it with sample data to see how it behaves, and then check with the domain expert to see if it is in fact the behaviour they expected.”
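Executing a model before any code exists can be sketched very simply - again with invented names, under the assumption that a behavioural model is an ordered list of named steps applied to a running state:

```java
// Hedged sketch of "running" a behavioural model with sample data before
// code generation: each model step is a function from state to state,
// applied in order. Names and structure are illustrative only.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

public class ModelRunner {
    // A behavioural model: named steps applied in insertion order.
    private final Map<String, UnaryOperator<Map<String, Double>>> steps =
            new LinkedHashMap<>();

    ModelRunner addStep(String name, UnaryOperator<Map<String, Double>> op) {
        steps.put(name, op);
        return this;
    }

    // Run the model against sample data and return the final state.
    Map<String, Double> run(Map<String, Double> state) {
        for (var e : steps.entrySet()) {
            state = e.getValue().apply(state);
        }
        return state;
    }

    public static void main(String[] args) {
        // Sample model: compute an order total, then add 20% tax.
        ModelRunner model = new ModelRunner()
                .addStep("computeTotal", s -> {
                    s.put("total", s.get("price") * s.get("quantity"));
                    return s;
                })
                .addStep("addTax", s -> {
                    s.put("total", s.get("total") * 1.20);
                    return s;
                });
        Map<String, Double> result = model.run(
                new java.util.HashMap<>(Map.of("price", 10.0, "quantity", 3.0)));
        System.out.println(result.get("total")); // prints 36.0
    }
}
```

Feeding such a model sample data and inspecting the result is exactly the kind of check a domain expert can confirm or reject before any implementation work begins.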

Several of the consortium members are implementing the VIDE toolkit in specific areas, for example web services, database management, and a variety of business processes.

Habela cautions that reaching VIDE’s goal of smoothly automating the entire software design and development process requires more work. Because of the broad scope of the project and the fundamental changes they are making, they are not yet ready to deploy the complete system.

However, he says, they have gone a long way towards clearing up “the muddy path from requirements to design.”

Provided by ICT Results


Comments

Jul 31, 2009
At the NCC over 20 years ago we developed the 'Intermediate Knowledge Representation'. This represented the domain, task, inference and problem-solving strategies in a software-independent way.

As a test, two teams implemented the same software system from an IKR in different languages (it was a KBS for network fault diagnosis).

So the above is not new. UML is formal and by that very nature implies constraints and hence distortions that may give rise to problems and bugs.

Native explicit language is what is needed.

Ged Haydon
