The latest issue of Philosophical Transactions A looks at the emerging field of heterotic computing. The issue was guest edited by Viv Kendon at Durham University, and Angelika Sebald and Susan Stepney at the University of York. Here, they tell us a bit about the subject area and describe the challenges that remain to be tackled.


What is heterotic computing?
We are interested in unconventional computing, which covers many areas, including computing with novel substrates in novel ways. A few years ago we noticed a pattern: a range of novel computers have at least two distinct layers, with a novel substrate composed with a more conventional control layer; they are essentially heterogeneous systems. We also noticed that while the individual layers have their own computational power, the power of the combination is often significantly greater: working together, the layers can do more, work faster, or compute more naturally than they can working alone. We searched for a term to describe such systems, and came across the word “heterotic”. This comes from the Greek heterosis, used in genetics to mean ‘hybrid vigour’. This seemed ideal, so in 2011 we submitted a paper to the International Conference on Unconventional Computation describing the concept.


Tell us a bit about what your theme issue covers.
In 2013 we had the opportunity to organise a Theo Murphy International Scientific Meeting on Heterotic Computing. Scientists from around the world, and across the disciplines, came together to discuss an amazing variety of computational substrates. This special issue has grown out of that meeting. It includes papers covering a wide range of computational substrates: bacterial and slime mould computers, microscopic self-assembling tiles of sticky DNA, neurons bonded to silicon, robot chemists synthesising novel compounds in droplets that move and divide spontaneously, and complex chemical reactions in droplets controlled by microfluidics. The issue also covers theory: novel computers require novel software engineering techniques to allow us to specify, design, program and test these devices. Many of the underlying assumptions of conventional computation do not apply to these devices: there are papers on novel testing regimes, how to cope with devices that compute over a broad spatial extent while being restricted to local communication, how to model devices that self-assemble and grow as they compute, and even what it means for a collection of interacting physical systems to compute.


Tell us about the image on the front cover.
It shows a slime mould computing. This particular slime mould, Physarum polycephalum, is a large but single-celled organism. Computational input can be supplied in the form of oat flakes, and the slime mould grows between the food sources in a way that approximately minimises path lengths. By setting up the system correctly, the slime mould can be induced to perform simple logical operations, and more complicated path-length minimisation operations. The image was supplied by Professor Andrew Adamatzky, who researches many novel applications for slime moulds.
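As a toy illustration of the kind of problem the slime mould solves physically, here is a minimal sketch (the arena graph, food-source names and distances are invented for the example) that computes the shortest route between food sources using Dijkstra's algorithm — roughly the answer the organism approximates by growing between oat flakes:

```python
import heapq

def shortest_path_length(graph, start, goal):
    """Dijkstra's algorithm over a weighted graph given as
    {node: [(neighbour, distance), ...]}."""
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, weight in graph[node]:
            nd = d + weight
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return float("inf")  # goal unreachable

# Hypothetical arena: oat flakes A-D with pairwise distances in cm.
arena = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("A", 2.0), ("C", 1.0), ("D", 4.0)],
    "C": [("A", 5.0), ("B", 1.0), ("D", 2.0)],
    "D": [("B", 4.0), ("C", 2.0)],
}
print(shortest_path_length(arena, "A", "D"))  # route A-B-C-D, length 5.0
```

The interesting point is that the slime mould arrives at a comparable answer without any such explicit algorithm: the minimisation is performed by its growth dynamics.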


Are there any papers in the issue that stand out for you?
All the papers are exciting in their own ways, and depending on your particular interests, different ones will leap out. One we ourselves find particularly significant is by Nehaniv, Rhodes et al., which applies some deep mathematics to understanding and analysing biological and chemical processes computationally. The particular point of interest for us is the way it provides a natural basis for modelling computers that grow: conventional models tend to assume the computational substrate is given at the start, whereas this model allows the substrate to grow in a way that depends on the computation being performed.


What are the big challenges still remaining in the field?
We see three main challenges.

The first is connecting the substrates. An issue with these devices is that information is represented in a way natural to the particular substrate, and when it needs to be communicated to a different substrate, there may be some complex transduction required, which itself might require significant computation. Also, there may be timescale mismatches, with substrates operating at very different speeds. This issue is already recognised in, for example, optical computing, where a lot of effort is made to keep everything optical in order to reduce the transduction overheads. However, this can’t be the solution for heterotic devices where we want to exploit the power of the combination.
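The two costs described above can be made concrete with a toy sketch (all the encodings, ranges and sample rates here are invented for illustration): one hypothetical substrate represents a value as a normalised light intensity on a fast clock, another as a chemical concentration on a slow clock, and a transduction layer must both rescale the representation and bridge the timescale mismatch:

```python
# Toy transduction layer between two hypothetical substrates that encode
# the same logical value differently.
INTENSITY_MAX = 1.0       # substrate A: normalised light intensity in [0, 1]
CONCENTRATION_MAX = 10.0  # substrate B: chemical concentration in [0, 10] mM

def optical_to_chemical(intensity):
    """Rescale an optical reading so substrate B can consume it.
    Even this trivial conversion is computation the device must pay for."""
    if not 0.0 <= intensity <= INTENSITY_MAX:
        raise ValueError("reading out of range")
    return intensity / INTENSITY_MAX * CONCENTRATION_MAX

def downsample(samples, factor):
    """Bridge the timescale mismatch: substrate A emits `factor` samples
    for every step of the slower substrate B, so average each window."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples), factor)]

fast_readings = [0.2, 0.4, 0.4, 0.6, 0.8, 0.8]  # substrate A, fast clock
slow_inputs = [optical_to_chemical(x) for x in downsample(fast_readings, 3)]
print(slow_inputs)  # two averaged, rescaled values for substrate B
```

In a real heterotic device the transduction can be far more involved than a rescaling and an average, which is why its cost has to be weighed against the extra power the combination of substrates provides.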

Secondly, the devices will need to be scaled up. This is the move from the research laboratory bench to engineered production capability. And in order to make this happen, we will need a “killer app” for the various devices.

Thirdly, we need better software engineering tools, to move programming the devices from an art to at least a craft! Conventional tools are honed to our conventional devices and conventional techniques: we need new tools targeted at the unconventional substrates and their unconventional applications.

There is a lot of active and fun research in this area, and we expect exciting new developments—new substrates, new applications, new products—which could potentially change the way we think about computing and computers. Watch this space!
