Engines of Creation by K. Eric Drexler

This has to be one of my favourite books ever. I’m so embarrassed that I hadn’t read it before now.

The book concentrates on how nanosystems will be used to transform our lives, our bodies, and the environment. There is also a discussion of how we will control nanosystems so that they do not replicate uncontrollably. The book is a nice introduction to the topic, not too heavy, and written in a powerfully optimistic style. I felt that the chapters on government policy were the weakest point, although to be honest that’s probably just my personal taste. They were well written, just not quite as gripping as the discussion of the technology itself.

The entire book was great reading, although I felt Chapter 14 in particular had something important to say; a lesson to be learnt. The focus of this chapter, “The Network of Knowledge”, was the then-future technology of hypertext: linked media and freedom-of-information/knowledge-aggregation techniques in general. It really stood out for me, because it’s the only chapter in the book whose technology has today not only been realised, but has exceeded Drexler’s foresight tenfold. Reading this chapter was so beautifully quaint, until a thought struck me…

…It could have been any of the chapters that had been fully realised.

Presumably all the chapters were written with a similar level of foresight; it just happened that the right set of factors converged to make Chapter 14 the first to be realised. I’m sure in due course more will follow, but this chapter sat somewhat uncomfortably, almost laughably, in stark contrast to the rest of the (seemingly visionary) book. It served to highlight how differently we behave towards incredible concepts that have already been realised and those that still harbour engineering problems to be solved. Some would refer to the latter, rather derogatorily, as pure science fiction.

I really enjoyed the last chapter too. Drexler’s writing style seriously moved me. I hope that people find this book a call to arms, and that those who read it when it was first published (1986) will take the time to re-read it and realise that we are close enough to these dreams to really do something about them. In 10 years’ time I want to feel that quaint warmth when I read ALL the chapters, not just the one about hypertext.

I have the Nanosystems textbook waiting on my bookshelf for some more in-depth learning.


8 thoughts on “Engines of Creation by K. Eric Drexler”

  1. Geordie says:

There was a series of debates between Richard Smalley and Drexler that are very interesting to read. There is a tie-in to QC. Smalley says Drexler is full of shit because you can’t design nanomachines, for a variety of reasons that all come down in the end to our inability to simulate quantum systems. Basically his argument is that simulation tools are necessary for engineering and you can’t simulate quantum systems. Smalley wins the argument in a world without QCs but loses it if QCs can be built. Drexler’s counterargument was that we already have (evolved) nanomachines, so it’s possible. I don’t think, though, that an existence proof counters Smalley’s argument. Lots of things exist that we can’t build.

  2. quantummoxie says:

    What the result of this debate clearly hinges upon, based on what Geordie said, is the definition of ‘nano’ in this instance and precisely where the quantum/classical boundary is located (though, based on recent results that suggest the possibility of macroscopic quantum correlations, I’m not sure it is necessarily a macro vs. micro thing).

  3. dark_daedalus says:

    “Lots of things exist that we can’t build”…

    I would agree, but this is not because they are impossible i.e. prohibited by Physics, but due to the economic costs vs. rewards of building tools which can fabricate at those scales.

A case in point (though still bulk technology) is the growth of MEMS, which became economic by leveraging the silicon chip fabrication technologies that already existed, reducing the NRE (non-recurring engineering) costs.
It is now sustainable because it has produced viable products, such as the Analog Devices accelerometers used in military systems and later the Segway, Wiimote etc., which provide an ongoing revenue stream and show that this type of technology is possible and useful.

The Apollo Program is a similar case: not economically viable, but funded by government for non-monetary rewards.

  4. dark_daedalus says:

    “Basically his argument is that simulation tools are necessary for engineering and you can’t simulate quantum systems”.

    Yeess.. But Engineering is not a pure science, like Physics.

Engineering simulation models are heuristic simplifications of physical reality, and most incorporate numeric models/corrections based on experimental data collected from prior work or prototypes.

    If you can’t simulate part of the system (or it is too expensive to do so), then you build a number of prototype variants of the sub-assembly, test and characterise them and select the best.

With a mechanosynthesis-based emulation, i.e. a physical simulator (not a QC, but a nanoscale test bench), component variants could be automatically explored and characterised fairly quickly.

The best variants become part of a library of design components, with datasheets, known specifications etc., to be used in future designs.

    In this view, Engineering is a system of directed evolution for technology with iterative loops, rather than a pure Design-Simulate-Build waterfall flow.

  5. physicsandcake says:

    I think these are all very good points. Ian’s point about the classical-quantum crossover is extremely important.

Below the classical-quantum transition you can’t currently simulate anything without solving the full Schrödinger equation or using quantum Monte Carlo methods. However, this is only tractable for small systems (in the case of the Schrödinger equation, even with perturbation theory, extremely small). With superconducting flux qubits we get a bit of an easy ride, due to the ‘amplification’ of the wavefunction to macroscopic levels in the condensate. However it’s not that easy to even get those working 😉
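The “only tractable for small systems” point can be made concrete with a back-of-envelope calculation (my numbers, purely illustrative, not from the book or this thread): brute-force state-vector simulation stores 2^n complex amplitudes for n two-level systems, so the memory cost grows exponentially.

```python
# Back-of-envelope sketch (illustrative): memory needed to hold the full
# state vector of n two-level systems, at 16 bytes per complex amplitude
# (two 8-byte floats: real and imaginary parts).
BYTES_PER_AMPLITUDE = 16

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes to store all 2**n complex amplitudes."""
    return BYTES_PER_AMPLITUDE * 2 ** n_qubits

for n in (10, 30, 50):
    print(f"{n} qubits: {state_vector_bytes(n):,} bytes")
# 30 qubits already needs ~16 GiB; 50 qubits needs ~16 PiB.
```

This is exactly why approximate methods like quantum Monte Carlo exist, and why a ‘dead spot’ opens up between the few-qubit systems we can simulate exactly and the large systems we can treat classically.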

So in macroscopic quantum phenomena there exists a ‘dead spot’ in scale, somewhere between our ability to simulate quantum systems (a few qubits/atoms) and our ability to describe them fully classically. I think that’s a very important area, and one that quantum computers will be able to help with, because then you can push your lower bound up by using a QC as a simulator. In fact, by definition, you push up your lower bound by building a QC: you need more qubits than the number of variables you are trying to simulate, so you will already have proved that you can build a quantum system that large. I’m going to stop that train of thought here, lest I get into all kinds of dodgy water involving fundamental limits to coherence based on metrics such as mass, distance, number of particles, the Penrose hypothesis etc. My head hurts already and I haven’t had enough coffee for that debate 😛

Dave, I agree with your point about design by evolution and trial and error. However, the problem is that you still apply some classical intuition even during this process: ‘Hmmm, my nanomotor is wobbling a bit, let’s move a few of these atoms this way, that might balance it better’ – there’s no real analogue in quantum systems; you can’t really ‘guess’ what making a change will do to a quantum system.

The only time you don’t need this intuition is when you run a GA-type (genetic algorithm) design process, but in order to do this you need your design-build-testfitness-mutate-build-testfitness… cycle to be quick, and even then you need a good initial guess in the first place.
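For what it’s worth, that cycle can be sketched as a toy evolutionary loop. Everything here (the target ‘spec’, the fitness function, the mutation step) is my own illustrative stand-in, not any real design tool:

```python
import random

# Toy sketch of the design -> build -> test-fitness -> mutate cycle.
# The "design" is a list of numbers; "building and testing" is just
# measuring distance from a target spec. In a real nanoscale workflow
# the fitness step would be a slow physical build-and-characterise,
# which is why the cycle must be quick to be practical.
random.seed(0)
TARGET = [0.2, 0.8, 0.5]  # stand-in for a desired specification

def fitness(design):
    # Higher is better: negative squared error against the target.
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def mutate(design, step=0.05):
    # Small random perturbation of every design parameter.
    return [d + random.uniform(-step, step) for d in design]

def evolve(initial, generations=200, children=20):
    best = initial
    for _ in range(generations):
        candidates = [mutate(best) for _ in range(children)] + [best]
        best = max(candidates, key=fitness)  # select, then repeat
    return best

# A decent initial guess matters: start somewhere near the target region.
result = evolve([0.5, 0.5, 0.5])
print([round(x, 2) for x in result])  # should land close to TARGET
```

Note that the loop only ever compares fitness scores; it never needs intuition about *why* a mutation helped, which is the point being made about quantum systems above.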

For quantum systems, maybe an evolutionary design process will be better in the first instance (for example in quantum control circuits with noisy qubits, and in quantum neural networks). On the other hand, I don’t think there’s any fundamental barrier to developing such a ‘quantum design intuition’; we just don’t have enough macroscopic quantum systems to ‘play with’ yet. Hopefully that’s something I’ll help to fix 😉

I love musing about the quantum-classical crossover and ‘Schrödinger cat’ states, so it’s something I might post more about in future.

  6. physicsandcake says:

    “Lots of things exist that we can’t build”…

    “I would agree, but this is not because they are impossible i.e. prohibited by Physics, but due to the economic costs vs. rewards of building tools which can fabricate at those scales.”

    I think it’s also a technology maturity problem. With all our current technology we probably still couldn’t perform nuclear transmutation, no matter how much resource/money we pumped into such a project, but we might be able to with a technological breakthrough.

  7. dark_daedalus says:

Since my detailed knowledge of quantum physics would fit in a matchbox without first removing the matches :-), I will defer to your and Geordie’s knowledge on QC etc. and de facto have to agree.

It’s just that since the Michio Kaku lecture, I have become a bit picky about the use of the word “impossible” when we really mean “not possible with current technology” or “not possible economically” etc.

EoC talks about the first generation of assembler systems in a way similar to what I have read of the history of computer development from the 1920s onwards…

Once you have one nanoassembler, you can make more with it… it’s making the first general-purpose nanoassembler that is the challenge.

The technology to do this will be different from the nanoassembler bootstrapping that comes later, in the same way that the first computers, produced by pencil-and-paper design, hand-wiring, relays, thermionic valves and experimental trial and error, differ from today’s CPUs, which are designed using computers to simulate new CPUs…

P.S. Thermionic valves are still used to this day, as klystrons and TWTAs (travelling-wave tube amplifiers), so no technology is ever obsolete, just more and more niche.

  8. Geordie, I really think you’re mis-stating Smalley’s objections. His point wasn’t that we can’t simulate quantum systems well enough to achieve the control needed to assemble, e.g., arbitrary diamond structures. Rather, his point was that you just can’t get around basic physical chemistry. There are many examples of quantum systems that are sufficiently simple that we can simulate them well – that doesn’t mean that you can then make them do arbitrary things. Believe me, as someone who actually works on atomic-scale devices, I can tell you that having quantum computers, while nice, would not actually make engineering at those scales much simpler.
