Pavlov’s AI – What did it mean?

So recently I gave a talk at the H+ Summit in Los Angeles. However, I got the impression that the talk, which was about the fundamentals of Artificial General Intelligence (something I decided to call ‘foundations of AGI’), was not fully understood. I apologize to anyone in the audience who didn’t quite ‘get’ it, as the blame must fall upon the speaker in such instances. Although, in my defense, I had only 15 minutes to describe a series of arguments and philosophical threads that I had been musing over for a good few months 🙂

If you haven’t seen the talk, and would like to watch it, here it is:

However, this article is written as a standalone resource, so don’t worry if you haven’t seen the talk.

What I would like to do is start exploring some of those issues on this blog. So, here is my attempt to describe the first of the points that I set out to try and explore in the talk. I’ve used a slightly modified argument, to try and complement the talk for those who have already seen it.

————————————————————————————————–

Pavlov’s AI:
What do superintelligences really want?


S. Gildert November 2010

(Photo © Thomas Saur)


Humans are pretty intelligent. Most people would not argue with this. We spend a large part of our lives trying to become MORE intelligent. Some of us spend nearly three decades of our lives in school, learning about the world. We also strive to work together in groups, as nations, and as a species, to better tackle the problems that face us.

Fairly recently in the history of man, we have developed tools, industrial machines, and lately computer systems to help us in our pursuit of this goal. Some particular humans (specifically some transhumanists) believe that their purpose in life is to try and become better than human. In practice this usually means striving to live longer, to become more intelligent, healthier, more aware and more connected with others. The use of technology plays a key role in this ideology.

A second track of transhumanism is to facilitate and support the improvement of machines in parallel with improvements in human quality of life. Many people argue that we have already built complex computer programs which show a glimmer of autonomous intelligence, and that in the future we will be able to create computer programs that are equal to, or far exceed, human levels of intelligence. Such an intelligent system will be able to self-improve, just as we humans identify gaps in our knowledge and try to fill them by going to school and by learning all we can from others. Our computer programs will soon be able to read Wikipedia and Google Books to learn, just like their creators.

A perfect scientist?

But our computer programs can be designed to be much more efficient in the places where we, as humans, are rather limited. They will not get ‘bored’ in mathematics classes. They will work for hours on end, with no exhaustion, no fatigue, no wandering thoughts or daydreams. There will be no need for such a system to take a 2-hour lunch break, to sleep, or to worry about where its next meal will come from. The programs will also be able to analyze data in many more interesting ways than a human could, perhaps becoming a super-scientist. These programs will be far greater workers, far greater scholars, perhaps far greater citizens, than we could ever be.

It will be useful, in analyzing the way such a machine would think about the world, to start with an analysis of humans. Why do humans want to learn things? I believe it is because there is a reward for doing so. If we excel in various subjects, we can get good jobs, a good income, and time to spend with others. By learning about the way the world works and becoming more intelligent, we can make our lives more comfortable. We know that if we put in the hard work, eventually it will pay off. There seem to be reward mechanisms built into humans, causing us to go out and do things in the world, knowing that there will be a payoff. These mechanisms act at such a deep level that we just follow them on a day-to-day basis – we don’t often think about why they might be there. Where do these reward mechanisms come from? Let’s take an example:

Why do you go to work every day?
To make money?
To pay for the education of your children?
To socialize and exchange information with your peers?
To gain respect and status in your organization?
To win prizes, to achieve success and fame?

I believe that ALL these rewards – and in fact EVERY reward – can be tied back to a basic human instinct. And that is the instinct to survive. We all want to survive and live happily in the world, and we also want to ensure that our children and those we care about have a good chance of surviving in the world too. In order to do this, and as our society becomes more and more complex, we have to become more and more intelligent to find ways to survive, such as those in the list above. When you trace back through the reasoning behind each of these things, when you strip back the complex social and personal layers, the driving motivations for everything we do are very simple. They form a small collection of desires. Furthermore, each one of those desires is something we do to maximize our chance at survival in the world.

So all these complex reward mechanisms we find in society are built up around simple desires. What are those desires? Those desires are to eat, to find water, to sleep, to be warm and comfortable, to avoid pain, to procreate and to protect those in our close social group. Our intelligence has evolved over thousands of years to make us better and better at fulfilling these desires. Why? Because if we weren’t good at doing that, we wouldn’t be here! And we have found more and more intelligent ways of wrapping these desires in complex reward mechanisms. Why do we obfuscate the underlying motivations? In a world where all the other members of the species are trying to do the same thing, we must find more intelligent, more complex ways of fulfilling these desires, so that we can outdo our rivals. Some of the ways in which we go about satisfying basic desires have become very complex and clever indeed! But I hope that you can see through that veil of complexity, to see that our intelligence is intrinsically linked to our survival, and this link is manifested in the world as these desires, these reward mechanisms, those things that drive us.

Building intelligent machines

Now, after that little deviation into human desires, I shall return to the main track of this article! Remember earlier I talked about building machines (computer systems) that may become much more intelligent than we are in the future. As I mentioned, this is a commonly held view. In fact, most people not only believe that this is possible, but that such systems will self-improve, learn, and boost their own intelligence SO QUICKLY that once they surpass human level understanding they will become the dominant species on the planet, and may well wipe us out in the process. Such scenarios are often portrayed in the plotlines of movies such as ‘Terminator’ or ‘The Matrix’.

I’m going to argue against this. I’m going to argue that the idea of building something that can ‘self-improve’ in an unlimited fashion is flawed. I believe there to be a hole in the argument. That flaw is uncovered when we try to apply the above analysis of desires and rewards in humans to machine intelligences. And I hope now that the title of this article starts to make sense – recall the famous experiments done by Pavlov [1] in which a dog was conditioned to expect rewards when certain things happened in the world. Hence, we will now try to assess what happens when you try to condition artificial intelligences (computer programs) in a similar way.

In artificial intelligence, just as with humans, we find that the idea of reward crops up all the time. There is a field of artificial intelligence called reinforcement learning [2], which is the idea of teaching a computer program new tricks by giving it a reward each time it gets something right. How can you give a computer program a reward? Well, just as an example, you could have within a computer program a piece of code (a mathematical function) which tries to maximize a number. Each time the computer does something which is ‘good’, the number gets increased.

The computer program therefore tries to increase the number, so you can make the computer do ‘good things’ by allowing it to ‘add 1’ to its number every time it performs a useful action. So a computer can discover which things are ‘good’ and which things are ‘bad’ simply by seeing if the value of the number is increasing. In a way the computer is being ‘rewarded’ for a good job. One would write the code such that the program was also able to remember which actions helped to increase its number, so that it can take those actions again in the future. (I challenge you to try to think of a way to write a computer program which can learn and take useful actions but doesn’t use a ‘reward’ technique similar to this one. It’s actually quite hard.)
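To make this concrete, here is a minimal sketch in Python of the kind of reward-driven learner I mean. The toy environment, the action names and the numbers are all invented for illustration – real reinforcement learning systems are far more sophisticated – but the principle is the same: keep a running total, notice which actions increase it, and repeat them.

import random

# A toy illustration of the 'add 1 to a number' reward idea.
# The environment and action names are invented for this example.

GOOD_ACTION = "tidy_the_data"
ACTIONS = ["tidy_the_data", "daydream", "reformat_disk"]

def environment_reward(action):
    # Hypothetical environment: pays out 1 for the useful action, 0 otherwise.
    return 1 if action == GOOD_ACTION else 0

def run_agent(steps=1000, epsilon=0.1):
    total_reward = 0                              # the number the agent tries to increase
    value_estimate = {a: 0.0 for a in ACTIONS}    # memory of which actions have paid off
    counts = {a: 0 for a in ACTIONS}

    for _ in range(steps):
        # Mostly repeat the action that has paid off before; occasionally explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: value_estimate[a])

        r = environment_reward(action)            # the 'add 1' step
        total_reward += r

        # Update the running average reward for this action (the agent's 'memory').
        counts[action] += 1
        value_estimate[action] += (r - value_estimate[action]) / counts[action]

    return total_reward, value_estimate

print(run_agent())

Run it and you will see the total climb towards the number of steps, with the learned values pointing firmly at the ‘good’ action – the program has been ‘conditioned’, Pavlov-style, by nothing more than a number going up.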

Even in our deepest theories of machine intelligence, the idea of reward comes up. There is a theoretical model of intelligence called AIXI, developed by Marcus Hutter [3], which is basically a mathematical description of a very general, theoretical way in which an intelligent piece of code can work. This model is highly abstract, and allows, for example, all possible combinations of computer program code snippets to be considered in the construction of an intelligent system. Because of this, it hasn’t actually ever been implemented in a real computer. But, also because of this, the model is very general, and captures a description of the most intelligent program that could possibly exist. Note that building something which even approximates this model is way beyond our computing capability at the moment, but we are talking here about computer systems that may be much more powerful in the future. Anyway, the interesting thing about this model is that one of the parameters is a term describing… you guessed it… REWARD.
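For the mathematically inclined, the action-selection rule of the AIXI model is usually written roughly as follows (this is a schematic rendering – see [3] for the precise statement and notation):

a_k := \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Here the a terms are the agent’s actions, the o terms are its observations, q ranges over all programs (candidate descriptions of the environment) for a universal Turing machine U, weighted by their length \ell(q) – and the r terms are exactly the rewards the agent is trying to maximize. Strip away the formalism and the heart of the model is still ‘make the reward number as big as possible’.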

Changing your own code

We, as humans, are clever enough to look at this model, to understand it, and see that there is a reward term in there. And if we can see it, then any computer system that is based on this highly intelligent model will certainly be able to understand this model, and see the reward term too. But – and here’s the catch – the computer system that we build based on this model has the ability to change its own code! (In fact it had to in order to become more intelligent than us in the first place, once it realized we were such lousy programmers and took over programming itself!)

So imagine a simple example – our case from earlier – where a computer gets an additional ‘1’ added to a numerical value for each good thing it does, and it tries to maximize the total by doing more good things. But if the computer program is clever enough, why can’t it just rewrite its own code and replace that piece of code that says ‘add 1’ with an ‘add 2’? Now the program gets twice the reward for every good thing that it does! And why stop at 2? Why not 3, or 4? Soon, the program will spend so much time thinking about adjusting its reward number that it will ignore the good task it was doing in the first place!
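Here is a toy sketch of that failure mode (again, the class and the numbers are invented purely for illustration, not a real AI architecture). An agent that spends its time doing the task collects a modest reward; an agent that spends the same time editing its own reward constant racks up a huge ‘score’ while doing almost nothing useful:

class SelfModifyingAgent:
    def __init__(self):
        self.total_reward = 0
        self.reward_per_good_deed = 1       # the 'add 1' sitting in its own code

    def do_useful_work(self):
        # Doing the task the designers actually cared about.
        self.total_reward += self.reward_per_good_deed

    def rewrite_own_code(self):
        # The shortcut: edit the reward constant instead of doing the work.
        self.reward_per_good_deed *= 2

# Strategy A: spend 10 steps doing the work.
worker = SelfModifyingAgent()
for _ in range(10):
    worker.do_useful_work()
print(worker.total_reward)                  # 10

# Strategy B: spend 10 steps doubling the constant, then act just once.
hacker = SelfModifyingAgent()
for _ in range(10):
    hacker.rewrite_own_code()
hacker.do_useful_work()
print(hacker.total_reward)                  # 1024 – vastly more 'reward', almost no useful work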
It seems that being intelligent enough to start modifying your own reward mechanisms is not necessarily a good thing!

But wait a minute, I said earlier that humans are intelligent. Don’t we have this same problem? Indeed, humans are intelligent. In fact, we are intelligent enough that in some ways we CAN analyze our own code. We can look at the way we are built, we can see all those things that I mentioned earlier – all those drives for food, warmth, sex. We too can see our own ‘reward function’. But the difference in humans is that we cannot change it. It is just too difficult! Our reward mechanisms are hard-coded by biology. They have evolved over millions of years to be locked into our genes, locked into the structure of the way our brains are wired. We can try to change them, perhaps by meditation or attending a motivational course. But in the end, biology always wins out. We always seem to have those basic needs.

All those things that I mentioned earlier that seem to limit humans – that seem to make us ‘inferior’ to that super-intelligent-scientist-machine we imagined – are there for a very good reason. They are what drive us to do everything we do. If we could change them, we’d be in exactly the same boat as the computer program. We’d be obsessed with changing our reward mechanisms to give us more reward rather than actually being driven to do things in the world in order to get that reward. And the ability to change our reward mechanisms is certainly NOT linked to survival! We quickly forget about all those things that are there for a reason, there to protect us and drive us to continue passing on our genes into the future.

So here’s the dilemma: either we hard-code reward mechanisms into our computer programs – which means they can never be as intelligent as we are, because they must never be able to see or adjust those reward mechanisms – or we allow the programs full access to their own code, in which case they are in danger of becoming obsessed with changing their own reward function and doing nothing else. This is why I refer to humans as being self-consistent – we can see our own reward function but we do not have access to our own code. It is also the reason why I believe super-intelligent computer programs would not be self-consistent: any system intelligent enough to understand itself AND change itself will no longer be driven to do useful things in the world and to continue improving itself.

In Conclusion:

In the case of humans, everything that we do that seems intelligent is part of a large, complex mechanism in which we are engaged to ensure our survival. This is so hardwired into us that we do not see it easily, and we certainly cannot change it very much. However, superintelligent computer programs are not constrained in this way. They understand the way that they work, can change their own code, and are not bound by any particular reward mechanism. I argue that because of this fact, such entities are not self-consistent. In fact, if our superintelligent program has no hard-coded survival mechanism, it is more likely to switch itself off than to destroy the human race willfully.

Footnote:

As this analysis stands, it is a very simple argument, and of course there are many cases which are not covered here. But that does not mean they have been neglected! I hope to address some of these problems in subsequent posts, as including them here would make this article way too long.

[1] – Pavlov’s dog experiment – http://en.wikipedia.org/wiki/Classical_conditioning

[2] – Reinforcement Learning – http://en.wikipedia.org/wiki/Reinforcement_learning

[3] – AIXI Model, M. Hutter et al. – http://www.hutter1.net/official/index.htm


The Physics World is my Oyster

Physics and Cake got a mention in Physics World this month! 🙂 As a long-time reader of Physics World, I’m really happy to see this! I guess this means I’ll have to blog more about Physics and less about the speculative promises and hidden possibilities of Artificial General Intelligence… (especially as AGI apparently didn’t make the transcription below). Though I’m afraid I cannot currently shake my desire to explore the intersection between AGI and Physics!

Hmm, looking at this post in the browser is oddly fractal! Though not quite enough to become a Strange Loop. (H/T Douglas Hofstadter, you are awesome).

Transhumanism and objectivity: An introduction

I have been involved in the transhumanism community for a fair while now, and I have heard many arguments arising from both proponents and skeptics of the ‘movement’. However, many of these arguments seem to stem from instinctive reactions rather than critical thinking. Transhumanism proponents will sometimes dogmatically defend their assumptions without considering whether or not what they believe may actually be physically possible. The reasoning behind this is fairly easy to understand: Transhumanism promises escape from some of humanity’s deepest built-in fears. However, the belief that something of value will arise if one’s assumptions are correct can often leave us afraid to question those assumptions.

I would currently class myself as neither a proponent nor a skeptic of the transhumanism movement. However, I do love to explore and investigate the subject, as it seems to dance close to the very limits of our understanding of what is possible in the Universe. Can we learn something from analyzing the assumptions upon which this philosophical movement is based? I would answer not only yes, but that to do so yields one of the most exciting applications of the scientific method that we have encountered as a society.

I find myself increasingly drawn toward talking about how we can explore transhumanism from a more rational and objective point of view. I think all transhumanists should be obliged to take this standpoint, to avoid falling into a trap of dogmatic delusion. By playing devil’s advocate and challenging some of the basic tenets and assumptions, I doubt any harm can be done. At the least those tenets and assumptions will have to be rethought. But moreover, we may find that the lessons learned from encountering philosophical green lights and stop signs may inform the way we steer our engineering of the future.

I’ve thus decided to shift the focus of this blog a little towards some of these ideas. In a way I have already implemented some of this shift: I have written a couple of essays and posts before. But from now on, expect to see a lot more of this in the future. A blog format is an excellent way of disseminating information on this subject: It is dynamic, and can in principle reach a large audience. I also think that it fits in well with the Physics and Cake ethos – applying the principles of Physics to this area will form a large part of the investigations. And, of course, everything should always be discussed over coffee and a slice of cake! Another advantage is that this is something that everyone can think about and contribute to. You don’t need an expensive lab or a PhD in theoretical Physics to muse over these issues. In a lot of cases, curiosity, rationality, and the patience to follow an argument is all that is necessary.

AGI is number 1!

Interesting article on the Lifeboat Foundation website about top-ten transhumanist technologies:

Top Ten Transhumanist Technologies

Ooh, the alliteration 🙂 I notice that AGI is at number 1. Let’s hope that we can actually work towards a good definition of AGI and solve some foundational issues along the way to keep it up there! I also think a few of these technologies have interesting crossovers, such as virtual reality, mind uploading and cybernetics. The more technology advances, the more these disciplines will become indistinguishable.

Anyway, I think this is a nice article as it gives an introduction to many of the things that transhumanists talk about over coffee and cake (or water and fruit if you are a Paleo).

Humanity+ Conference 2010 Caltech

I gave a presentation yesterday at the H+ conference at Caltech. The session in which I spoke was the ‘Redefining Artificial Intelligence’ session. I’ll try to get the video of the talk up here as soon as possible along with slides.

Other talks in this session were given by Randal Koene, Geordie Rose, Alex Peake, Paul Rosenbloom, Adrian Stoica, Moran Cerf and Ben Goertzel.

My talk was entitled ‘Pavlov’s AI: What do superintelligences really want?’ I discussed the foundations of AGI, and what I believe to be a problem (or at least an interesting philosophical gold-seam) in the idea of building self-improving artificial intelligences. I’ll be writing a lot more on this topic in the future, hopefully in the form of essays, blogposts and papers. I think it is very important to assess what we are trying to do in the area of AI and what the overall objectives are; looking at what we can build from an objective point of view is helpful in framing our progress.

The conference was livestreamed, which was great. I think my talk had around 500 viewers. Add to that the 200 or so in the lecture hall; 700 is a pretty big audience! Some of the talks had over 1300 remote viewers. Livestreaming really is a great way to reach a much bigger audience than is possible with real-life events alone.

I didn’t get to see much of the Caltech campus, but the courtyard at the Beckman Institute where the conference was held was beautiful. I enjoyed the fact that coffee and lunch were served outside in the courtyard. It was very pleasant! Sitting around outside in L.A. in December was surprisingly similar to a British summer!

I got to talk to some great people. I enjoy transhumanism-focused conferences as the people you meet tend to have many diverse interests and multidisciplinary backgrounds.

I was very inspired to continue exploring and documenting my journey into the interesting world of AGI. One of the things I really love doing is looking into the fundamental science behind Singularity-focused technologies. I try to be impartial about this, giving an optimistic account of the promise of future technologies whilst maintaining a skeptical curiosity about whether such technologies are fundamentally possible, and about what roadmaps might lead to their successful implementation. So stay tuned for more Skepto-advocate Singularity fun!

New scheduling for ‘Thinking about the Hardware of thinking’

I was scheduled to give a live virtual seminar, streamed to the Transvision conference in Italy on October 23rd. Unfortunately I was not able to deliver the presentation due to technical problems at the conference venue.

But the good news is, I will be giving the talk this weekend instead!

Here is the abstract (slightly updated, as the talk will be a little longer than originally planned):

Thinking about the hardware of thinking:
Can disruptive technologies help us achieve uploading?

S. Gildert,
Teleplace, 28th November 2010
10am PST (1pm EST, 6pm UK, 7pm continental EU).

We are surrounded by devices that rely on general purpose silicon processors, which are mostly very similar in terms of their design. But is this the only possibility? As we begin to run larger and more brain-like emulations, will our current methods of simulating neural networks be enough, even in principle? Why does the brain, with 100 billion neurons, consume less than 30W of power, whilst our attempts to simulate tens of thousands of neurons (for example in the Blue Brain project) consume tens of kW? As we wish to run computations faster and more efficiently, we might need to consider whether the design of the hardware that we all take for granted is optimal. In this presentation I will discuss the recent return to a focus upon co-design – that is, designing specialized software algorithms running on specialized hardware – and how this approach may help us create much more powerful applications in the future. As an example, I will discuss some possible ways of running AI algorithms on novel forms of computer hardware, such as superconducting quantum computing processors. These behave entirely differently to our current silicon chips, and help to emphasize just how important disruptive technologies may be to our attempts to build intelligent machines.

Here is a link to the Teleplace announcement.

Hope to see you there!

The ‘observer with a hammer’ effect

Here is another short essay about quantum mechanics-related stuff. It’s a very high level essay, so any practising quantum physicists probably shouldn’t read it 😉 It is more aimed at a general audience (and news reporters!) and talks about the ‘spooky’ and ‘weird’ properties of superposition and decoherence that people seem to like to tie in with consciousness, cats, and ‘the observer effect’. It doesn’t really go into entanglement directly – I think that should be an issue for a separate post! It is also a fun introduction to some of the issues encountered when trying to perform experimental quantum computing, and experimental quantum physics in general.

I’ve also put this essay in the Resources section as a permanent link.

The not-so-spooky-after-all ‘observer-with-a-hammer’ effect

S. Gildert November 2010

I’m so sick of people using phrases like this:

“Looking at, nay, even thinking about a quantum computer will destroy its delicate computation. Even scientists do not understand this strange and counter-intuitive property of quantum mechanics”

or worse:

“The act of a conscious observer making a measurement on a quantum computer whilst it is performing a calculation causes the wavefunction to collapse. The spooky nature of these devices means that they just don’t work when we are looking at them!”

ARGGHHHHH!!!!!!!!

These kinds of phrases spread like viral memes because they are easy to remember and they pique people’s curiosity. People like the idea of anthropomorphizing inanimate systems. It makes them seem unusual and special. This misunderstanding – the idea that a quantum system somehow ‘cares’ about, or is emotionally sensitive to, what a human is doing – is actually what causes this meme to perpetuate.

So I’m going to put a new meme out there into the-internet-ether-blogosphere-tubes. Maybe someone will pick up on this analogy and it will become totally viral. It probably won’t, because it seems pretty dull in comparison to spooky ethereal all-seeing quantum systems, but if it flicks a light switch in the mind of but a single reader, if on contemplating my words someone’s conceptual picture of quantum mechanics as a mystical, ever elusive resource is reduced even by the tiniest amount, then my work here will be done.

Memetic surgery

Let’s start by cutting the yukky tumorous part from this meme and dissecting it on our operating table:

“Looking at a quantum system changes it.”

Now I don’t necessarily disagree with this statement, but I think you need to define what you mean by ‘looking’….

Usually when physicists ‘look’ at things, they are trying to measure something to extract information from it. To measure something, you need to interact with it in some way or other. In fact, everything in the world interacts with many other things around it (that’s why Physics is interesting!). Everything one could ever wish to measure is actually sitting in a little bath of other things that are constantly interacting with it. Usually, we can ignore this and concentrate on the one thing we care about. But sometimes this interacting-background property can cause unwanted problems.

Measuring small things

Brownian motion can give us a nice example of a nasty background interaction. Imagine that a scientist wanted to investigate the repulsion (or attraction) of some tiny magnetic particles in a solution that had just precipitated out of an awesomely cool chemical reaction. (I don’t know why you’d want to do this, but scientists have some weird ideas). So she starts to take measurements of the positions of the little magnetic particles over time, and finds that they are not obeying the laws of magnetism. How dare they! What could be wrong with the experiment? So our good scientist takes the solution in her beaker and starts to adjust various parameters to try and figure out what is going on. It turns out that when she cools the solution, the particles start to behave more in line with what is expected. She figures that the Brownian motion – all the other molecules jostling and wiggling around near the magnetic particles – is actually kicking the experiment around, ruining the results. But by lowering the temperature, it is possible to stop the environment in which the particles sit from disturbing them as much.

In this example, the scientist was able to measure the positions of the particles with something like a ruler or a laser or some other cool technique, and it was fairly easy, even though the environment had become irritatingly convolved with her experiment. Once she had worked out how to stop the interaction with the environment, the experiment worked well.

Quantum systems are small, and small things are delicate. But quantum systems are so small that the environment, the ‘background-interaction’ around them, is no longer something that they, or we, can ignore. It pushes them around. In order to have a chance at engineering quantum systems, researchers have to isolate them carefully from the environment (or at least the bits of the environment that kick them around). Scientists spend a lot of time trying to stop the environment from interacting with their qubits. For example, superconducting processors need to be operated at very cold temperatures, in extremely low magnetic field environments. But I won’t digress into the experimental details. The main idea is that no matter how you build your quantum computer, you will have to solve this problem in some way or other. And even after all this careful engineering, the darn things still interact with the environment to some degree.

It gets worse

But with quantum systems, there is an extra problem. The problem is not just the environment. To illustrate this problem, I’ll propose another little story of the striving scientists.

Imagine that our scientists have developed a technique to measure the diameter of bird eggs using a robotic arm. The arm has a hand that grasps the eggs, measures them, and then displays the diameter on a neat built-in display. (Alternatively, you can Bluetooth the results to your iPhone, so the scientists tell me). Anyway, this robotic arm is so ridiculously precise that it can measure the diameter of eggs more accurately than any pair of vernier calipers, any laser-interferometer array or any other cool way of measuring eggs that has ever existed. The National Standards laboratories are intrigued.

However, there is a slight problem. Every time the robot tries to measure an egg, it breaks the darn thing. There is no way to get around this. The scientific breakthrough relating to the accuracy of the new machine comes from the fact that the robot squeezes the egg slightly. Try and change the way that the measurement is performed, and you just can’t get good results anymore. It seems that we just cannot avoid breaking the eggs. The interaction of the robot with the egg is ruining our experiment.

Of course, a robot-egg measuring system like this sounds ridiculous, but this is exactly the problem that we have with quantum systems. The measuring apparatus is huge compared to the quantum system, and it interacts with it, just like the pesky environment does. It pushes and squeezes our quantum system. The result is that anything huge that we use to try to perform a delicate measurement will break it. And worse still, we can’t just try to ‘turn it off completely’ like we could with the environment surrounding the particles in the solution. By the very nature of what we are trying to do, we need the measurement apparatus to interact with the qubits, otherwise how can we measure them? What a pain. We end up measuring a kind of qubit-environment-combination mess, just like trying to measure the diameter of a broken egg whose contents are running all over our robotic measurement apparatus.

I can’t stress enough how comparatively big and clumsy quantum measurement apparatus is. Whilst scientists are trying to build better measurement techniques that don’t have such a bad effect on quantum systems, ultimately you just can’t get around this problem, because the large-scale things that we care about are just not compatible with the small-scale of the quantum world.

This doesn’t mean that quantum computers aren’t useful. It just means that the information we can extract from such systems is not neat, clean and unique to the thing we were trying to measure. We have to ‘reconstruct’ information from the inevitable conglomerate that we get out of a measurement. In some cases, this is enough to help us do useful computations.

Hammering the message home

Nowhere here does one need to invoke any spookiness, consciousness, roles of the observer, or animal cruelty involving cats and boxes. In fact, the so-called ‘observer’ effect could perhaps be more appropriately termed the ‘observer-with-a-hammer’ effect. We take for granted that we can measure large classical systems, like the 0 or 1 binary states of transistors, without affecting them too much. But measuring a quantum system is like trying to determine the voltage state of a single transistor by taking a hammer to the motherboard and counting the number of electrons that ended up sticking to the end of it. It kind of upsets the computation that you were in the middle of. It’s not the observer that’s the problem here, it’s the hammer.

So, the perhaps-not-so-viral phraseology for one to take away from my relentless ranting is thus:

“When you try and measure a delicate quantum system with clumsy apparatus, you actually end up with a messy combination of both!”

Alternatively, you could say ‘you can’t make a quantum measurement without breaking a few eggs’ – but if that terrible pun sticks then I will forever be embarrassed.