Should we trust robot scientists?
The scientists mentioned in the title do not refer to the (presumably competent and honourable) scientists who study robots (or computers), but rather to computers (or robots) which pretend to act like scientists. Using the numerical simulation of turbulence as a guiding thread, this talk reviews the influence that the rapid evolution of computer power is having on scientific research. It is argued that this influence can be divided into three stages. In the earliest (`heroic') phase, simulations are expensive and can at most be considered substitutes for experiments. Later, as computers grow faster and some meaningful simulations can be performed overnight, it becomes practical to use them as (`routine') tools to provide answers to specific theoretical questions. Finally, some simulations become `trivial', able to run in minutes, and it is possible to think of computers as `Monte Carlo' theory machines, which can be used to systematically pose a wide range of `random' theoretical questions, only later evaluating which of them are interesting or useful. We will argue that, although apparently wasteful, this procedure has the advantage of being reasonably independent of received wisdom, and thus more able than human researchers to escape established paradigms. The rate of growth of computer power ensures that the interval between consecutive stages is about fifteen years, and, for some basic problems, we are starting to enter the third phase. Rather than offering conclusions, the purpose of the talk is to stimulate discussion on whether machine- and human-generated theories can be considered comparable concepts, and on how the challenges and opportunities created by our new computer `colleagues' can be made to fit into the traditional research enterprise.