# The Invisible Engineers

## How AI Is Building at the Scale of Atoms

**Dr. Ananya Mehta**

*A Kelford Press Original*

---

*Where Words Find Their Home*

---

**First published in 2026 by Kelford Press**

© 2026 Kelford Press. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means — electronic, mechanical, photocopying, recording, or otherwise — without the prior written permission of the publisher, except for brief quotations in reviews or academic work.

The information in this book is intended for educational purposes and does not constitute scientific, medical, or investment advice. While every effort has been made to ensure accuracy, nanotechnology and artificial intelligence are rapidly evolving fields; some details may have changed since the time of writing.

ISBN 978-1-7396-2108-9 (Digital)
ISBN 978-1-7396-2109-6 (Print)
ISBN 978-1-7396-2110-2 (Audio)

Cover design by Kelford Press
Typeset in Cormorant Garamond and Lora

**kelfordpress.com**

---

## Dedication

*For the scientists who build what the rest of us cannot see.*
*And for anyone who has looked at the world and suspected*
*there was more happening than met the eye.*

---

## Contents

1. [The Room Where Atoms Move](#chapter-1-the-room-where-atoms-move)
2. [Designing Molecules Atom by Atom](#chapter-2-designing-molecules-atom-by-atom)
3. [The Self-Assembling Future](#chapter-3-the-self-assembling-future)
4. [Tiny Doctors](#chapter-4-tiny-doctors)
5. [Quantum Eyes](#chapter-5-quantum-eyes)
6. [Programmable Matter](#chapter-6-programmable-matter)
7. [The Energy Revolution Below](#chapter-7-the-energy-revolution-below)
8. [The Ethics of the Invisible](#chapter-8-the-ethics-of-the-invisible)

Acknowledgements
About the Author
Also by Kelford Press

---

# Chapter 1: The Room Where Atoms Move

The cooling systems never stopped humming.
That was the first thing visitors noticed about Laboratory 4.12 on the third floor of ETH Zurich's Department of Chemistry — not the racks of servers along the back wall, not the antiseptic smell of filtered air, not the six monitors arranged in a horseshoe at the central workstation. It was the hum. A low, persistent drone that vibrated through the floor tiles and settled behind the sternum, as though the building had a pulse.

Dr. Lena Krause had long since stopped hearing it.

On the evening of 17 November 2025, she sat at that horseshoe of screens, her left hand wrapped around a mug of cold peppermint tea, her right hovering above the keyboard. She was watching something she had never seen before — something, she would later tell a colleague in Basel, that made her forget to breathe.

On the central monitor, a molecular structure rotated against a black background, rendered in the standard colours of computational chemistry: carbon in grey, nitrogen in blue, oxygen in red, bonds drawn as pale cylinders. The structure was roughly spherical, a cage of interlocking hexagonal and pentagonal rings, not unlike a Buckminster Fuller dome. But nested inside, held by precisely angled coordination bonds, sat a single atom of platinum — a bright silver sphere, conspicuously alone, like a pearl trapped in a lattice of wire.

The molecule had been designed seventeen minutes earlier. Not by Krause. Not by any of the four postdocs who shared her laboratory. Not by any human chemist alive or dead.

It had been generated by CAMOS-3, a generative AI system developed by ETH Zurich and the Swiss Federal Laboratories for Materials Science and Technology, trained on 4.2 million known molecular structures. Krause had given CAMOS-3 a set of constraints: design a cage molecule, no larger than 2.8 nanometres in diameter, capable of trapping a single platinum atom and releasing it in response to a specific trigger — a pulse of ultraviolet light at 365 nanometres.
The system returned forty-seven candidates. Candidate 31 was the one now on her screen.

She already knew, from the system's stability calculations, that the molecule was thermodynamically viable. The coordination geometry around the platinum centre was clean. The UV-responsive bonds — a pair of azobenzene switches embedded in the cage walls — were positioned so that their photo-isomerisation would distort the cage just enough to open a gap, letting the platinum escape.

It was elegant. It was also, as far as Krause could determine from the Cambridge Structural Database, entirely novel. No chemist had published this architecture. No patent described it. CAMOS-3 had not retrieved it from memory. It had invented it.

Krause picked up her phone and texted her group leader, Professor Markus Brenner, three words: *Come see this.*

---

What happened in that Zurich laboratory was not automation. Automation is a machine performing a task a human has already designed — a robotic arm welding a car chassis, a program sorting email. What CAMOS-3 did was qualitatively different. It explored a space of possible molecular structures so vast that no team of chemists, working for a thousand years, could have surveyed it. It navigated that space using patterns extracted from millions of existing molecules — patterns too subtle and too high-dimensional for any human mind to hold at once. And it arrived at something genuinely new.

This is the story of what happens when artificial intelligence learns to design at the scale of atoms. It stretches from a lecture hall in Pasadena in 1959 to a supercomputer in Shenzhen in 2026, from the interior of a living cell to the surface of a silicon wafer, from a single platinum atom caged in a molecular lattice to the future of medicine, materials, and energy.
But before we can understand where this convergence is heading, we need to understand the two rivers — nanotechnology and artificial intelligence — that flowed separately for decades and have only now begun to merge.

## Plenty of Room

On the evening of 29 December 1959, the physicist Richard Feynman stood before several hundred scientists at the annual meeting of the American Physical Society, held that year at Caltech in Pasadena. His own Nobel Prize was six years away, but he was already famous for his brilliance, his irreverence, and his talent for making the abstruse feel vivid. The title of his after-dinner talk was modest and slightly peculiar: "There's Plenty of Room at the Bottom."

Feynman's argument was deceptively simple. The laws of physics did not prevent humans from manipulating individual atoms. There was no fundamental barrier to building machines, circuits, and structures at the atomic scale. The problem was purely practical: we lacked the tools. But tools could be built. You could store the entire *Encyclopaedia Britannica* on the head of a pin. You could build computers vastly smaller and faster than anything then imaginable. You could construct tiny machines that operated inside the human body, repairing cells, clearing arteries, destroying tumours.

"The principles of physics, as far as I can see, do not speak against the possibility of manoeuvring things atom by atom," Feynman told his audience. "It is not an attempt to violate any laws; it is something, in principle, that can be done; but in practice, it has not been done because we are too big."

The talk was visionary. It was also, for many years, ignored. The engineers and physicists of the 1960s and 1970s had other preoccupations — transistors to shrink, rockets to launch, reactors to build. Feynman's room at the bottom remained a thought experiment, a parlour trick of the imagination. It would take more than two decades for the first practical tools to arrive.
In 1981, Gerd Binnig and Heinrich Rohrer, at IBM's Zurich Research Laboratory, invented the scanning tunnelling microscope. A needle with a tip sharpened to a single atom was brought within a nanometre of a surface. A voltage was applied. Electrons tunnelled across the gap — a quantum effect that occurs when particles pass through barriers they should not, classically, be able to cross. The tunnelling current was exquisitely sensitive to distance. By scanning the tip across a surface and measuring the current at each point, you could map the atomic terrain below.

For the first time, humans could see individual atoms. Binnig and Rohrer won the Nobel Prize in Physics in 1986.

But seeing atoms was only the beginning. In the autumn of 1989, Don Eigler, at IBM's Almaden Research Center in San José, discovered that the STM could do more than observe. By adjusting the voltage and bringing the tip close enough, he could pick up an atom, drag it across a surface, and set it down elsewhere. Eigler and his colleague Erhard Schweizer cooled a nickel surface to four kelvins, deposited thirty-five xenon atoms onto it, and over twenty-two hours nudged them one by one into position. When they finished, the atoms spelled three letters: I-B-M.

The image, published in *Nature* in April 1990, became one of the most famous photographs in the history of science. Feynman had been right. Atoms could be moved. Structures could be built at the bottom.

Meanwhile, K. Eric Drexler at MIT was thinking not about moving atoms but about building with them. In 1986, the same year Binnig and Rohrer won their Nobel, Drexler published *Engines of Creation: The Coming Era of Nanotechnology*. He imagined molecular assemblers: nanoscale machines capable of picking up individual atoms and bonding them according to instructions, like three-dimensional printers at the atomic scale. These assemblers could build anything — a diamond, a computer chip, a steak dinner — from atoms up, with zero waste.
They could also copy themselves. Disease would be conquered by nanoscale robots patrolling the bloodstream. Material scarcity would end. It was magnificent.

It was also, for practical purposes, a fantasy. Not because the physics was wrong — Drexler's core argument, like Feynman's, was grounded in thermodynamics and quantum mechanics — but because the engineering was impossibly hard. Building a molecular assembler required controlling chemical reactions with single-atom precision, in three dimensions, at room temperature, billions of times per second, without errors. The gap between Drexler's vision and actual capabilities in the 1990s was roughly that between Leonardo da Vinci's sketches of flying machines and the Apollo programme.

And so nanotechnology advanced — but slowly, and in directions Feynman and Drexler had not foreseen. Chemists learnt to synthesise nanoparticles with unusual optical, electronic, and catalytic properties. Gold nanoparticles turned red. Silver nanoparticles killed bacteria. Carbon nanotubes proved stronger than steel and more conductive than copper. Quantum dots emitted light of different colours depending on their size, and found their way into television screens and medical imaging probes.

These were genuine achievements. But they were achievements of *discovery* and *synthesis*, not of *design*. A chemist making gold nanoparticles was not placing atoms according to a blueprint. She was mixing reagents, controlling temperatures, adjusting pH, and hoping the laws of self-assembly would produce particles of roughly the right size. The process was closer to cooking than to architecture.

The dream of engineering at the bottom — of building structures atom by atom according to a plan — remained out of reach. The tools existed to see atoms. The tools existed to move them, one by one, at cryogenic temperatures.
But no tool existed to *design* at that scale: to look at a problem, imagine a molecular solution, and specify its structure down to the last atom.

That tool, it turned out, was not a microscope or a manipulator. It was an algorithm.

## The Machines That Learnt to See Molecules

The chapter of artificial intelligence that matters for our purposes begins around 2012, at the University of Toronto. That year, a neural network called AlexNet, designed by Krizhevsky, Sutskever, and Hinton, won the ImageNet visual recognition challenge by a margin so wide it ended the debate about whether deep learning was a serious approach to machine perception. AlexNet could look at a photograph and identify what was in it. It was not perfect. But it was vastly better than any previous system.

What made deep learning work was not new theory. The mathematics of neural networks had been understood since the 1980s. What changed was scale. Larger networks, trained on larger datasets, using the parallel processing power of graphics cards originally built for video games — these brute-force ingredients, combined with a handful of architectural innovations, produced systems that could learn patterns of a complexity no human programmer could have specified by hand.

By the late 2010s, deep learning had conquered image recognition, speech recognition, translation, and game playing. AlphaGo defeated the world champion of Go in 2016. GPT-3 generated fluent prose on almost any topic in 2020. But these were, in a sense, warm-up acts. The systems were learning from human-generated data and learning to mimic or surpass human performance on human tasks.

The real shift came when researchers pointed these tools at the natural world. In July 2021, DeepMind released AlphaFold2, a system that could predict the three-dimensional structure of a protein from its amino acid sequence. This problem had tormented biochemists for half a century.
Proteins are long chains of amino acids that fold into intricate shapes, and the shape determines the function. But predicting shape from sequence had proved fiendishly difficult. Cyrus Levinthal calculated in 1969 that a modest protein of 100 amino acids could adopt more configurations than there are atoms in the observable universe. A protein does not try them all — it folds in milliseconds, guided by chemical bonds and thermodynamics. But simulating that process had defeated the best supercomputers.

AlphaFold2 solved it — not by simulating physics step by step, but by learning the relationship between sequence and structure from a database of known proteins. Its predictions were accurate to within an angstrom, roughly the diameter of a hydrogen atom. In 2022, DeepMind released predicted structures for over 200 million proteins. The work won the Nobel Prize in Chemistry in 2024 for Demis Hassabis and John Jumper, shared with David Baker for his complementary work on protein design.

AlphaFold was proof that AI could learn the rules governing molecular structure — rules encoded in quantum mechanics, thermodynamics, and evolution — and apply them beyond any human expert's reach. If AI could predict a protein's shape, could it design new proteins? New drugs? New materials?

The answer, which arrived with gathering speed through 2023, 2024, and 2025, was yes. Generative models — the same class of algorithms behind image generators like DALL-E — were adapted to work in molecular space. Instead of generating images of cats from photographs, these systems generated molecular structures from datasets of known molecules.

A team at MIT, led by Professor Regina Barzilay, designed drug-like molecules with specified properties in minutes rather than the months required by traditional medicinal chemistry. Researchers at the Chinese Academy of Sciences built a model that designed catalytic nanoparticles optimised for splitting water into hydrogen and oxygen.
A group at the Indian Institute of Science in Bangalore identified candidates for next-generation solar cells that no one had previously considered.

In each case, AI did not merely speed up molecular design. It changed the nature of the process. Where human chemists had worked by intuition, analogy, and laborious trial and error, AI systems navigated vast mathematical spaces of possibility, guided by patterns learnt from millions of examples. They explored regions of chemical space that human chemists had never visited — not because those regions were