A mini-philosophy of technology (3): Computers as our ultimate inferential prostheses

If you’ve been following along with our philosophical journey, you’ll remember that we ended the last article with a pretty useful idea: the concept of “inferential prostheses.” These are tools that extend our ability to draw consequences from information, to figure things out, to know what to do. Language was the first and most fundamental of these prostheses, but as you might have guessed, it wasn’t the last. In fact, the last few decades of technological progress have been all about building better and better inferential prostheses. The computer revolution, and everything that came with it, from smartphones to artificial intelligence, has given us tools that multiply our ability to draw consequences on a scale that would have seemed like magic to our grandparents. So let’s take a look at what’s actually happening inside those sleek little devices we carry in our pockets, and what it means for us as human beings.

Photo: camilo jimenez / Unsplash

Here’s the thing about modern computers: at their most basic level, they’re just machines that do really simple things really, really fast. “Really simple things” meaning arithmetic: addition, subtraction, and the like. That’s it. A computer processor takes numbers, performs mathematical operations on them according to a set of rules (the program, or algorithm), and spits out other numbers.

But here’s where it gets interesting. Those input numbers can represent anything. They can represent the pixels on your screen, the sound waves coming through your microphone, the temperature in your room, the position of a robotic arm, the sequence of a DNA strand. And those output numbers can be translated into anything too—they can become images, sounds, physical movements, answers to questions, predictions about the future.

So when you tap on your smartphone screen, what’s happening? Your tap gets converted into numbers. Those numbers get processed by algorithms—sets of mathematical rules. The resulting numbers get converted into actions: maybe sending a message to a friend, maybe opening an app, maybe adjusting your volume. And it all happens so fast that it feels instantaneous.
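That numbers-in, rules, numbers-out pipeline can be sketched in a few lines of code. This is a deliberately toy illustration, not how any real operating system handles touch input: the screen regions and the actions they trigger are invented for the example.

```python
# A toy sketch of the "numbers in, rules, numbers out" pipeline.
# A tap arrives as a pair of numbers (screen coordinates); a rule set
# (the "algorithm") maps those numbers to an action. The regions and
# actions below are made up for illustration.

def handle_tap(x: int, y: int) -> str:
    """Apply a simple rule set to tap coordinates and return an action."""
    if y < 100:
        return "open notifications"   # tap near the top of the screen
    elif x < 200:
        return "launch messaging app" # tap on the left side
    else:
        return "adjust volume"        # tap anywhere else

print(handle_tap(50, 300))  # a tap on the lower-left of the screen
```

The point isn’t the specific rules, it’s the shape of the process: every interaction with a computer, however rich it feels, bottoms out in numbers being transformed by rules into other numbers.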

This is why calling computers “intelligent machines” isn’t really a metaphor. It’s a pretty literal description. Intelligence, in its most basic sense, is the ability to draw consequences from information. A sunflower turning toward the sun is drawing a consequence (light means grow this way). A squirrel remembering where it buried nuts is drawing a consequence (this spot means food). A person solving a math problem is drawing consequences. And a computer calculating your restaurant recommendation based on your past orders? Also drawing consequences. The difference is speed and scale, not kind.

We hear all the time that technology is progressing fast. But until you look at the actual numbers, it’s hard to grasp just how insane this progress has been. Let’s put it in perspective. In the last two hundred years, maximum travel speed has increased by about a hundred times. That’s impressive—from horse and carriage to high-speed trains, airplanes and spaceships. Human life expectancy has increased by something like two to three times. Average income in developed countries has increased by ten to twenty times. The amount of cargo a single ship can carry is about a thousand times larger than in Napoleon’s time. A farmer can plow about fifty times more land in a week than two centuries ago.

All of these are remarkable achievements, but they absolutely pale in comparison to what’s happened with computers. In the last fifty years alone, the processing power of computers—measured in FLOPS, or floating point operations per second—has increased by a factor of about one million. The most powerful computer today can do in one second what the most powerful computer of the early 1970s would have taken nearly two weeks to do. If similar progress had happened in other areas, we’d be expecting to live about 30 million years, we’d travel from Madrid to Barcelona in less than a second, and we’d all be as rich as Amancio Ortega, the founder of Zara.
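The “nearly two weeks” figure above is just the million-fold factor converted into time, which a couple of lines of arithmetic make explicit:

```python
# Sanity-checking the scale claim: if computers are roughly a million
# times faster, then what takes 1 second today took about a million
# seconds in the early 1970s.
factor = 1_000_000            # approximate increase in FLOPS over ~50 years
seconds_then = factor * 1     # 1 second of today's work, at 1970s speed
days_then = seconds_then / 86_400  # 86,400 seconds in a day
print(f"{days_then:.1f} days")     # about 11.6 days, i.e. nearly two weeks
```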

One of the most prominent recent examples is the large language model. LLMs are trained on billions of examples of human language—books, articles, websites, conversations—and they learn to predict what word comes next in any given sequence. That sounds simple, but something remarkable emerges from it. By learning to predict words, these models also learn to understand context, to follow instructions, to answer questions, to write poetry, to explain complex ideas, to translate between languages, and yes, to have conversations about technology (are you sure this text has not been created by an AI?). Just a few years ago, if you’d told someone you could have a back-and-forth conversation with a machine that understood what you were asking and responded intelligently, with the nuance of a fluent speaker, they’d have thought you were talking about science fiction.
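To make “predict the next word” concrete, here is the crudest possible version of the idea: count which word follows which in a tiny corpus, then predict the most frequent successor. Real LLMs use neural networks with billions of parameters rather than counts, but the training objective—predict the next token—is the same in spirit. The corpus here is invented for the example.

```python
from collections import Counter, defaultdict

# A toy "bigram" next-word predictor: for each word, count the words
# that follow it, then predict the most common successor.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

Everything an LLM does—answering, translating, explaining—emerges from a vastly scaled-up version of this one humble task.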

Here’s something interesting about all this progress: most of it is designed to be invisible. Your smartphone is intentionally built to hide the insane complexity of what’s happening inside it. When you tap an icon, you don’t see the millions of calculations required to open an app. When you send a message, you don’t witness the journey of your words as they get chopped into packets, routed through servers, reassembled, and delivered. And when you chat with an AI, you don’t see the billions of parameters, the layers of neural networks, the massive training runs on supercomputers that made this conversation possible. It just feels natural—like talking to another person, or at least something vaguely person-shaped.

This is actually a lot like what happens in our own brains. You’re not consciously aware of the billions of neurons firing, the electrochemical signals racing along your nerves, the complex processing required to turn light patterns on your retina into the recognition of a friend’s face. You just see your friend and smile. The complexity is hidden so you can focus on what matters.

It’s tempting to look at the last fifty years and try to project the next fifty. If processing power keeps increasing at anything like the current rate, by the end of this century computers will be billions of times more powerful than they are today. The amount of data available for processing will be similarly astronomical. Even if the speed of increase slows by some factor, we probably cannot imagine what will come next. And I mean that literally—not as a figure of speech. The technologies of 2100 will likely be conceived, designed, and built by artificial intelligences that are themselves far more capable than any human engineer. We’ll be in the position of a chess novice watching Stockfish play: we can see that something brilliant is happening, but we can’t fully understand how or why. Think about it. Stockfish doesn’t just beat humans at chess—it plays in ways that humans find baffling. It makes moves that look like mistakes but turn out to be genius. It exploits patterns too subtle for any human to notice. Now imagine that kind of superiority applied to every field of human endeavor (or most of them): engineering, medicine, science, art. The results will be things we literally cannot conceive of today.

Remember Heidegger and Ortega from our first article? Heidegger worried that technology reduces everything to resources, including us. Ortega saw technology as the expression of our human project of self-invention. These new inferential prostheses—these computers and AI systems—bring that debate into sharp focus. Are we becoming resources for our own machines, feeding them data so they can get smarter and more useful? Or are we extending ourselves in ways that Ortega would celebrate, using these tools to become more fully human? The answer, as with most philosophical questions, is probably both. We are both shepherd and cyborg, both threatened and enhanced by our own creations.

What’s certain is that the inferential prostheses we’ve built are now so powerful that they’re starting to draw consequences we couldn’t draw ourselves. They’re finding patterns we couldn’t see, making predictions we couldn’t make, solving problems we couldn’t solve. And they’re doing it at speeds that make our own cognition look like molasses in January.

This doesn’t mean we’re obsolete. It means our relationship with our tools is changing. Language, our first prosthesis, didn’t make us less human—it made us more human. It opened up new worlds of thought, communication, and culture. These new prostheses will do the same, but in ways we’re only beginning to understand. The question isn’t whether to embrace them or reject them. The question is how to use them wisely—how to stay awake to their dangers while remaining open to their possibilities. Heidegger would have us step back and let things be. Ortega would have us step forward and keep inventing ourselves. After all, they’re our prostheses. They’re extensions of us. The future they’re building is, for better or worse, our future too.
