ISTM #62: Killer Puppies & The End of the World
"Sometimes we must go too far to know how far we can go."
--Heinrich Böll, Author
China recently introduced robotic dog soldiers. The four-legged, AI-powered critters require no feeding, make no mess, and can shoot and kill a person from 200 yards away.
China says the robot dogs fire only when humans order them to, making its soldiers safer in urban combat. Personally, I don’t believe that’s all the Chinese government is up to. I suspect it will use these robotic puppies as a new, improved way to suppress its 1.4 billion citizens without the messiness it experienced last time in Tiananmen Square.
Autonomous Warriors
As we discuss the ubiquitous world of AI, it seems to me that we are all standing at the base of a huge digital mountain soaring above the clouds. We know there is a summit, and we know the general direction, but we don’t yet know what we will find when we reach it, and we are not entirely certain of the best path upward.
We are all climbing as fast as we can in our rush to get to who knows where.
Tech futurists are often overly creative in predicting what will happen several years from now. They can take considerable license in forecasting either a promising or challenging future because few of us will remember just who forecasted what. But the consensus is that the robots of today will soon be as antiquated as the floppy disks of yore.
Sometimes I wonder. I get thrilled and chilled by videos and descriptions of the incredible robots of tomorrow, while the best I see in people’s homes is the Roomba, the autonomous vacuum cleaner. Meanwhile, futurists predict that nearly half the chores we perform today will soon be handled by household robots that will clean our homes and shop autonomously for groceries. They will wash our dishes, do laundry, and mop floors, at least until those dishes and floors become self-cleaning.
With all this new free time, I imagine robots will also entertain and educate us and our children, and even replace a sizable portion of our texting and emailing by communicating with each other on our behalf.
Experts seem hesitant to predict how much robotic caregiving there will be, perhaps because humans will always need some level of personalized human care or supervision, not to mention companionship. I see no time in the future when I would prefer a robotic puppy (without a rifle) to our own Jesse, even if his persistent night barking in the yard sometimes drives us bonkers.
Don’t get me wrong. I embrace the coming Robotic Age and have no doubt that it will provide more benefits than problems for people.
It seems to me that the Achilles heel of all this robotic prediction is the part of us that robots will not be able to emulate for years to come: the unique characteristics, such as passion, humor, and irony, that make us so very human.
I see nothing in the next ten years or so that would give robots the emotional intelligence, adaptability, and nuanced decision-making we possess. Robots may never fully understand why we smile or cry.
However, robots may soon be programmed to assist with certain aspects of caregiving, such as monitoring people and reminding them to take their medications. They also may be able to select entertainment and educational programs customized to our tastes and capabilities. Robots can help to educate children who cannot attend school for reasons of health, learning disability, or emotional challenges.
Robot Flocks
Our rifle-toting Chinese robot dogs are one of many early examples of how militaries will use AI and robotic technologies. Throughout history, the military has continuously searched for new and improved ways to kill enemies and protect civilians; AI and robotics will be pressed into that service seamlessly and perpetually.
According to Defense News magazine, the US military is building autonomous drones to attack enemy defenses. They are increasingly integrated into human-machine formations on the battlefield. Some examples:
Robotic Combat Vehicles (RCVs) used for reconnaissance, electronic warfare, breaching missions, and direct combat are becoming increasingly autonomous. Call me old-fashioned, but I always get nervous when I learn about killing machines being authorized to choose who they kill.
Swarm Robotics are hordes of tiny robots that collaborate on common goals. The military is experimenting with drone swarms that can confuse enemy defenses, execute coordinated attacks, and gather extensive surveillance.
Counter-drone systems detect and neutralize enemy drones, using electronic jamming and directed-energy weapons to disable or destroy them.
Mine-clearing robots safely clear minefields and explosive hazards. Sometimes, they are used to deploy explosives to create safe paths for troops.
Each of these is still being refined in terms of increased autonomy, enhanced coordination, advanced payloads, and Human-Machine Integration (HMI).
In my research, I read extensively about superior firepower, digital armor, speed, maneuverability, sensors, and decision-making abilities. Still, the overwhelming focus was on the drive toward autonomous killing machines.
What could possibly go wrong? I could write a book on the topic, but others have come before me. I encourage you to take a look at a couple of short stories that greatly influenced me in my college days: The Machine Stops (1909) by E.M. Forster and The Last Question (1956) by Isaac Asimov, which is a personal favorite. They both still stand up, or so it seems to me.
In The Machine Stops, Forster depicts a future in which humanity lives underground, entirely dependent upon a giant machine that caters to all its needs. People interact through video communication and have lost touch with nature and the surface world. The story serves as a warning about overreliance on technology and the loss of our humanity.
In The Last Question, Asimov follows a supercomputer that absorbs all of humanity’s knowledge while generation after generation asks it the ultimate question: can the dying of the universe be reversed? Again and again, it answers that there is insufficient data. Only after the last star has gone dark and humanity itself is gone does the machine finally solve the problem, and the story ends with it declaring, “Let there be light.” A machine that outlives its makers and inherits their world is a lesson that seems worth taking to heart.
Very often, history shows that ideas that are introduced as fiction become reality.
For example, there was a Cold War between the Soviet Union and the US in the 1950s and 60s. Both sides kept escalating their weaponry until each was capable of destroying the world many times over.
Right now, you can be sure that at least China and Russia are building autonomous weaponry that rivals the US's autonomous weaponry. I assume that any of the three could destroy humankind with AI-powered weapons at the drop of an algorithm.
It is well within the realm of possibility that one autonomous killing machine could decide to launch an attack because it misinterprets data. Of course, the other country’s autonomous weapons would respond, and thus an escalation begins that eliminates life on Earth in a very short time.
Now, I would agree that such a scenario is highly unlikely, but it is not impossible. In the Sci-Fi I read and watched back in college, I was introduced to space travel, nuclear submarines, talking robots, video calls, government eavesdropping, and environmental degradation.
All were fictions that are now realities.
The lesson I have learned is that yesterday’s fiction becomes tomorrow’s fact. This is why a robotic dog armed with a rifle scares the Hell out of me, and it should do the same for you. The dog itself may pose only a minimal threat to humankind, but as the late Steve Jobs said when he first looked at an iPhone prototype, “It’s a start.”
+++++