Secret AI Robot Soldiers in Direct Conflict

Taffy Das
9 min read · Feb 5, 2024


Note: If you would like to watch a video version of this article with more detail and visual examples, then click on the link below to watch it. Enjoy!

In this article, we’ll explore the vast, secretive ecosystem of autonomous weapons, which at this point seem inevitable. The latest boom in AI has seen renewed interest in the field across several industries. One that few have been talking about, but that has been bubbling under our noses, is the arms industry: AI-assisted lethal weapons, also known as “slaughterbots” or “killer robots.” They are designed to identify and eliminate targets without human intervention. Some of these weapons are ones you’ve never seen or heard of. It’s totally insane! If you’re ready to learn about all the crazy and complicated projects surrounding artificial intelligence, robots, and war, then so am I. Let’s dive right in.

There’s an AI arms race going on right now. Some are calling it the Second Cold War, although critics think that’s propaganda, a narrative that will ultimately undermine effective AI collaboration among nations. The first nation to achieve complete AI dominance will rule them all; at least that’s the idea. The USA, Russia, and China are among the biggest countries in this race. Russian President Vladimir Putin stated, “Artificial intelligence is the future, not only for Russia but for all humankind. Whoever becomes the leader in this sphere will become the ruler of the world.” Those are very powerful and symbolic words. He also said that it’s better to collaborate on AI efforts than to have one country monopolize the technology. China aims to be the world leader in AI by 2030 by creating a $150 billion industry, and the USA is pushing harder than ever on lethal autonomous weapon systems. Geopolitical and military tensions have been driving this AI arms race since the mid-2010s. The competition for superior AI isn’t just about military might; it’s also about economic and technological dominance.

The Sea Hunter is one example of a war machine from the US military: an autonomous warship designed to operate at sea without a single crew member. It learns to navigate the water at different speeds and can identify objects on its own using AI. As early as February 2017, Russia was working on AI-guided missiles that could decide to switch targets mid-flight. Imagine that! Missiles that can follow a target anywhere. Combat drones that operate in clusters are already deployed on certain warfronts. Such systems can work alone or collaborate with humans at scale and with super-high precision. In all of this, there are obvious safety concerns. The race to develop advanced AI may lead to cutting corners on safety, resulting in increased algorithmic bias, or, in simpler terms, regrettable errors. There is also the possibility of losing control over AI systems, especially if they reach very sophisticated levels. Additionally, the consolidation of power and technological advantage in the hands of one group could threaten global stability. Because of these concerns, attempts to regulate autonomous weapons, and AI in general, are emerging. In 2015, over 26,000 citizens and AI researchers signed an open letter calling for a prohibition on lethal autonomous weapons systems. The signatories included physicist Stephen Hawking, Tesla CEO Elon Musk, and Apple co-founder Steve Wozniak, among others. With the way things are going, though, a complete ban is very unlikely. We are way past that stage.

As tensions rise among the superpowers, sanctions on technological innovations are being used as tools of negotiation and power. In the last couple of months, the US has prohibited the sale of advanced semiconductor chips and chip-making equipment to China. TSMC, the world’s most valuable semiconductor company with over 50% of global foundry sales, and Nvidia, the dominant supplier of AI chips, are caught in the crosshairs: their inability to sell to China cuts into their overall profit margins. China, on the other hand, is struggling to keep up with AI advancements because it now has access only to subpar AI hardware. It is forced to scale up homegrown chip-making in a short amount of time, which is no easy feat. So you see, there are a lot of chess pieces being moved around, and all of this is part of the race to AI military dominance.

Let’s now take the time to talk about the shot callers in AI warfare and the kinds of stealth tech they have. DARPA is one such organization, if not THE organization. Formed in 1958, DARPA is the research wing of the U.S. Department of Defense, which means it works on futuristic tech, some of which is declassified and some of which won’t see the light of day for a very long time. The agency is responsible for developing emerging technologies for military use. It was created in response to the Soviet launch of Sputnik 1, the first artificial satellite successfully placed in orbit around the Earth, which triggered the space race between the two countries. DARPA is credited with innovations like weather satellites, GPS, drones, stealth technology, and even contributions to Moderna’s COVID-19 vaccine. It has played a crucial role in space projects, ballistic missile defense, nuclear test detection, and the creation of ARPANET, the basis for the Internet we all use today. All we can say is that whatever you’ve seen in a sci-fi movie, DARPA has probably worked on it. From Shakey the robot in 1966 to the four-legged Cheetah that can outrun Usain Bolt, DARPA’s robotic projects are insane. They’ve worked on bullets that can change direction in flight, self-calculating gunscopes, drones that can stay airborne for years, and contact lenses that enhance soldiers’ situational awareness. Like any other R&D organization, some of these projects succeed and some are scrapped after a few years of development.

The US government also hands out contracts to private entities working on super-advanced systems that can put the military ahead of everyone else. Palantir is at the forefront of such government contracts. The company was founded in 2003 by American billionaire Peter Thiel, among others. It works on a number of projects, one of which is Palantir Gotham, used by counterterrorism analysts in the Department of Defense and the U.S. Intelligence Community. Palantir also holds a Pentagon contract known as Project Maven. This project was initially awarded to Google to use the company’s AI to analyze drone footage, but in 2018, employees protested renewing the contract, fearing it would lead to surveillance and unlawful data collection. Some employees preferred that Google abandon military work altogether, citing concerns over transparency and potential harm. In April 2023, Palantir launched its Artificial Intelligence Platform (AIP), designed to integrate large language models like OpenAI’s GPT-4 into privately operated networks. But here’s the twist: they’re applying it to modern warfare! In a video demo, a military operator uses a ChatGPT-style digital assistant to deploy drones, strategize tactical responses, and jam enemy communications. It’s like having a virtual general at your fingertips! AIP’s operation is based on classified-system deployment, user-controlled scope and actions, and what Palantir calls industry-leading guardrails to prevent unauthorized actions. Though the human operator seems to follow AIP’s suggestions, there’s a “human in the loop” to prevent any mishaps. But the demo leaves some questions unanswered, like how they’ll stop the system from “hallucinating” facts. Obviously, the integration of AI into warfare raises crucial questions about ethics, legality, and the future of human-AI collaboration. Unlike other companies, Palantir is fully embracing the AI military-industrial complex.
Its CEO, Alex Karp, has boldly stated that if the US doesn’t continue to pursue advanced technology in war, the country may be severely punished for it. What do you think about the ethics and legality of such systems? Many people are split on this approach; some refer to it as a modern-day Manhattan Project. Coming fresh off the Oppenheimer movie, that comparison leaves a bad taste in a lot of people’s mouths. However, if the US doesn’t develop such systems, it may fall behind competing superpowers, mainly China and Russia.
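The guardrail pattern Palantir describes for AIP can be sketched in a few lines. The toy Python snippet below (all names are hypothetical illustrations, not Palantir’s actual API) shows the two checks the demo emphasizes: a user-controlled action scope, and explicit human approval before anything executes.

```python
# Hypothetical sketch of a "human in the loop" guardrail like the one
# described for AIP: the AI assistant may *suggest* actions, but nothing
# executes without a scope check and an explicit operator sign-off.
# ALLOWED_ACTIONS and execute_suggestion are illustrative names.

ALLOWED_ACTIONS = {"deploy_recon_drone", "jam_communications"}  # operator-controlled scope

def execute_suggestion(action: str, approved_by_operator: bool) -> str:
    # Guardrail 1: the assistant can only propose actions inside its scope.
    if action not in ALLOWED_ACTIONS:
        return f"REJECTED: '{action}' is outside the authorized scope"
    # Guardrail 2: a human must sign off before anything runs.
    if not approved_by_operator:
        return f"PENDING: '{action}' awaits operator approval"
    return f"EXECUTED: {action}"

print(execute_suggestion("launch_strike", approved_by_operator=True))
print(execute_suggestion("deploy_recon_drone", approved_by_operator=False))
print(execute_suggestion("deploy_recon_drone", approved_by_operator=True))
```

Note that this structure constrains what the assistant can *do*, but not what it can *say*, which is exactly why the hallucination question from the demo remains open.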

China, especially, is big on surveillance. China’s government is investing heavily in surveillance networks like Skynet and Sharp Eyes. With over 500 million surveillance cameras deployed by 2021, China accounts for more than half of the world’s surveillance cameras. That’s absolutely absurd. It uses facial, voice, and gait recognition to identify citizens, and all of this surveillance feeds into a social credit system for residents. In terms of military prowess, China’s People’s Liberation Army (PLA) is harnessing the power of AI for warfare. This is cutting-edge stuff. Since 2009, China has been developing intelligent systems for aerial and marine warfare. It has pioneered the use of AI in target recognition and fire control, and it is distributing contracts for target detection, recognition algorithms, and multi-target fusion. China has many citizens working abroad, including at top firms in the USA; some return home and lead innovative research projects. The country is still hungry for talent and continues to lure workers and contractors.

In the US, by contrast, several cities and states have banned the use of facial recognition by law enforcement, and companies like Amazon and Microsoft have placed restrictions on selling the technology. Other countries are monitoring the situation, and some are even following suit. The U.S. wants to be more proactive in setting international AI and data standards that protect human rights and individual liberty. The challenge is getting everyone to agree, across governments, corporations, and academia, in order to build a responsible and thorough system.

We’ve only covered a fraction of all the technology being developed out there, specifically for the military. Billions of dollars are poured into the most outlandish systems. Corporations like Raytheon and Lockheed Martin are driving such projects for the military. Every AI advancement that comes out also contributes in some way to more adept, lethal weapons used in battle. Meta recently released its Segment Anything Model, which can identify objects in an image from a simple query. Advanced versions of this could be used to efficiently identify targets in videos, as we’ve already seen.

Cicero is another mind-blowing AI from Meta that plays the game of Diplomacy and actually defeats humans. The AI combines advanced strategic reasoning and natural language to work effectively with humans. It can understand other players’ objectives, propose shared goals, and communicate with clear intent. The players in this scenario represent countries like Italy and France. Cicero predicts what moves other countries are likely to make. It’s all about understanding and outsmarting your opponents and their actions. In the future, a system like this could be used to advise heads of state on negotiating tactics with foreign countries, especially during times of heightened tension. I won’t go into too much detail here; I covered this in another video, which I’ll share in the descriptions below. Make sure to check that out too.

Robots are getting better, too. Every fun Boston Dynamics demo brings us closer to more agile robots that can fully replace humans on the battlefield. Other companies make robots that mimic animal movements and agility. I’m talking about robots that can fly like birds, swim like dolphins, and even move like ants. You could infiltrate enemy territory without anyone even knowing it. It’s crazy out there, and if you don’t pay attention, you won’t notice all these advancements.
Some cities, like New York and Dallas, have even deployed crime-fighting bots to assist in enforcing the law. I suspect more cities will adopt this, and eventually citizens will learn to live with robots walking the streets.

The case for robots on battlefields is that they can be deployed in high-risk areas, reducing human casualties. These machines operate without fatigue, enhancing performance and productivity, and with advanced targeting systems, robots could minimize collateral damage. Fewer humans would lose their lives if autonomous agents were deployed. They can also be designed for various terrains and conditions that human soldiers might struggle with. On the other hand, there are several cases against using robots in war. Who’s responsible if something goes wrong? Malicious actors may also hack or abuse autonomous agents. This leads to a tricky moral maze: autonomous agents may fire at will, endangering innocent people. Many efforts have been made to address this by having a human approve all lethal actions. Robots might also lack the empathy and human reasoning that often guide critical decisions. Imagine trying to convince a robot or plead your case; it would fall on clunky metal deaf ears. AI-assisted war machines are complex, and there are still many unanswered questions. How do we balance the pursuit of technological advancement with the ethical implications of autonomous killing machines? What are the long-term consequences of an AI arms race for international relations and global stability? How will leadership in AI shape the future of global economic and military power? At this point, it seems we have more questions than answers.
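The safeguard mentioned above, having a human approve all lethal actions, amounts to a conjunction of hard conditions: any single doubt should default to not firing. Here is a toy illustration of that logic in Python (a thought experiment, not any real weapons system; the `Detection` fields and thresholds are invented for the example):

```python
# Toy illustration of the restraint logic described in the text: engage only
# when recognition confidence is high, no civilians are detected nearby, AND
# a human has approved. Any failed condition defaults to "do not fire".
from dataclasses import dataclass

@dataclass
class Detection:
    label: str              # what the recognition model thinks it sees
    confidence: float       # classifier confidence, 0.0-1.0
    civilians_nearby: bool  # output of a separate scene-analysis check

def may_engage(d: Detection, human_approved: bool, threshold: float = 0.95) -> bool:
    # Every condition must hold simultaneously; there is no override path.
    return (
        d.label == "hostile_vehicle"
        and d.confidence >= threshold
        and not d.civilians_nearby
        and human_approved
    )

# Only the first case passes every check.
print(may_engage(Detection("hostile_vehicle", 0.97, False), human_approved=True))
print(may_engage(Detection("hostile_vehicle", 0.97, False), human_approved=False))
print(may_engage(Detection("hostile_vehicle", 0.80, False), human_approved=True))
```

Even this trivial sketch surfaces the hard questions from the paragraph above: who sets the threshold, who audits the civilian-detection model, and who is accountable when any of those inputs is wrong.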

It’s a brave new world out there, and AI is at the forefront. The fusion of technology and military strategy will reshape the battlefield forever. It’s inevitable.

Thanks for reading!

Resources

Related videos:

Unreal AI Negotiates w/ YOU & WINS | DeepMind & Meta AI CICERO

An Uncensored History of DARPA



Written by Taffy Das

Check out more exciting content on new AI updates and intersection with our daily lives. https://www.youtube.com/channel/UCsZRCvdmMPES2b-wyFsDMiA
