
Army of None: Autonomous Weapons and the Future of War (edition 2019)

by Paul Scharre (Author)

Members: 200 · Reviews: 6 · Popularity: 135,482 · Average rating: 3.79 · Mentions: 1
We are witnessing the evolution of autonomous technologies in our world. As with much technological evolution, military needs drive much of this development. Paul Scharre has done a remarkable job of explaining autonomous technologies and how military establishments embrace autonomy: past, present, and future. A critical question: “Would a robot know when it is lawful to kill, but wrong?”

Let me jump to Scharre’s conclusion first: “Machines can do many things, but they cannot create meaning. They cannot answer these questions for us. Machines cannot tell us what we value, what choices we should make. The world we are creating is one that will have intelligent machines in it, but it is not for them. It is a world for us.” The author does an admirable job of showing what an autonomous world might look like.

Scharre spends considerable time defining and explaining autonomy; here’s a cogent summary:
“Autonomy encompasses three distinct concepts: the type of task the machine is performing; the relationship of the human to the machine when performing that task; and the sophistication of the machine’s decision-making when performing the task. This means there are three different dimensions of autonomy. These dimensions are independent, and a machine can be “more autonomous” by increasing the amount of autonomy along any of these spectrums.”

These two quotes summarize some concerns about making autonomous systems fail-safe. (Spoiler alert: it can’t be done…)
“Failures may be unlikely, but over a long enough timeline they are inevitable. Engineers refer to these incidents as “normal accidents” because their occurrence is inevitable, even normal, in complex systems. “Why would autonomous systems be any different?” Borrie asked. The textbook example of a normal accident is the Three Mile Island nuclear power plant meltdown in 1979.”

“In 2017, a group of scientific experts called JASON tasked with studying the implications of AI for the Defense Department came to a similar conclusion. After an exhaustive analysis of the current state of the art in AI, they concluded: [T]he sheer magnitude, millions or billions of parameters (i.e. weights/biases/etc.), which are learned as part of the training of the net . . . makes it impossible to really understand exactly how the network does what it does. Thus the response of the network to all possible inputs is unknowable.”

Here are several passages capturing the future of autonomy. I’m trying to summarize a lot of the author’s work into just a few quotes:

“Artificial general intelligence (AGI) is a hypothetical future AI that would exhibit human-level intelligence across the full range of cognitive tasks. AGI could be applied to solving humanity’s toughest problems, including those that involve nuance, ambiguity, and uncertainty.”

““intelligence explosion.” The concept was first outlined by I. J. Good in 1964: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” (This is also known as the Technological Singularity)

“Hybrid human-machine cognitive systems, often called “centaur warfighters” after the classic Greek myth of the half-human, half-horse creature, can leverage the precision and reliability of automation without sacrificing the robustness and flexibility of human intelligence.”

In summary, “Army of None” is well worth reading to gain an understanding of how autonomous technologies impact our world, now and in the future.
  brewbooks | Jul 8, 2019 |
Showing 6 of 6
This one was a tough read. Not because of the subject, but because of the structure of the work. It reads like a collection of separate pieces from various conferences and seminars gathered into a book. Because of this, much like modern TV documentaries, things tend to repeat, and this seriously slows the reading pace.

I won't go into details here (the title says it all :)), but the author truly covers everything related to modern automated combat systems - from smart missiles to combat platforms (like AEGIS, anti-aircraft systems, and of course the ubiquitous U(C)AVs). He starts by differentiating the various systems and the human role in them, then covers the role of AI (and, thank God, the author is realistic when it comes to general AI), the plans and visions for developing future war machines (with everything coming from the horse's mouth, so to speak), the opposition to autonomous war machines and the reasons behind it, the legal implications, and whether it is even possible to ban the development of autonomous war machines.

Some parts are outright naive (like the land-mine and cluster-munition bans: land-mines are still used everywhere, and cluster munitions are used even by "responsible" parties, as a certain shameful French incident involving the delivery of cluster munitions in a recent conflict shows), but throughout most of the text the author is aware that the constant mantra of "but they might be doing it" will always drive the further development of ever more dangerous weapons. Unfortunately there is no way around this, especially with weaponry that has not been fully tested in combat (remember that gas is taboo now only because of the effects of its use in WW1, where the wind could disperse gas back over the friendly side in a matter of minutes - biological and chemical weapons generally have a tendency to backfire).

One thing annoyed me: the constant talk of "responsible" armies (like the US and Western armies) versus "authoritarian" states (in quotes because these days the label just means the US and its allies disapprove of a given country; otherwise, for example, Arab countries would constantly have been marked as such and shunned by the same parties since the beginning of the modern age) is kind of silly. As far as I know, Agent Orange and various defoliants were used by a "responsible" army, and I have a feeling that families obliterated by UCAVs targeting assumed high-priority targets across the Middle East would not agree with that "responsible" attribute. And wouldn't you agree that sponsoring secret bio-chemical labs in certain countries doesn't show much responsibility either?

Because let's be fair: every army fights to win and will use everything at its disposal to achieve victory. Everything else is a side effect for analysts and theoreticians to build their academic careers on, condemning something that has been water under the bridge for at least a decade. And because of this, autonomous weapons will arrive and remain on the battlefield. The only thing we can be grateful for is that, for decades (if not centuries) ahead, these will be sturdy, relatively simple weapon systems with narrow intelligence for a specific field of action, for no other reason than to alleviate the danger of loss of control.

For the above-mentioned unnecessary political comments I take one star off.

The rest of the book is truly an excellent, thorough analysis of the technology and applications of remotely controlled and autonomous combat systems and weapons. Considering the scope and level of information, I think this might be one of the most complete (if not the most complete) books on the subject I have read.

Highly recommended.
  Zare | Jan 23, 2024 |
This book takes a pretty comprehensive look at autonomous weapons; it's both a primer and a review of the technical, tactical, strategic, and moral and ethical issues concerning the use of automated weapons, from human-guided systems to self-guided weapons. You could say "from stones to Skynet," because Scharre refers to the Terminator series regularly. Chunks of this book are so filled with techno-babble, military jargon, and acronyms that they are difficult to read, which is most unfortunate given the nature of the weapons Scharre discusses and the issues those weapons raise. I made the effort to read them carefully, and I am glad I did.
  nmele | Jan 23, 2022 |
Autonomous weapons are no longer a matter of science fiction. They are among us, and, what's even more frightening, some of them can be bought relatively cheaply or built quite easily.

Army of None is not just facts and figures. It's a tale about the world we live in. What we make of it is completely in our hands. For now.
  jakatomc | Dec 27, 2020 |
Fascinating overview of the subject of autonomous weapons. The ethical parts of this are quite complex. The author spends a lot of time trying to propose pathways forward. I appreciate the effort, and maybe I am a cynic, but I just kept thinking that every country is gonna build this crap, and it's gonna get messy. It's gonna be a keeping-up-with-the-Joneses race. On a side note: the Sensor Fuzed Weapon. Who the f thought up something so crazy! OMG.
  bermandog | May 16, 2020 |
This book, written by a non-technologist with extensive military experience, describes the intersection of artificial intelligence with United States military affairs. It uses terms like “autonomy” and “semi-autonomy” extensively. Autonomous weapons are weapons that can identify their own targets. Semi-autonomous weapons can track pre-identified targets (that is, targets previously identified by humans). Semi-autonomous weapons are currently in use; no autonomous weapons are known to be in use.

The line between these two is currently blurring. This is not due to Department of Defense research (through agencies like DARPA), but to research in artificial intelligence (AI) in the commercial sphere. Computers are becoming “intelligent.” This book explores what that means and whether computers can be considered “alive.” It does not take this excursion as an academic exercise but rather as an exploration into the future of warfare.

As a technologist, I found myself wishing the author were more optimistic. My attitude towards AI is very positive, and I see its progress as inevitable. This author keeps admonishing the reader that humans must remain “in the loop” in military applications so that they can make the ultimate decision whether or not to go for a kill. Again, as a technologist, I see human involvement as more-or-less inevitable. We humans will find a way to make increasingly better use of artificial intelligence, because that's what we've done with other technologies throughout thousands of years of human history.

We must – must – continue this work. I'm not scared of what's ahead. It's an opportunity for people like me to continue to work and to shape the future. I'm much more scared of our prospects for the future if countries like the United States stop research on military applications while countries like Russia continue. The field of AI will continue to progress because of its promise in other applications. The only real question is to what extent the military will be “in the loop.” I'd rather we focus our energies than follow a policy of appeasement towards those with a worse track record on human rights.

Overall, this book achieves its purpose and communicates its message clearly. Those interested in military affairs or technology should pay attention.
  scottjpearson | Jan 25, 2020 |


Rating

Average: 3.79 (1★: 1 · 3★: 5 · 3.5★: 3 · 4★: 12 · 4.5★: 2 · 5★: 3)
