

Essay: The Legal and Moral Problems of Autonomous Strike Aircraft


The Navy’s unmanned X-47B lands aboard the aircraft carrier USS Theodore Roosevelt (CVN-71) on Aug. 17, 2014. US Navy Photo

The U.S. Navy’s move toward developing a carrier-based unmanned combat aircraft might eventually afford the service the ability to strike targets at long range, but ethical and legal questions linger should the Pentagon develop a fully autonomous system.

As currently envisioned, the Navy’s Unmanned Carrier Launched Airborne Surveillance and Strike (UCLASS) aircraft will be autonomous, but it will have a “man-on-the-loop,” according to Rear Adm. Mat Winter, the Naval Air Systems Command’s Program Executive Officer for Unmanned Aviation and Strike Weapons. But the UCLASS is not going to be the penetrating strike aircraft that many senior defense officials, academics and analysts had hoped for. In fact, serious legal and ethical dilemmas might arise if the Pentagon were to pursue an unmanned penetrating strike aircraft.

Many senior Pentagon officials—including Deputy Defense Secretary Bob Work—argue that if the United States is serious about its “pivot” to the Pacific, it needs a deep-penetrating unmanned strike aircraft that would launch from a carrier. But potential adversaries such as Russia and China are not stupid, and they are certain to attack the vulnerable data links that control such an unmanned bomber with electronic and cyber attacks.

One recently retired Navy official acknowledged that giving such a warplane full autonomy—to include launching weapons without prior human consent—might be the only effective way for a long-range unmanned strike aircraft to operate in a theater where the United States faces off against a near-peer potential adversary, but the prospect of such a system raises legal and moral questions.

Anti-Access Area Denial


The People’s Liberation Army’s DF-21D medium range ballistic missile, the so-called ‘carrier killer.’

In the Western Pacific, China is building up its anti-access/area-denial (A2/AD) capabilities—including communications jamming, cyber warfare and anti-satellite weapons. In the event of a conflict, Chinese forces are likely to attack the vital communications links that enable U.S. forces to operate cohesively. In those communications-degraded or communications-denied environments, unless a system is manned, autonomy might be the only way to go.

For the Navy there is an added dimension, as postulated by Jan van Tol, Mark Gunzinger, Andrew Krepinevich and Jim Thomas at the Center for Strategic and Budgetary Assessments: the service’s aircraft carriers no longer have a safe haven in coastal waters 200 nautical miles offshore. With rising threats to the aircraft carrier in the form of anti-ship cruise and ballistic missiles, those ships may be forced to stand off a significant distance—more than 1,000 nautical miles—from the enemy shoreline.

Additionally, with the proliferation of advanced integrated air-defense networks and low-frequency radars that can detect and track low-observable targets, existing stealth aircraft may not have the range or the survivability to operate in those theaters.

In that case, the best option for the Navy might be to develop a long-range unmanned strike aircraft with wide-band all-aspect stealth technology that would be able to persist inside even the densest of enemy air defenses. By necessity, given that such an advanced adversary would be able to deny or degrade communications significantly, such an aircraft would have to be fully autonomous. In other words, the unmanned aircraft would have to be able to operate independently of prolonged communications with its human masters and it would also need to be able to make the decision to release weapons without phoning home for a human operator’s consent.

Moreover, the U.S. Air Force faces basing challenges in the Western Pacific, as existing air bases such as Kadena and Misawa in Japan and Andersen Air Force Base in Guam are vulnerable to concerted air and missile attacks. A very stealthy long-range autonomous unmanned strike aircraft could be used to complement the service’s prospective Long Range Strike Bomber—going into places that are far too dangerous for a manned aircraft or performing missions like stand-in jamming from inside hostile territory.

The Autonomous Cost Equation


Contractors with Northrop Grumman responsible for operating the X-47B on the deck of USS Theodore Roosevelt (CVN-71) on Aug. 17, 2014. US Naval Institute Photo

While the initial cost of developing such an autonomous unmanned aircraft might be high, there could be significant savings over the longer term. An autonomous unmanned aircraft needs to be flown only occasionally during peacetime to keep up the proficiency of maintainers. Further, an autonomous aircraft has no need to fly training sorties or to practice—a computer can simply be programmed to do what needs to be done.

Additionally, such an autonomous unmanned aircraft would not need downtime between deployments—just the occasional depot-level maintenance overhaul. That means that the Navy—or the Air Force, if it bought some—would need only as many aircraft as required to fill the number of deployed carriers and account for attrition reserves and planes laid up in depot maintenance. There could also be significant personnel cost savings because a fully autonomous aircraft would not require pilots and the smaller fleet would require fewer maintainers.

The technology to develop and build such an aircraft mostly already exists. Most current unmanned aircraft like the General Atomics Aeronautical Systems MQ-1 Predator and MQ-9 Reaper are remotely controlled by a human operator. Others—like the Northrop Grumman MQ-4C Triton or RQ-4B Global Hawk—have far more autonomy but are not armed. Nonetheless, there are already a number of autonomous weapon systems that are either in service or that have reached the prototype stage that can engage hostile targets without human intervention.

Perhaps the two most obvious examples are cruise missiles and intercontinental ballistic missiles. Once those weapons are launched, they proceed autonomously to their preprogrammed targets without any human intervention.

If one were to imagine a U.S. Navy destroyer launching a Tomahawk cruise missile at a fixed target somewhere in the Western Pacific, there is a sequence of events that would be followed. The crew of the destroyer would receive orders to attack a particular target. The crew would then program that information into the missile. Once launched, the Tomahawk navigates its way to the target in a manner similar to a manned aircraft, but completely without human intervention.

Against a fixed target, for example a bunker or factory, a fully autonomous unmanned air vehicle would be very similar to a cruise missile. Like a Tomahawk cruise missile, the unmanned aerial vehicle (UAV) would receive a particular target location and instructions for how to engage that target with the correct weapons. Like the Tomahawk, the UAV would be able to navigate to that target completely autonomously. If the UAV were then to engage that fixed target with a Joint Direct Attack Munition (JDAM) or some other weapon, in practical terms there is no real difference between an unmanned aircraft and a cruise missile. The effect would be identical. The only difference is that the UAV could make a second pass, fly on to another target, or fly home to be rearmed. And it could be argued that, with its jet engine and wings, a Tomahawk is really just a small UAV on a one-way trip.
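
To make the cruise-missile analogy concrete, the sketch below walks through that fixed-target engagement sequence in Python. It is purely illustrative: the class, method names and the fuel threshold are hypothetical and do not represent any real system’s software.

```python
from dataclasses import dataclass

@dataclass
class Target:
    latitude: float
    longitude: float
    weapon: str  # e.g. "JDAM", paired with the target before launch

class StrikeUAV:
    """Illustrative sketch of the fixed-target engagement sequence described
    above; all names and behaviors are hypothetical."""

    def __init__(self, fuel_nm: float, weapons: list[str]):
        self.fuel_nm = fuel_nm      # remaining range in nautical miles
        self.weapons = weapons      # munitions loaded before launch

    def execute_mission(self, target: Target) -> str:
        # 1. The target and weapon pairing are programmed before launch,
        #    just as a Tomahawk is programmed aboard the destroyer.
        self.navigate_to(target)

        # 2. Engage the pre-approved fixed target without further human input.
        if target.weapon in self.weapons:
            self.release(target.weapon, target)

        # 3. Unlike a one-way cruise missile, the aircraft can recover and rearm.
        return "return to carrier" if self.fuel_nm > 1000 else "divert"

    def navigate_to(self, target: Target) -> None:
        ...  # GPS/INS waypoint following, analogous to cruise-missile guidance

    def release(self, weapon: str, target: Target) -> None:
        ...  # weapon release against the preplanned aimpoint
```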

Expecting the Unexpected


Chinese anti-air missile system.

The more challenging scenario comes when an unexpected “pop-up” threat, such as an S-400 surface-to-air missile battery, is encountered by an autonomous unmanned combat air vehicle (UCAV) during a wartime sortie. Human pilots are assumed to inherently have the judgment to decide whether or not to engage such a threat. But those human pilots are making their decisions based on sensor information that is being processed by the aircraft’s computer. In fact, the pilot is often entirely dependent upon the aircraft’s sensors and avionics to perform a combat identification of a contact.

The Lockheed Martin F-22 Raptor and F-35 Joint Strike Fighter epitomize this concept—especially in the realm of beyond-visual-range air-to-air combat. Both the Raptor and the F-35 fuse data correlated from the aircraft’s radar, electronic support measures and other sensors into a track file that the computer identifies as hostile, friendly or unknown. The pilot is entirely reliant upon the computer to determine a proper combat identification; it would be a very small technological step for the system to engage targets autonomously without human intervention.
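
A toy illustration of that kind of identification rule is shown below; the inputs and logic are invented for clarity, and real sensor-fusion and combat-identification software is vastly more elaborate.

```python
from enum import Enum

class Identity(Enum):
    HOSTILE = "hostile"
    FRIENDLY = "friendly"
    UNKNOWN = "unknown"

def classify_track(radar_match: bool, esm_match: bool, iff_friendly: bool) -> Identity:
    """Toy combat-identification rule: fuse radar, electronic support measures
    and IFF evidence into a single track identity."""
    if iff_friendly:
        return Identity.FRIENDLY
    if radar_match and esm_match:   # both sources agree the contact is a threat type
        return Identity.HOSTILE
    return Identity.UNKNOWN         # insufficient or conflicting evidence
```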

The air-to-ground arena is somewhat more challenging due to target location errors that are inherent in sensors and navigation systems (and also environmental effects and enemy camouflage). But with a combination of electro-optical/infrared cameras, synthetic aperture radar, ground moving target indication radar or even hyperspectral sensors, a computer can ascertain a positive combat identification of ground targets—assuming that the data being gathered is geo-registered. Once the computer determines a positive identification, either a manned or an unmanned aircraft can engage the target. But at the end of the day, the computer is still making the determination that a contact is hostile.

In fact, autonomous systems capable of identifying and attacking targets at their own discretion have existed in the past. One example is the Northrop AGM-136 Tacit Rainbow anti-radiation cruise missile, which was canceled in 1991. It was designed to be pre-programmed for a designated target area, over which it would loiter. It would remain in that designated box until it detected emissions from hostile radar. Once the Tacit Rainbow detected and identified an enemy emitter, the missile would zero in for the kill—all without any human intervention.

A later example is the Lockheed Martin Low-Cost Autonomous Attack System. The now-defunct miniature loitering cruise missile demonstrator was guided by GPS/INS to a target box. It would then use laser radar to illuminate targets and match them with pre-loaded signatures. The weapon would then go after the highest priority target while at the same time selecting the appropriate mode for the warhead to best engage the target autonomously without human intervention.
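
A minimal sketch of that loiter, detect and engage behavior might look like the following; every function passed in is a hypothetical placeholder, and the timing and priority logic are assumptions rather than details of either weapon.

```python
import time
from typing import Callable

def loiter_and_engage(detect_emitters: Callable[[], list],
                      matches_signature: Callable[[object], bool],
                      engage: Callable[[object], None],
                      loiter_seconds: float = 1800.0) -> bool:
    """Sketch of the loiter-detect-engage pattern attributed to Tacit Rainbow
    and LOCAAS: stay over a preprogrammed box, wait for an emitter that matches
    a preloaded signature, then attack without human intervention."""
    deadline = time.monotonic() + loiter_seconds
    while time.monotonic() < deadline:
        candidates = [c for c in detect_emitters() if matches_signature(c)]
        if candidates:
            engage(candidates[0])   # attack the first (highest-priority) match
            return True
    return False                    # loiter time expired with no valid target
```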

Other prominent examples include the Aegis combat system, which in its full automatic mode can engage multiple aircraft or missiles simultaneously without human intervention. Similarly, the shipboard Close-in Weapons System or Phalanx has an autonomous engagement capability.

Moral And Legal Questions


X-47B conducts its first night flight over Naval Air Station Patuxent River, Md., on April 10, 2014. US Navy Photo

What all of that means is that fully autonomous combat identification and engagement are technically feasible for unmanned aircraft—given sophisticated avionics and smart precision-guided weapons. But if fully autonomous unmanned combat aircraft are technically feasible, what of the moral and legal implications?

The Pentagon preemptively issued policy guidance on the development and operational use of autonomous and semi-autonomous weapons in November 2012. DOD Directive 3000.09 states: “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” But the policy does not expressly forbid the development of fully autonomous lethal weapon systems; it merely states that senior Pentagon leadership would closely supervise any such development.

In order to prevent what the Defense Department calls an “unintended engagement,” those who authorize or direct the operation of autonomous and semi-autonomous weapon systems are required to do so with “appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules and applicable rules of engagement,” the policy states.

Thus it would seem that the U.S. government views the use of autonomous weapon systems as legal under the laws of war—provided certain conditions are met. Indeed, a number of lawyers specializing in national security law have suggested that fully autonomous weapons are lawful. The responsibility for the use of such a weapon would ultimately fall to the person who authorized its employment—just as it would for any other manned weapon.

But there are those who are adamantly opposed to any fully autonomous weapon systems—organizations such as Human Rights Watch (HRW). In a November 2012 report titled “Losing Humanity: The Case against Killer Robots,” HRW called for an international treaty that would preemptively ban all autonomous weapons. In fact it is likely that the DOD policy guidance on the development of autonomous weapons stems from the conclusions of the HRW report.

The HRW report makes three recommendations. The first: “Prohibit the development, production and use of fully autonomous weapons through an international legally binding instrument.” The second: “Adopt national laws and policies to prohibit the development, production and use of fully autonomous weapons.” The third: “Commence reviews of technologies and components that could lead to fully autonomous weapons. These reviews should take place at the very beginning of the development process and continue throughout the development and testing phases.”

HRW asserts that autonomous systems are unable to meet the standards set forth under international humanitarian law. “The rules of distinction, proportionality and military necessity are especially important tools for protecting civilians from the effects of war, and fully autonomous weapons would not be able to abide by those rules,” the report states.

But critics, such as legal scholar Benjamin Wittes at the Brookings Institution, have challenged such statements. Wittes has written that there are situations where machines can “distinguish military targets far better and more accurately than humans can.” Indeed, those familiar with unmanned technology, sensor hardware and software can attest that this is the case.

If a computer is given a certain set of parameters—for example, a series of rules of engagement—it will follow those instructions precisely. If an autonomous weapon is designed and built to operate within the laws of war, then there should be no objection to its use. Under Article 36 of the 1977 Additional Protocol to the Geneva Conventions, weapons cannot be inherently indiscriminate and are prohibited from causing unnecessary suffering or superfluous injury. “The fact that an autonomous weapon system selects the target or undertakes the attack does not violate the rule,” Hoover Institution legal scholars Kenneth Anderson and Matthew Waxman wrote in their paper “Law and Ethics for Autonomous Weapon Systems.”
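
As a rough illustration of that point, rules of engagement can be expressed as machine-readable parameters that gate weapon release. The field names and thresholds below are invented for the example and are not drawn from any actual directive or system.

```python
from dataclasses import dataclass

@dataclass
class RulesOfEngagement:
    """Hypothetical machine-readable rules of engagement."""
    min_id_confidence: float       # required combat-ID confidence, 0.0 to 1.0
    max_collateral_estimate: int   # maximum acceptable civilian-harm estimate
    weapons_free: bool             # is autonomous release authorized at all?

def release_authorized(roe: RulesOfEngagement,
                       id_confidence: float,
                       collateral_estimate: int) -> bool:
    # The weapon is released only when every programmed constraint is met;
    # otherwise the system holds fire or refers the decision to a human.
    return (roe.weapons_free
            and id_confidence >= roe.min_id_confidence
            and collateral_estimate <= roe.max_collateral_estimate)
```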

Technology is continuously moving forward, and while autonomous systems may not be able to operate under all circumstances, it may only be a matter of time before engineers find a technical solution. Under many circumstances—with the right sensors and algorithms—autonomous systems would be able to distinguish lawful targets from unlawful ones, but that is not yet the case under all circumstances. Thus there are some limitations inherent to autonomous weapon systems for the time being.

However, those limitations will not always be there as technology continues its march forward and engineers continue to make progress. As Wittes correctly points out, “To call for a per se ban on autonomous weapons is to insist as a matter of IHL [international humanitarian law] on preserving a minimum level of human error in targeting.” Machines are generally far more precise than human beings.

Proportional Actions


B-52 bomber. US Air Force Photo

Along with being able to distinguish between targets, the law requires that combatants weigh the proportionality of their actions. “Any use of a weapon must also involve evaluation that sets the anticipated military advantage to be gained against the anticipated civilian harm,” Anderson and Waxman write. “The harm to civilians must not be excessive relative to the expected military gain.”

Though technically challenging, a completely autonomous weapon system would have to address proportionality as well as distinction. But the difficulty is entirely dependent upon the specific operational scenario. For example, while an unmanned aircraft could identify and attack a hostile surface-to-air missile system deep behind enemy lines or an enemy warship at sea—where there is little chance of encountering civilians—targets inside a highly populated area are more difficult to prosecute.

Some of the most difficult scenarios—which would not necessarily be a factor in a high-end campaign against an A2/AD threat—would be challenging for a human pilot, let alone a machine. For example, during a counterinsurgency campaign, if there were two school buses driving side by side in a built-up area, but one of the vehicles was carrying nuns and the other carrying heavily armed terrorists, it would be very difficult for a human pilot to determine which bus is the proper target until one of them commits a hostile act. The same would be true for an autonomous system—but in the near term, it could be a technological challenge.

The human pilot would also have to determine what kind of weapon to use—judging the proportionality. Does he or she select a 2,000-pound JDAM, a smaller 250-pound small diameter bomb, or the 20mm cannon, or do nothing because the risk of civilian casualties is too high? Likewise, an autonomous weapon system would need to be programmed to select an appropriate low-collateral-damage munition, or to disengage if the danger of civilian casualties were too great, once the target has been positively identified. But it would take time and investment before such an autonomous system could become a reality.
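
A toy version of that proportionality decision is sketched below; the risk thresholds, target categories and weapon choices are invented solely to show how such a selection rule could be parameterized.

```python
from typing import Optional

def select_weapon(target_hardness: str, civilian_risk: float) -> Optional[str]:
    """Pick the smallest effective weapon for the estimated civilian risk,
    or return None to disengage when the risk is judged too high."""
    if civilian_risk > 0.5:
        return None                         # risk too great: do not engage
    if civilian_risk > 0.2:
        return "20mm cannon"                # lowest collateral-damage option
    if target_hardness == "hardened":
        return "2,000-pound JDAM"           # penetrating weapon for bunkers
    return "250-pound Small Diameter Bomb"  # default low-yield precision weapon
```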

Baby Steps

Thus, for the near future, autonomous weapons would have to be developed incrementally starting with systems that could engage fixed targets and “obviously” military targets like surface-to-air missile sites or tank columns on the open battlefield during a conventional war. Likewise, in the maritime environment, where there are few civilians to speak of, autonomous systems could offer huge advantages with little in the way of any drawbacks.

Additionally, for the time being, autonomous weapons should not be utilized in complex scenarios—such as counterinsurgency—where there is a significant possibility of inadvertent civilian casualties or unintended collateral damage. It may also be unwise to use a fully autonomous UCAV for missions like close air support—particularly during “danger close” situations where friendly troops are in close contact with the enemy—until the technology has been proven operationally in other roles. Human pilots have a hard enough time with those types of missions.

While some technological limitations do exist at present, they are not likely to remain roadblocks forever. Autonomous technology is advancing rapidly and could one day be precise and reliable enough not only to distinguish correct targets but also to make proportionality judgments in complex scenarios based on parameters programmed into the machine. Those parameters would not be unlike the rules of engagement given to human operators. Cameras and software that can identify individual human faces already exist, for example. Once a target is precisely identified, it would not be a huge leap for an autonomous system to use a low-collateral-damage weapon to eliminate hostile targets while minimizing any harm to civilians.

Much of the objection to fully autonomous weapons seems to stem from a sense of “fair play” rather than from any real legal issues—most of which are likely to be overcome. But any time new weapons technology emerges, there is opposition from those who believe that the technology fundamentally unbalances war. Objections have been raised throughout history to new technologies—ranging from crossbows and longbows to machine guns and submarines—because the use of such weapons was considered “unfair” or “unsporting.” But ultimately, the use of such weapons became a fact of life. War is not a game, and as U.S. Air Force Col. Lawrence Spinetta, commander of the 69th Reconnaissance Group, said: “Isn’t there a moral imperative on the part of a nation to minimize danger for its soldiers and airmen?”

Indeed there is no legal requirement for war to be fair—in fact throughout history war has been anything but. “The law, to be sure, makes no requirement that sides limit themselves to the weapons available to the other side; weapons superiority is perfectly lawful and indeed assumed as part of military necessity,” write Anderson and Waxman.

  • Bob Walters

    Anything “autonomous” can eventually be hacked.

    • Pat Patterson

      What are your parameters for eventually? The enemy would have to know system information, etc., beforehand. In a large shooting war who is going to have the time to make rapid adjustments? Burst transmissions with pencil beams, frequency hopping and other methods could be used to limit hacking.
      If I’m a drone flying across West Texas how does the enemy know where I’m eventually heading or even what I’m supposed to attack if I emit no electronic signature?

      • Bob Walters

        Never underestimate what another country can do; witness the hacking China has done. Also never overestimate the stability of software. I am a retired Sr. SQA Engineer; if I had a nickel for every time a programmer told me I would not find a defect, I’d be a very wealthy man.

    • Ted

      Weapons can be captured, or will break. We still use machine guns to fight our wars. Autonomous weapons are coming; we have to gear ourselves now to understand what that means for the military.

      • Bob Walters

        I think autonomous weapons are a really bad idea for a whole bunch of reasons.

        • Mike Paul

          Frightening, isn’t it?

  • Operator

    Excellent comment. Agree completely.

    The article fails to capture the realities of the battlefield because the author has never experienced them. To any readers who are not “on the inside”, this article is an almost laughably awful analysis.

  • Ops

    USNI must be getting paid some big bucks, or have some VERY dedicated ideological interests to keep pushing embarrassingly misinformed garbage like this out there.

  • Alfredo C. Magdalera

    UCLASS is one chip on our shoulders.

  • publius_maximus_III

    Seems like the reason for wanting an autonomous aircraft is the likelihood of enemy jamming of communications to and from a controller at a remote location for a non-autonomous aircraft. What if, instead of a steady stream of data being sent back and forth, the communication were condensed down to a single one-way packet? It would contain sufficient information about the proposed target, perhaps a moving enemy ship or portable SAM site, that the aircraft has used to make its “kill” decision, but it would await one final human confirmation at headquarters before proceeding. The return confirmation could be a very brief coded “go” message at a new frequency calculated from the original frequency. Both would be difficult to jam due to the brevity of transmission time and the change in frequency. It would be a “trust but verify” semi-autonomous system.

    • Ted

      The argument today is whether we trust a robot to decide on its own. Twenty years (ten is possible) from now, AI computers will be to the point where we will trust the computers more than man. Then the argument will be that we should not have a man in the loop because we trust the computers so much more.

      • publius_maximus_III

        So, in the Brave New World of the future, a Predator may taxi up to its wing commander’s office and “pointedly” ask for a promotion…

      • Mike Paul

        Have you seen the state of industry today? Automakers and many other industries have a handful of robots assembling products that would take hundreds of people to produce. I would call that trust.

  • BradMueller

    War, by its definition, is neither legal nor ethical. Wars are only won or lost. So anything that can facilitate winning a war is an asset.

  • James Hasik

    Dramatically? Really? I don’t think that Dave is advocating depending “on software alone”. The essay raises lots of issues, and offers ideas, but doesn’t pretend that this case is closed.

    • VF84RIO

      I agree the case is not closed. Progress on technology will continue. But the technocrats would have you believe we can build these autonomous systems today and the cost of war will be dramatically reduced. I don’t think we want the cost of war to be reduced. It’s also public record that the USAF Chief Technology Officer says (paraphrasing) “We don’t know how to test complex autonomous systems and we don’t trust them yet.” For the USN, I think the best use of UCLASS is surveillance and tanking; it’s a huge waste to have F-18s tanking and drilling around identifying ships in the area.

  • BMont

    I do think he somewhat exaggerates the potential cost savings. He states that an autonomous UAV only needs occasional peacetime flights to keep maintainers proficient, and “has no need to fly training sorties or to practice”. In addition to maintainers, operators will have to practice programming, launching and recovering the UAV. And there will always be a need to do rehearsals or tests to ensure the computer performs as programmed and the weapons system operates as expected; we still test cruise missiles, and fire (and recover) torpedoes before they go to the fleet. There will certainly be savings, perhaps dramatic, but those costs will not go away entirely.

  • http://orthoman.com/ DockyWocky

    Just when things were looking up when it came to relatively risk-free killing of icky folks like ISIL savages, along come the ethicists decrying the use of autonomous strike aircraft.

  • Ted

    It is not that we need to rely on software alone but we need to understand that these weapons will be a part of future warfare. We don’t rely on missiles alone but they are a tool that is used regularly in war.
    One thing that was not touched on is the growth of AI. In the next 20 years we will have AI computers that are smarter than every human on Earth combined. There will not be a staff out there that will plan an exercise or operation without the guidance of a computer. You are right that technology doesn’t win wars. But you need to understand that 20 years from now autonomous robots will be in every battalion sized unit in the DOD.

    • Bob

      “But you need to understand that 20 years from now autonomous robots will be in every battalion sized unit in the DOD.”

      What are you smoking, dude?

  • http://www.kcharlesbadoian/ Ken Badoian

    Moral and legal…do you think any potential enemy would be moral and legal in a war with us? Too many lawyers, and I’ll forgo any lawyer jokes. Any morality in war is a joke. But humans will continue to wage wars for whatever purpose. The only point we should be worried about is that we are the ones who can dictate the moral and legal outcomes, that is, be the winners, or it’s all for naught.

    • publius_maximus_III

      It’s 99% of all lawyers that give the rest a bad name.

  • Chesapeakeguy

    I cringe when I read about ‘laws of war’. It seems the only ones held to such ‘standards’ are us, or our side. The enemies we fight are never held accountable for their actions. In war, there is no ‘high road’ to apply; you either win or lose, or on a more personal level, you live or die. As to legal considerations, does it matter what is used to take out a potential objective? Whether manned or unmanned, if a platform takes out civilians, the entity that used them (nation, alliance, etc.) will be blamed.

    All that said, you use what you have. If reliance is being placed on unmanned vehicles, so be it. The same mistakes in targeting have happened to both (manned vice unmanned). That will apply to unmanned sea craft, unmanned land vehicles, and the eventual mechanical ‘foot soldier’ that is inevitable. Among the rationale for using unmanned craft is that they tend to be cheaper and safer, in that the elements needed for human interoperability are not required, and (our) humans are not exposed to harm in the way of death, injury, or capture.

    As for the present, it makes sense that these platforms, if used in an attacking mode, only attack fixed, known targets if they are programmed to act autonomously. If unmanned craft are to be used against more mobile targets, or in a changing situation, then the ability to control them and redirect them must be maintained and assured. Duh! I suspect that’s the case now. I have confidence that our military and the other services that use them (CIA, etc.) will get it right, and for the most part they have had it right since the get-go.

    • Secundius

      @ Chesapeakeguy.

      The only “Law of War” guide that I’m familiar with states the following.

      In war, there are only two rules to follow:

      > RULE NUMBER ONE: In war innocent people die.
      > RULE NUMBER TWO: You can’t change, Rule Number ONE.

      • Chesapeakeguy

        I read you Secundius. Well stated!

        • Mike Paul

          I believe my rule book says the same thing. Pity more people don’t know the simple rules of a civil society.

  • Chris Peters

    I got a lot to say about this subject, but for starters, the parameters of engaging a “near peer” state are not target destruction, but dollar-cost averaging on enemy resources to continue fighting towards a favorable cease-fire negotiation. Under economic constraints of assigning value to enemy targets, and realizing that passive attrition of enemy economic strength to start, fight, or retaliate at an adversary, opens up a wide range of possibilities that have been in widespread use for quite some time now. Full market adoption of consumer electronics that receive broadcast messages from dubiously owned corporate assets gridlocks civilian economic activity into enemy signals intelligence right here within our own borders. European and Japanese automobile manufacturers, alongside corporate communication interests, flood American markets with car badges and model numbers, computer chip sets, and portable music devices that are less-than-friendly encrypted messages. These foreign manufacturers located outside of our hemisphere, who have operated at widening losses for quite some time now (the most obvious that comes to mind is the Chinese yuan), are working in loose formation to apply military pressure on economic strength to prepare for, and win, any forthcoming war. Complicating the matter even further are ground operations that can be erected into free-market zones to interface with low-earth-orbiting or geostationary space platforms, which can all be procured and launched from over the horizon. So the discussion of UAVs engaging enemy targets on heat signature, or any other technical parameter, calls into question a host of problems that were already being fully leveraged against Americans before I was even born. Even the most vicious of lawyers shy away from laws that would group civilian technology in with military law because it creates a lot of questions about economic class warfare that go back to the European Middle Ages, where feudal lords would arrange wars between each other as an exit strategy from financial overreach. For any laws about UAVs to be legally binding of a technical nature, you would have to specifically address bandwidths, you would have to articulate the time dilation of overlapping space navigation and detection platforms on ground activity, and that would bring to light all the legacy costs of civilian black operations and where all that money went.

  • CPTCHUCK

    We in the west are attempting to sanitize war to the point that it will be “send in the clowns and let them sort the problem out.” The best observation tool is the MK 1 eyeball; the best decision maker on whether to shoot or not to shoot is a human. Even an RC drone is much preferable to an autonomous air vehicle that goes to an area and drops a bomb on Point X when the bad guys have moved to Point Y. I am a ground pounder, and as such understand that air power cannot win a war; it can make winning easier and keep you from losing, but it cannot defeat the enemy solely by itself. The duration of a bomb’s effect on the ground is about 5 minutes. The duration of effect of an infantryman on the ground is unlimited.

  • anyfacetango

    I completely agree with VF84RIO; it’s not just a question of placing sensors in the air, but of having the means to EVALUATE whatever information any sensor can obtain. As far as I’m concerned, evaluation is the one step in the tactical cycle which can only be carried out by a human being. Furthermore, an on-the-spot tactical observer (whether it be a pilot, RIO, Tactical Director, etc.) is critical, as it can adapt to changing tactical situations far more than a UAV can, regardless of how well it’s programmed.
    UASs in general are a perfect complement to other assets, but can under no circumstances completely exclude the watchful criteria of the traditional aviator: it’s simply impossible to take into account every single tactical situation and program a UAS accordingly. Something else is needed.

  • RoadRunner

    Glad someone is addressing the issues here. However, no one has answered the question that keeps bothering me regarding our current policy of using armed drones in the anti-terrorism campaign. “What will our reaction be when similarly armed drones start attacking American targets right here at home?” We have the lead on the technology for now, but you know how that goes – it won’t take long to proliferate, and to the worst places/people.