See the attached Word document for detailed instructions and the structure I need made into a cohesive literature review.
I need a literature review of 10 pages addressing the following research question:
Title: Optimizing Hybrid Decision-Making Models in AI-Integrated Weapon Systems: Balancing Human Control, Ethical Oversight, and Efficiency through AI Autonomy
Research Question: 
To what extent can a hybrid decision-making model effectively reconcile human expertise and AI capabilities within the functionality of lethal autonomous weapon systems?
Important Note: Please use the points mentioned in my structure and rewrite them into a cohesive literature review.
Aim of Research: 
The insistence on human oversight and control, at the expense of full AI-guided autonomy within weapon systems, takes for granted that humans are more capable of final decision-making when it comes to the "critical function" of lethal autonomous weapon systems. To get to the bleeding edge of our understanding of AI uses in weapons (and of what we can do about it, hence optimized policies and regulations), I want to focus my research on critiquing the current debate and its inability to contribute meaningfully to the reality of AI's role in weapons development and use.
This research aims to explore the strategic alignment of policy frameworks and regulatory approaches with the evolving capabilities and limitations of AI in weapon systems. It will investigate the development of a hybrid decision-making model that maximizes the strengths of human expertise and AI capabilities while addressing the critical challenges and benefits associated with integrating human oversight into AI-driven autonomous functions. The focus will be on ensuring responsible use, ethical decision-making, and a balance between human control and AI autonomy within automated weapon systems. I will use the literature to review the evolution of AI in weapon systems, its capabilities, limitations, and ethical concerns. I plan to expand the set of literature available to me by looking especially at studies conducted within military universities and within the framework of AI regulatory anticipations by public institutions. Since I want to focus my research on critiquing the current debate and its inability to contribute meaningfully to the reality of AI's role in weapons development and use, I will have to gain a full understanding of current developments in the debate and of the consensus displayed in scholarly research findings as well as reflected in political regulations. Hence, I will have to review and analyze both sides of the argument, as well as the perspectives of those who stand to benefit from, or suffer under, the realization of autonomous weapon systems.
What the Literature Review should include: 
•            Critical review of existing literature related to AI in weapon systems, policy frameworks, and ethical considerations.
•            Identification of gaps in the current research.
Structure: 
Background, evolution, and current discussion of LAWS: What is the current debate about LAWS?
⁃             Definition of and differentiation between weapon autonomy and automaticity: What is the status quo in current use?
⁃             Look into the functional character of weapon autonomy and examine which of its aspects are actually new. 
⁃             Address weapon autonomy as a functionality
⁃             Interim conclusion: Autonomy in weapons is not categorically distinctive
⁃             Technological status of LAWS and their areas of application
⁃             What does AI mean for the military sector?
⁃             To what extent has our understanding of security changed due to AI?
⁃             Why is there a need to invest in AI deployment? The changing image of war.
⁃             What are the prerequisites for the successful and effective use of AI in the military sector? 
⁃             The future of military AI: In what ways can AI be applied in the military sector? 
⁃             Examples of AI military strategies
⁃             AI in lethal autonomous weapon systems (LAWS)
⁃             Definition of LAWS
⁃             Grades of Autonomy
⁃             Current status of the development of LAWS
⁃             Current debates and positions of various stakeholders (military, politics, civil society): ethical, legal, and security concerns; capabilities and limitations of LAWS
⁃             What is the problem with AI? The problem with AI is what it cannot do: its manipulability and its lack of controllability. Bill Gates said in an interview (source: Handelsblatt) that we know AI works, but we still have to research why it works; security issues cannot be made dependent on a technology that poorly understood.
⁃             Are there existing political demands (EU/US/?)?
⁃             Analysis of existing policy frameworks and regulatory approaches in AI-integrated weapon systems
⁃             Interim conclusion: the prospect of gaining the upper hand by allowing the targeting cycle to be completed at machine speed is, despite the accompanying ethical and legal misgivings, arguably the most important factor propelling current efforts to make weapons autonomous and remove human control entirely.
Critical consideration of LAWS
⁃             AI use in the military context also means possibilities for misuse: What are the dangers of AI? 
•            Danger of deliberate or inadvertent manipulation and hacking
•            New dimension of global destabilisation
•            Danger of proliferation
•            Use of insecure systems
•            E.g., easier access for small states or terrorist organisations to weapons of mass destruction, or a massive increase in the effectiveness of conventional weapons through AI control.
•            E.g., linking AI to decisions on the use of weapon systems, where false alarms or limited attacks without human intervention can lead to escalation.
•            —> AI as a Security Risk —> Ethical and legal questions of principle
•            Ethical Question: Can the decision about the life and death of a human being be left to a machine? This question is especially acute in the case of LAWS.
•            Legal Question: Are LAWS compatible with international humanitarian law?
⁃             Interim conclusion: AI use in a military context means that any problem with AI translates into an immediate and severe security risk.
⁃             Analysing the reasons why certain decisions are currently made by humans: explain the principle of "meaningful human control"
⁃             Analysing errors and limitations of both humans and AI in decision-making processes 
•            Human error vs. AI error / adversarial examples
•            Why do we trust humans more than technology to make the final decision in the "critical function" of LAWS, i.e. selecting and engaging the target? Are there studies comparing human error with AI error (adversarial examples)? Consider, for example, weapon automation (I do not think there is much difference between autonomy and automation) used to increase a weapon system's precision-strike capability, which in turn allows the system to be used in a manner that discriminates better between legitimate military targets and civilians. Due to the vulnerability of machine learning to adversarial examples and the inability of AI to handle uncertainty, so far no democratic state abiding by IHL would risk letting a military AI make the "firing decision" on its own. It would, however, let a "trained" human being make this decision. Yet humans make errors due to contextual influences, influences from which an AI would not suffer nearly as much as it "suffers" from decision-making errors a human would not make. Why do we deem one risk of error less harmful than the other? Even if the use of AI in LAWS is viewed as a simple arms race, then yes, developing proper weapon autonomy generates a key tactical advantage over any adversarial system that is controlled remotely and is thus necessarily slower than a targeting cycle completed at machine speed. But any debate about what the potential of weapon autonomy could mean, and how it could or should be regulated, will remain exactly that, a hypothetical debate, as long as no technical solution is found for adversarial examples. The fact that very expensive AI-enabled weaponry could potentially be tricked quite easily and cheaply, a fact derivable from knowing the capabilities and limits of the underlying technology, is rarely discussed because it runs counter to the dominant military AI narrative. (A minimal technical illustration of adversarial examples follows after this list.)
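Since adversarial examples carry so much argumentative weight here, a minimal sketch may help make the vulnerability concrete. The following Python/PyTorch snippet implements the fast gradient sign method (FGSM), a standard textbook attack; the model, image, label, and epsilon are generic placeholders, not references to any actual military system.

```python
# Minimal FGSM sketch (after Goodfellow et al., 2015), assuming a generic
# PyTorch classifier; `model`, `image`, and `label` are placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module,
                 image: torch.Tensor,
                 label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image`.

    A tiny, often human-imperceptible perturbation in the direction of the
    loss gradient is frequently enough to flip a classifier's decision.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

The asymmetry this illustrates is the relevant point for the review: the attack is cheap and generic, while robust defences remain an open research problem, which is precisely why no IHL-abiding state currently delegates the firing decision.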
Human decisions vs. AI decisions
⁃             Analysing the strengths and weaknesses of human decision-making in conflict situations
⁃             Role of intuition, experience and moral judgement
⁃             Comparison of Human and AI capabilities and limitations, especially in the context of information processing and decision-making
My take on the literature review so far: 
The idiosyncrasies and limitations of AI suggest that military AI could fall short of fulfilling numerous pledges outlined in concepts such as "Fighting at Machine Speed".1,2 AI has been used within weapon systems for years. Hence, any question concerning whether to introduce AI into military and weapon systems is outdated. Instead, the main question still to be answered is if and how weapon autonomy can be ethically produced and used in cases where human control is entirely removed. While the core issue of meaningful human control is already controversial, there is no question that the unrestricted use of artificial intelligence in some important functions of weapon systems could undermine compliance with the basic principles of international humanitarian law, such as distinction, proportionality, and necessity.3,4,5
Opponents of autonomous machines argue that such devices would lower the inhibition threshold for violence.6 However, before passing judgement, we should understand the moral and legal framework that governs the operation of autonomous weapon systems. All weapons are subject to the principles of international humanitarian law (IHL). Traditionally, it is human commanders who implement the international rules of military conflict, and they can be brought before the International Criminal Court in The Hague if they fail to do so. According to the current scholarly and political debate, it would be a fallacy to assume that autonomous weapons can one day be programmed so precisely that they distinguish between legitimate and illegitimate targets far more accurately than humans.7 This research will challenge these assumptions in order to contribute to a less emotion-driven and more objective debate. It will argue that the debate surrounding AI in weapon systems is incomplete and should instead include arguments based on scientific research into the limitations of AI as well as human errors and limitations, especially at a time when adversaries might already be using autonomous weapon systems. The aim of this thesis is not to conclusively resolve the debate about the regulation of AI in autonomous weapon systems. Instead, it seeks to reignite the discussion by introducing technical intricacies that challenge the common tendency to dismiss them as too complex, with the goal of fostering a deeper understanding.8 If we survey and understand failures of human decision-making in warfare, and concurrently survey and explain the failure modes of machine learning that impair the proper functioning of larger systems (in warfare and elsewhere), we might be able to develop an optimized hybrid decision-making model for AI weapon systems, one that combines the ethical values embedded within IHL and optimized policy with efficient warfare mechanisms.
In the end, the debate, driven strongly by objectors to the use of AI for autonomous decision-making, fails to examine why we trust humans more than technology to make the final decision in the "critical function" of LAWS, i.e. selecting and engaging the target, even though that technology might have the potential to operate without bias or contextual influence.
Even if we cannot find a "solution" that prevents faulty AI decision-making due to adversarial examples or adversarial influence, regulatory policy frameworks must not diminish the beneficial potential of AI in weapon systems; they must remain open to the evolving capabilities of AI while ensuring responsible use and accountability.9
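To make the notion of a hybrid decision-making model more tangible, here is a deliberately simplified sketch of what such a human-in-the-loop gate could look like in code. Everything in it (the Assessment structure, the confidence threshold, the human_confirm callback) is a hypothetical illustration of the architecture under discussion, not a description of any existing or proposed system.

```python
# Hypothetical sketch of a hybrid (human-in-the-loop) decision gate.
# All classes, field names, thresholds, and labels are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Assessment:
    label: str         # e.g. "military_target" or "civilian_object"
    confidence: float  # model's estimated probability in [0, 1]

def decide_engagement(assessment: Assessment,
                      human_confirm: Callable[[Assessment], bool],
                      threshold: float = 0.99) -> bool:
    """Gate the 'critical function' behind meaningful human control.

    The AI only ever recommends; engagement requires human confirmation,
    and low-confidence or non-military classifications are refused outright.
    """
    if assessment.label != "military_target":
        return False  # never engage anything not classified as a military target
    if assessment.confidence < threshold:
        return False  # too uncertain: abort instead of acting at machine speed
    # Even a high-confidence recommendation remains a human decision.
    return human_confirm(assessment)

# Usage example: an operator stub that always declines.
if __name__ == "__main__":
    a = Assessment(label="military_target", confidence=0.995)
    print(decide_engagement(a, human_confirm=lambda _: False))  # -> False
```

The design question the review should keep in view is where the threshold sits and who sets it: raising it shifts the balance toward human oversight, lowering it toward machine speed, which is exactly the trade-off the research question targets.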
Sources used so far: 
1 Kulkarni, Adarsh, and John Brennan. "Fighting for Seconds: Warfare at the Speed of Artificial Intelligence." Modern War Institute, 2 Nov. 2023, mwi.westpoint.edu/fighting-for-seconds-warfare-at-the-speed-of-artificial-intelligence/.
2 Sauer, Frank. "The Military Rationale for AI." In Reinhold, T., and N. Schörnig (eds.), Armament, Arms Control and Artificial Intelligence. Studies in Peace and Security. Springer, Cham, 2022. https://doi.org/10.1007/978-3-031-11043-6_3.
3 Blauth, T. F. "Autonomous Weapons Systems in Warfare: Is Meaningful Human Control Enough?" In Zwitter, A., and O. Gstrein (eds.), Handbook on the Politics and Governance of Big Data and Artificial Intelligence. Cheltenham, 2023, p. 489 ff.
4 Sauer, Frank. Künstliche Intelligenz in den Streitkräften: Zum Handlungsbedarf bei Autonomie in Waffensystemen. Bundesakademie für Sicherheitspolitik (BAKS), Arbeitspapier Sicherheitspolitik Nr. 26/2018. [Translated into English and reprinted as: Artificial Intelligence in the Armed Forces: On the Need for Regulation Regarding Autonomy in Weapon Systems.]
5 Zurek, Tomasz, Mostafa Mohajeriparizi, Jonathan Kwik, and Tom Engers. "Can a Military Autonomous Device Follow International Humanitarian Law?" 2022, doi:10.3233/FAIA220479.
6 Crootof, Rebecca. "The Killer Robots Are Here: Legal and Policy Implications." Cardozo Law Review 36 (2015): 1837-915.
7 Amoroso, Daniele, et al. "Autonomy in Weapon Systems: The Military Application of Artificial Intelligence as a Litmus Test for Germany's New Foreign and Security Policy." 2018, pp. 28-31.
8 Altmann, Jürgen. "Autonomous Weapon Systems – Dangers and Need for an International Prohibition." Deutsche Jahrestagung für Künstliche Intelligenz, 2019.
9 Zurek, Tomasz, Mostafa Mohajeriparizi, Jonathan Kwik, and Tom Engers. "Can a Military Autonomous Device Follow International Humanitarian Law?" 2022, doi:10.3233/FAIA220479.