
A New Arbitrator or a Trojan Horse in International Arbitration


Abstract 

This article analyzes the role of Artificial Intelligence (AI) in international arbitration and explores the potential legal perils lurking behind this technological innovation. Specifically, it examines the delegation of decision-making authority to AI, breaches of confidentiality and impartiality, and escalating cybersecurity risks. One of the core findings of this research highlights the "feedback loop" phenomenon, a flaw whereby AI, tethered exclusively to historical data, fails to forge new precedents, thereby threatening the dynamic evolution of law with stagnation. Ultimately, the article argues that AI must remain strictly a technical assistant to human arbitrators, operating within the boundaries of rigorous legal regulations.

 

Introduction 

"Mr. Arbitrator?... Madam Arbitrator? No, wait... Robot AIrbitrator? I know: Esteemed Neutral Decision-Making Entity?! Due process, help us; the world has completely lost its mind..."

One can easily imagine an old-school, essentialist counsel muttering these exact words under their breath during a high-stakes pleading. After all, it took them half a century just to get comfortable with evolving human pronouns, and now they are supposed to figure out the proper honorific for a machine in an arbitral dispute? As they stand at the podium, a cold sweat breaks out, and their mind flashes back to the year everything changed: 2024.

Narrative flourishes and comedic panic aside, April 2024 indeed marked a watershed moment in the history of international dispute resolution. The publication of the “Guidelines on the Use of Artificial Intelligence in Arbitration” by the Silicon Valley Arbitration and Mediation Center (SVAMC) effectively confirmed that the technological revolution had not merely knocked on the doors of arbitration; it had breached them.[1]

This paradigm shift naturally evokes a polarized response. On the one hand, the integration of legal technology promises to be a catalyst for progress. AI in arbitration offers the alluring prospect of unprecedented speed, drastic cost reduction, and an overall boost in the efficiency of arbitral tribunals.[2] Yet, on the other hand, lurking behind this seductive convenience are profound risks that could threaten the very core of the institution.

However, the purpose of this article is not to sound the alarm, but rather to dissect these hidden threats and, ultimately, to put our anxious essentialist counsel’s mind at ease. By analyzing the true capabilities and limitations of AI, this paper will show that there is no need for existential dread. Instead of a robotic takeover, AI is destined to remain a powerful, yet strictly controlled, assistant, preserving the irreplaceable human element in the pursuit of justice.[3]

The primary objective of this article is to scrutinize the hidden perils lurking behind the facade of technological efficiency and to determine whether this seemingly irresistible Artificial Intelligence is, in fact, a modern-day "Trojan Horse." These underlying threats include the heightened risk of arbitral award annulment, the erosion of confidentiality and impartiality, the dissolution of accountability, and the ultimate stagnation of legal development (the so-called "feedback loop").[4]

The Legal Landscape and Party Autonomy

To begin with, the current legal framework does not categorically prohibit the integration of new technologies into arbitral proceedings. The very essence of arbitration lies in its contractual nature; both the arbitral award and the procedural conduct depend entirely on the arbitration agreement forged between the parties.

Specifically, Article 19(1) of the UNCITRAL Model Law on International Commercial Arbitration explicitly enshrines party autonomy, stating that the parties are free to agree on the procedure to be followed.[5] Furthermore, under Article 19(2),[6] the arbitral tribunal is empowered to conduct the arbitration in such a manner as it considers appropriate, which includes the authority to determine the admissibility, relevance, materiality, and weight of any evidence. Similarly, Rules 19.1[7] and 19.2[8] of the Singapore International Arbitration Centre (SIAC) Rules echo this exact sentiment.

As evidenced by these foundational pillars of arbitral procedure, there are no inherent statutory or institutional constraints placed upon the tribunal regarding the adoption of procedural mechanisms, the gathering of factual evidence, or the interpretation of documents. This unequivocally proves that the proliferation of legal technology should not be paralyzed by a baseless fear of institutional prohibition.[9] Ultimately, the principle of party autonomy empowers the disputing parties themselves to define the exact scope and boundaries within which the tribunal may employ Artificial Intelligence.

The Illusion of Certainty: The 90% Dilemma

However, the real predicament arises when we pivot from theory to practice. When we rely on AI to build dispute resolution platforms, systems touted for achieving "correct" outcomes in approximately 90% of cases, a critical question emerges: how do we distinguish which specific decision falls into the reliable 90%, and which crashes into the erroneous 10%?[10]

This statistical gamble casts a massive shadow of doubt over the reliability of Artificial Intelligence. It fundamentally questions whether a machine is truly capable of digesting the intricate nuances of complex facts, applying human-like reasoning, and ultimately rendering a completely objective and flawless arbitral award.

 

The Intuitu Personae Principle and the Danger of the Algorithmic Ghostwriter

Consequently, one of the most glaring legal perils shadowing the use of AI is the unauthorized delegation of the arbitral mandate. Arbitration is intrinsically anchored in the principle of intuitu personae. Parties entrust their dispute to a specific, carefully chosen human being, not an algorithm, based on their unique professional pedigree, reputation, and personal judgment. Therefore, if an arbitrator offloads their fundamental duty of rendering the decision to an AI, they are effectively surrendering their jurisdictional authority to an unauthorized, non-human third party.[11]

A highly illustrative precedent in this context is the landmark case of Yukos Universal Ltd. v. The Russian Federation.[12] In the set-aside proceedings before the Hague District Court, one of the grounds pressed against the arbitral award was the allegation that the drafting had been heavily outsourced to the tribunal’s secretary, in violation of the cardinal rule that arbitrators must personally author their decisions. The logic here is inescapable: if the involvement of a human assistant acting as a "ghostwriter" can be invoked to attack a multi-billion-dollar award, an algorithmic ghostwriter will undoubtedly serve as a fatal ground for vacatur, as it blatantly disregards the parties' agreement on procedure. While AI can certainly expedite the procedural timeline, at the end of the day, an award entirely devoid of genuine human cognitive reasoning is destined to hit a brick wall at the enforcement stage.[13]

The Paralegal Dream vs. The Confidentiality Nightmare

This brings us to the tantalizing promise mentioned earlier: the "simplification" of proceedings. What exactly does this entail? We frequently hear prophecies that AI will soon render paralegals obsolete by taking over tedious administrative tasks, organizing massive document caches, compiling factual chronologies, translating voluminous exhibits, and drafting skeletal procedural orders.[14] At first glance, this sounds like a practitioner’s dream. After all, it is precisely these technical burdens that notoriously drag out the lifespan of a dispute, bottlenecks that lightning-fast AI operations could theoretically eliminate.

However, amidst this pursuit of speed, we must not suffer from institutional amnesia regarding one of commercial arbitration’s crown jewels: Confidentiality. This core principle imposes a strict obligation to shield sensitive, privileged data and trade secrets from the public domain and unauthorized third parties.[15] Feeding proprietary case files into a third-party AI engine poses an existential threat to this sanctuary.

Furthermore, the European Union’s General Data Protection Regulation (GDPR) is inextricably linked to AI governance, imposing draconian requirements on data processing and protection.[16] This demands our utmost attention, as there is well-founded anxiety within the legal community that, despite its numerous operational benefits, the integration of generative AI platforms in international arbitration will inevitably act as a highly lucrative magnet for hackers and cyberattacks.

 

The Cyber Threat: Fragility in the Digital Supply Chain

The paramount importance of cybersecurity is a recurring motif in the majority of contemporary publications and guidelines on this subject. This urgency was acutely captured by an Arbitration Tech Toolbox article introducing the new CyberArb training program.[17] The piece is particularly gripping because it opens in medias res, with an alleged message from a hacker threatening to permanently obliterate access to data stored on an arbitration practitioner’s laptop with a single, random keystroke. In this scenario, the practitioner is chillingly identified as the weakest link in the supply chain of arbitral service providers. This stark hypothetical demonstrates just how fragile and sensitive the digital ecosystem of arbitration truly is. It serves as a grim reminder of how easily this balance can be shattered in an instant, often due to sheer ignorance or professional lethargy when it comes to implementing robust security measures alongside new information technologies.

A glaring, real-world manifestation of this vulnerability is the precedent-setting case of Gela Mikadze et al. v. Ras Al Khaimah Investment Authority et al.[18] According to the factual matrix of this case, which arose from a commercial arbitration conducted under the auspices of the Stockholm Chamber of Commerce (SCC), one of the parties filed a motion in the Swedish courts seeking the annulment of the award. The core argument was a severe violation of due process, claiming that a third-party hacker, allegedly operating under the directives of the opposing party, had successfully stolen highly confidential information from both the party’s legal counsel and the arbitral tribunal itself.

Consequently, the lesson is unequivocal: no one is immune to cyberattacks in the modern digital arena. The utilization of technologies that streamline access to information inherently broadens the attack surface for unauthorized interception, demanding extreme caution.[19]

The Generative AI Trap: Data Retention and Arbitral "Hallucinations"

Beyond the threat of external hackers, the very architecture of AI poses internal risks. It is a well-documented reality that platforms like ChatGPT retain and process user-inputted information to continuously train and refine their underlying models. Alarmingly, there have been instances where proprietary code or confidential data fed into the system was subsequently regurgitated in responses generated for entirely different users from other organizations.[20] Therefore, the advent and widespread adoption of generative AI tools in international arbitration render the risk of data breaches almost inevitable.

As a direct consequence, the compromise of personal data and trade secrets casts a long, inescapable shadow over public policy considerations and the perceived impartiality of the arbitrator. Ultimately, this chain reaction leads straight to the annulment of the arbitral award, ironically causing the exact protracted delays that AI was supposedly deployed to eliminate.

The pressing question then becomes: is any arbitrator truly willing to take such a perilous gamble?

Astonishingly, empirical practice reveals that the answer is yes. A prime example is the ongoing litigation in LaPaglia v. Valve Corp. In this case, the claimant petitioned to vacate an arbitral award based on the explosive allegation that the arbitrator, who was reportedly rushing to catch a flight, relied on Artificial Intelligence to draft the final award in a desperate bid to save time. According to the claimant's petition, the resulting award was riddled with factual inaccuracies and cited entirely fabricated evidence (a phenomenon technically known in the AI industry as "hallucinations"). This catastrophic error implies that the arbitrator completely abdicated their adjudicative function, outsourcing their intellectual judgment to a machine.[21] While a final judicial ruling on the annulment is still pending, the undeniable fact remains: the enforcement of the arbitral award has been paralyzed, and the fundamental integrity of the arbitrator has been severely compromised.

The Illusion of Impartiality: Prompts, Poisoned Data, and the Feedback Loop

Furthermore, it is imperative to address the issue of impartiality, an inseparable pillar of arbitration alongside confidentiality. In the aforementioned LaPaglia scenario, the human arbitrator’s independence was compromised. But looking deeper, a critical question arises: how objectively does the AI itself actually "reason" during the decision-making process?

We have all likely experienced the sensation that AI simply tells us exactly what we want to hear. Ultimately, everything hinges on the formulation of the prompt. It is entirely plausible that an arbitrator could input a leading question, heavily weighted toward one party's position (for example: "Explain why the Claimant's prayers for relief should be granted"). Under such heavily biased parameters, generating an objective and impartial award is practically impossible.

Moreover, generative AI scours the vast expanse of the internet to synthesize its answers. Frequently, the scraped data is erroneous, obsolete, entirely unreliable, or fundamentally misinterpreted: fatal flaws that drastically degrade the legal reasoning and quality of the award. As the LaPaglia case demonstrates, AI possesses a notorious tendency to fabricate facts out of thin air (so-called "hallucinations").

This vulnerability opens the door to a dangerous new litigation tactic. Because the internet is an open forum, an unscrupulous party is not restricted from publishing a highly convincing, yet entirely fabricated, legal article online. This article could boldly validate their legal position by citing phantom precedents or twisted interpretations of actual case law. When the AI inevitably scrapes this manipulated data, it is stripped of its ability to make an objective decision. It blindly accepts the "fake" article as a credible source, basing its arbitral award on a lie. In academic literature, this terrifying phenomenon is identified as the "Feedback Loop", a systemic flaw where AI continuously recycles subjective, unverified human opinions, presenting them as objective reality.[22]

To illustrate the catastrophic potential of this bias, we can look to an anecdote shared by the renowned scholar William Park. He recalls a case involving Italian parties where an arbitrator bluntly remarked: "In these types of cases, Italians always lie and say whatever suits their interests."[23] If such prejudiced rhetoric were ever formalized in a text and subsequently digested by an AI model, the algorithm would internalize the biased "rule" that all Italians are liars. This would trigger a systemic collapse of fairness in any future dispute involving an Italian party. Entrusting decision-making to AI currently carries monumental risks, as the technology is easily manipulated by both the prompter and the toxic biases lurking on the internet, thereby fundamentally destroying the principles of independence and impartiality.

The Death of Precedent and the Fossilization of Law

But let us entertain a theoretical utopia for a moment. Suppose all the aforementioned risks are eradicated: the AI is a perfectly objective arbitrator, and the threat of data breaches is completely neutralized. Would such an entity actually advance the practice of arbitration?

The emphatic answer is no. We can assert with certainty that arbitral proceedings and the resulting awards are as unique, fluid, and progressive as a piece of art or a living organism. Every arbitral award is the product of a specific human arbitrator’s individual cognition and interpretation. While it may share similarities with past cases, the entire jurisprudential weight and legal value of a decision often hinge on a single, unconventional argument that carves out a new precedent.

Throughout the history of international arbitration, numerous landmark decisions were initially dismissed as radical anomalies that contradicted established practice at the time (a prime example being Urbaser S.A. and Consorcio de Aguas Bilbao Bizkaia, Bilbao Biskaia Ur Partzuergoa v. The Argentine Republic, ICSID Case No. ARB/07/26). Currently, it is an undeniable fact that AI is entirely devoid of the capacity to create anything genuinely novel or unique. It merely mines existing historical data and synthesizes it into an aggregate final product.[24]

If arbitral awards were exclusively generated by Artificial Intelligence, we would witness a grim reality. Every decision would be a cookie-cutter replica of the last, strictly anchored to historical cases. Practically no decision would dare to deviate from statistical norms to forge a progressive legal path. Consequently, the evolution of law would freeze in the past, completely paralyzed and unable to adapt to new, unforeseen socio-economic realities.

 

 

Conclusion

Ultimately, it is an undeniable truth that the march of technological progress cannot be halted. As it evolves, Artificial Intelligence increasingly simplifies our workflows and modern lifestyles, a momentum we must undoubtedly harness to our advantage. However, while AI remains in its developmental infancy, it is entirely premature to claim it will render any legal profession obsolete, least of all the deeply nuanced and profoundly human role of an arbitrator.

It is entirely plausible that AI will brilliantly streamline and accelerate arbitral proceedings, democratize access to alternative dispute resolution, and ultimately alleviate the severe backlogs burdening domestic courts. Nevertheless, this newfound efficiency must never be purchased at the expense of arbitration’s foundational principles. The solution, of course, does not lie in a draconian, blanket ban on Artificial Intelligence. Instead, it demands the establishment of strict, mandatory regulations binding upon all disputing parties.

The arbitral process must remain uncompromisingly transparent: parties have an absolute right to know exactly to what extent, and for what specific tasks, the tribunal is deploying AI. Furthermore, the final arbitral award must invariably be authored by a human mind. This is the only safeguard to circumvent the perilous algorithmic "hallucinations" and toxic biases detailed in this article. Failure to enforce this human mandate will inevitably trigger a systemic collapse, rendering the judicial enforcement of arbitral awards an absolute nightmare.

For this precise reason, the "human element" (the cognitive elasticity to see beyond rigid parameters, to exercise empathy, and to craft a uniquely tailored legal product) ensures that the role of the human arbitrator remains undeniably irreplaceable. AI is destined to serve as a brilliant assistant, but never the master.

The title of this article is no coincidence. After reading this analysis, one might hastily conclude that AI is a seductive, comfortable solution that unfortunately drags a myriad of inescapable risks in its wake. But, as ancient mythology reminds us, the tragedy of Troy was not caused by the wooden horse itself; it was caused by the naive acceptance, the hubris, and the sheer negligence of the people who willingly opened their gates.

Similarly, Artificial Intelligence will only become a destructive "Trojan Horse" for international arbitration if we carelessly allow it inside our walls, unregulated, unchecked, and uncontrolled.

 

 

Salome Mdivani

Free University of Tbilisi

Law School

Senior year

30.11.2025



[1] Silicon Valley Arbitration and Mediation Center. Guidelines on the Use of Artificial Intelligence in Arbitration. 1st ed. (April 30, 2024). <https://svamc.org/wp-content/uploads/SVAMC-AI-Guidelines-First-Edition.pdf>

[2] David L. Evans, Stacy Guillon, Ralph Losey, Valdemar Washington, and Laurel G. Yancey. "Dispute Resolution Enhanced: How Arbitrators and Mediators Can Harness Generative AI." Dispute Resolution Journal 78, no. 1 (2024): 57-92. HeinOnline.

[3] Yulia Razmetaeva and Natalia Satokhina. "AI-Based Decisions and Disappearance of Law." Masaryk University Journal of Law and Technology 16, no. 2 (2022): 256.

[4] Ghazal Bhootra and Ishan Puranik. "Arbi(Traitor)?: A Case against AI Arbitrators." Indian Arbitration Law Review 4 (2022): 40.

[5] UNCITRAL Model Law on International Commercial Arbitration (1985), with amendments as adopted in 2006, Art. 19(1).

[6] Ibid., Art. 19(2).

[7] Singapore International Arbitration Centre (SIAC), Arbitration Rules of the Singapore International Arbitration Centre (6th Edition, 1 August 2016), Rule 19.1.

[8] Ibid., Rule 19.2.

[9] Francisco Uribarri Soares. "New Technologies and Arbitration." Journal of Arbitration Law 7, no. 1 (2018): 86.

[10] Zachary Henderson. "AI and Probabilistic Dispute Resolution." Wisconsin Law Review 2025, no. 1 (2025): 220.

[11] Ben Davies. "Artificial Intelligence & FINRA Arbitration Awards: Utilizing AI and Arbitral Analytics to Uncover FINRA Arbitration Award Patterns." Arbitration Law Review 16 (2025): [iv]-36.

[12] District Court of The Hague, Veteran Petroleum Limited, Yukos Universal Limited and Hulley Enterprises Limited v. The Russian Federation, Judgment of 20 April 2016, PCA Case No. 2005-05/AA228.

[13] Cristina Ioana Florescu. "The Interaction between AI (Artificial Intelligence) and IA (International Arbitration): Technology as the New Partner of Arbitration." Romanian Arbitration Journal / Revista Romana de Arbitraj 18, no. 1 (2024): 71.

[14] Michelle Magal, Kate Limond, and Andrew Calthrop, "Artificial Intelligence in Arbitration: Evidentiary Issues and Prospects," in The Guide to Evidence in International Arbitration (London: Global Arbitration Review, 2023).

[15] Francisco Uribarri Soares. "New Technologies and Arbitration." Journal of Arbitration Law 7, no. 1 (2018): 96.

[16] Ibid., 308

[17] Shatrunjay Bose and Hongwei Dang. "Arbitration Tech Toolbox: Training Arbitration Practitioners to Resist Cyber Attacks." Kluwer Arbitration Blog, October 2, 2022. <http://arbitrationblog.kluwerarbitration.com/2022/10/02/arbitration-tech-toolbox-training-arbitration-practitioners-to-resist-cyber-attacks/>

[18] Svea Court of Appeal, Gela Mikadze and others v. Ras Al Khaimah Investment Authority, Judgment of 29 October 2019, Case No. T 280-18.

[19] M. S. A. Malekela. "AI and Confidentiality Protection in International Commercial Arbitration: Analysis of the Existing Legal Framework." Discover Artificial Intelligence 5, 83 (2025). <https://doi.org/10.1007/s44163-025-00316-7>

[20] Ibid.

[21] Petition to Vacate Arbitration Award; Memorandum of Points and Authorities in Support Thereof at 2, LaPaglia v. Valve Corp., No. 3:25-cv-00833 (S.D. Cal. Apr. 8, 2025).

[22] Ghazal Bhootra and Ishan Puranik. "Arbi(Traitor)?: A Case against AI Arbitrators." Indian Arbitration Law Review 4 (2022): 40.

[23] William Park. "Arbitrator Bias." Boston University School of Law, Public Law Research Paper No. 15-39 (2015). <https://scholarship.law.bu.edu/facultyscholarship/15/> [accessed 30.11.2025]

[24] Michael Murray. "Tools Do Not Create: Human Authorship in the Use of Generative Artificial Intelligence" (2023). DOI: 10.13140/RG.2.2.13795.53283.

 
