artificial intelligence Archives - Legal Cheek
https://www.legalcheek.com/tag/artificial-intelligence/
Legal news, insider insight and careers advice

Warfare technology: can the law really referee?
https://www.legalcheek.com/lc-journal-posts/warfare-technology-can-the-law-really-referee/
Tue, 02 Jul 2024


Harriet Hunter, law student at the University of Central Lancashire, explores the implications of AI in the development of weaponry and its effect on armed conflict in international humanitarian law


Artificial intelligence (AI) is arguably the most rapidly emerging technology in modern society. Almost every sector and societal process has been, or will be, influenced by artificially intelligent technologies, and the military is no exception. AI has firmly earned its place as one of the most sought-after technologies available to countries engaged in armed conflict, with many pushing to test the limits of autonomous weapons. The mainstream media has circulated many news articles on ‘killer robots’ and the potential risks to humanity — however, the reality of AI’s impact on the use of military-grade weaponry is not so transparent.

International humanitarian law (IHL) has been watching from the sidelines since the use of antipersonnel autonomous mines back in the 1940s, closely monitoring each country’s advances in technology and responding to the aftereffects of usage.

IHL exists to protect civilians not directly involved in conflict, and to restrict and control aspects of warfare. However, autonomous weapons systems are developing faster than the law — and many legal critics are concerned that humanity might suffer at the hands of a few. But, in a politically bound marketplace, is there any place for such laws? And if they were to be implemented, what would they look like, and who would be held accountable?

Autonomous weapons and AI – a killer combination?

Autonomous weapons have been at the forefront of military technology since the early 1900s, playing a large part in major conflicts such as the Gulf War. Most notably, the first use of autonomous weapons came in the form of anti-personnel autonomous mines. These mines are set off by sensors, with no operator involvement in who is killed, inevitably causing significant loss of civilian life. This led to anti-personnel autonomous mines being banned under the Ottawa Treaty 1997. However, autonomous weapon usage had only just begun.

In the 1970s autonomous submarines were developed and used by the US Navy, a technology which was subsequently sold to multiple other technologically advanced countries. Since the deployment of more advanced AI, the level of weapons that countries have been able to develop has led to a new term being coined: lethal autonomous weapons systems (LAWS). These are weapons which use advanced AI technologies to identify targets and deploy with little to no human involvement.

LAWS are, in academic research, split into three ‘levels of autonomy’, each characterised by the amount of operator involvement required in their deployment. The first level is ‘supervised autonomous weapons’, otherwise known as ‘human on the loop’ — weapons which allow human intervention to terminate engagement. The second level is ‘semi-autonomous weapons’, or ‘human in the loop’ — weapons that, once engaged, will attack pre-set targets. The third level is ‘fully autonomous weapons’, or ‘human out of the loop’, where weapons systems have no operator involvement whatsoever.

LAWS rely on advances in AI to become more accurate. Currently, there are multiple LAWS either in use or in development, including:

  • The Uran-9 tank, developed by Russia, which can identify targets and deploy without any operator involvement.
  • The Taranis unmanned combat air vehicle, being developed in the UK by BAE Systems — an unmanned jet which uses AI programmes to attack and destroy large areas of land with very minimal programming.

The deployment of AI within the military has been far-reaching. However, artificial intelligence is increasingly complex, and its application within military technologies is no different; certain aspects of AI have been utilised more than others. For example, facial recognition can be used on a large scale to identify targets within a crowd. Certain weapons also carry technologies that can calculate the probability of hitting a target — and of hitting it a second time by tracking its movements — which has been utilised especially in drones to track targets as they move from building to building.

International humanitarian law — the silent bystander?

IHL is the body of law which applies during an armed conflict. It has a high extra-territorial extent and aims to protect those not involved in the conflict, as well as to restrict warfare and military tactics. IHL has four basic tenets: distinction between civilian and military targets; proportionality (ensuring that any military advance is balanced against the cost to civilian life); precautions in attack; and the principle of ‘humanity’. IHL closely monitors the progress of the weapons that countries are beginning to use and develop, and is (in theory) considering how the use of these weapons fits within these principles. However, the law surrounding LAWS is currently vague. With the rise of LAWS, IHL is having to adapt and tighten restrictions surrounding certain systems.


One of its main concerns surrounds the rule of distinction. It has been argued that weapons which are semi- or fully autonomous (human in the loop and human out of the loop systems) are unable to distinguish between civilian and military bodies, meaning that innocent lives could be taken through the mistake of an autonomous system. As mentioned previously, autonomous weapons are not a new concept: following the use of anti-personnel autonomous mines in the 1900s, which made no distinction between civilians ‘stepping onto the mines’ and military personnel ‘stepping onto the mines’, IHL used the rule of distinction to propose a ban, signed by 128 nations in the Ottawa Treaty 1997.

The Martens Clause, which originated in the Hague Conventions and is echoed in the Additional Protocols to the Geneva Conventions, aims to control the ‘anything not explicitly regulated is unregulated’ concept. IHL is required to control, and to a certain extent pre-empt, the development of weapons which directly violate certain aspects of law. An example of this is the ban on ‘laser blinding’ autonomous weapons agreed in 1995 — laser blinding was seen as a form of torture, directly violating a protected human right: the right not to be tortured. At the time, laser blinding weapons were not in use in armed conflict; however, the ethical implications of these weapons for prisoners of war were a concern to IHL.

But is there a fair, legal solution?

Unfortunately, the chances are slim. More economically developed countries can purchase and navigate the political waters of the lethal autonomous weapons systems market — whilst less economically developed countries are unable to purchase these technologies.

An international ban on all LAWS has been called for, with legal critics stating that IHL cannot fulfil its aims to the highest standard whilst allowing the existence, development and usage of LAWS. It is argued that the central question intertwining AI, LAWS and IHL is this: should machines be trusted to make life-or-death decisions?

Even with advanced facial recognition technology, critics are calling for a ban: no technology is without its flaws, so how can we assume that systems such as facial recognition are fully accurate? The use of fully autonomous (human out of the loop) weapons, which a human cannot at any point override, means that civilians are at risk — arguably a complete breach of the principles of IHL.

Some legal scholars have argued that the usage of LAWS should be governed by social policy — a ‘pre-emptive governing’ of countries that use LAWS. This proposed system would allow and assist IHL in regulating weapons at the development stage, which, it is argued, is ‘critical’ to avoiding a ‘fallout of LAWS’ and preventing a humanitarian crisis. Such a policy would hold developers to account prior to any warfare. However, it could be argued that this lies outside the jurisdiction of IHL, which applies only once conflict has begun — leading to the larger debate over what the jurisdiction of IHL is, compared with what it should be.

Perhaps IHL is delaying the implementation of potentially life-saving laws because powerful countries are asserting their influence in decision-making; these countries have the power to block changes to international law where the ‘best interests’ of humanity do not align with their own military advances.

Such countries, like the UK, are taking a ‘pro-innovation’ approach to AI in weaponry, meaning they are generally opposed to restrictions which could halt progress. However, it has been rightly noted that these advanced technologies in the hands of terrorist organisations (who would not be bound by IHL) could have disastrous consequences. Proponents of this view argue that a complete ban on LAWS could lead to more violence than no ban at all.

Ultimately…

AI is advancing, and with it, autonomous weapons systems too. Weapons are becoming more advantageous to the military, with technology growing ever more accurate and precise. International humanitarian law, continually influenced by political stances and the economic interests of countries, is slowly attempting to build and structure legislation in response. However, law and technology are not developing at a comparable pace, which concerns many legal critics. The question remains: is the law attempting to slow an inevitable victory?

Harriet Hunter is a first-year LLB (Hons) student at the University of Central Lancashire, with a keen interest in criminal law and the laws surrounding technology, particularly AI.

Half of lawyers want self-regulation when it comes to AI
https://www.legalcheek.com/2024/06/half-of-lawyers-want-self-regulation-when-it-comes-to-ai/
Tue, 11 Jun 2024


New report highlights adoption concerns within the profession


Nearly half of UK lawyers believe the legal profession should self-regulate the use of artificial intelligence (AI), a new report has found.

This finding is part of a new report by Thomson Reuters, which examines the UK legal market and key trends affecting its performance.

Researchers found that 48% of lawyers in UK law firms and 50% of in-house lawyers want the legal profession to lead any regulatory programme related to the use of generative AI tools, such as ChatGPT.

Meanwhile, a little over a third (36%) of lawyers at UK firms believe government regulation is necessary, while the proportion of in-house lawyers who share this view is slightly higher at 44%.

In comparison, only 26% of lawyers in the US and Canada support government oversight of AI.

The research also found that over a quarter (27%) of legal professionals reported that their firm or department is using, or planning to use, generative AI — with top uses being document review, legal research, document summarisation, contract drafting and knowledge management.


Interestingly, 38% of UK law firm respondents said they feared clients might object to their use of generative AI tools, despite no UK clients surveyed stating they had specifically requested firms not to use it.

The key barriers to the widespread adoption of AI tools identified include the potential for inaccurate responses, concerns about data security, and the need to comply with relevant laws and regulations.

Last month, Legal Cheek reported that Klarna, the Swedish fintech company providing payment processing services for online businesses, is encouraging its in-house lawyers to use ChatGPT to save time by creating first drafts of common types of contracts.

“The big law firms have had a really great business just from providing templates for common types of contract,” said Selma Bogren, the company’s senior managing legal counsel. “But ChatGPT is even better than a template because you can create something quite bespoke.”

Bogren went on to add that “instead of spending an hour starting a contract from scratch or working from a template,” she “can tweak a ChatGPT draft in about ten minutes.”

Contracts on Monday, machine learning on Tuesday: The future of the LLB
https://www.legalcheek.com/lc-journal-posts/contracts-on-monday-machine-learning-on-tuesday-the-future-of-the-llb/
Tue, 07 May 2024


Université Toulouse Capitole LLM student Sean Doig examines technology’s impact on legal education and training


No profession is immune to the intrusion of disruptive technologies. Inevitably, the legal profession is no exception, and the practice of law and the administration of justice have grown incredibly reliant on technology.

The integration of new legal technologies into legal services is driven by the incentive to provide more efficient, cost-effective and accessible services to clients. Indeed, modern lawyers are implementing paperless offices and “cloud-based practice-management systems, starting up virtual law practices, and fending off challenges from document preparation services like LegalZoom.”

Such profound change has even shaped new specialisms within the legal profession, including the so-called ‘legal technologists’: skilled individuals who can “bridge the gap between law and technology.” While the name suggests a ‘legally-minded coder’, the reality is that the majority of professional legal technologists lack training or experience both in the practice of law and in engineering and technology management.

Legal technology is a lucrative and growing niche, and it is not enough for these professionals to lack experience and knowledge of legal practice if they are to develop sustainable legal technologies that assist the delivery of services to clients.

Indeed, disruptive technologies are constantly evolving, and with the rapid advancement of artificial intelligence (AI) and the metaverse, there is a need for immediate change in the training of the next generation of legal minds. While this sort of fearmongering around obsolete skills and doomed professions is relatively commonplace among CEOs of AI companies, the need for upskilling and adaptability among lawyers has been reiterated by sceptical academics and legal professionals for years.

As early as the 1950s, dictation machines and typewriters changed the working practices of lawyers and legal secretaries. In the 1970s, law firms began using computers and LexisNexis, an online information service, which changed the way legal teams performed research to prepare their cases. One of the better-known ‘doomsayers’ is Richard Susskind, whose boldly — although perhaps rather prematurely — titled book The End of Lawyers was published in 2008, well before the era of ‘Suits’!

Despite Susskind’s earlier predictions of the impending end of lawyers, his subsequent book, Tomorrow’s Lawyers, moves beyond the common view that technology will remove jobs, arguing instead that technology will assist the work of professionals and that more jobs will involve applying technological solutions to produce cost-efficient outcomes. Although technology is developing rapidly to assist professionals, Susskind identifies a lack of enthusiasm among law firms to evolve their traditional practices. Where that enthusiasm does exist, it is normally because AI or other technologies can boost profits and lower operating costs, rather than assist the lawyer and deliver for the client.

The incentive for law firms to incorporate technology into their working practices is purely economic and fear-driven: firms that do not incorporate technology will lose clients to competitors with efficient technological means at their disposal. There is little credible advice as to how firms can effectively alter their business model to integrate technology. After all, the billable hour is the crux of a law firm, and with AI speeding up historically slow and tedious work, its value is diminishing.

Without dwelling too much on the fundamentals of capitalism and its effectiveness as an economic system, it is important to note that technology companies — such as OpenAI and Meta — are mostly funded and motivated by shareholders. The rapid pace of technological development exists to produce results and dividends for those shareholders. For a product to perform well economically, there is a rush to outdo competitors and be disruptive in the market. If successful, the value of the company increases, the value of its shares increases, and the company gains more equity with which to continue to grow.

This means that technology is advancing at a fast rate and outpacing the technical skills of professionals. The cost of new technologies factors in the markup that tech companies seek in order to satisfy their shareholders and fund their research and development (R&D). As Susskind notes, the durability of small law firms will be called into question in the 2020s by the rise of major commercial law firms able to afford competitive new technologies.

What does this mean for law students? New skills are required to enter the new technological workforce, and graduates who possess that skillset will be more in demand than the rest of their cohort. As a result, legal education must equally evolve to adequately prepare law students for working in technological law firms. As Susskind highlights, “law is taught as it was in the 1970s by professors who have little insight into or interest in the changing legal marketplace”, and graduates are ill-prepared for the technological legal work their employers expect of them.


It should be noted that some graduate and postgraduate courses do exist to teach some of the technological skills needed for the new workplace. For example, a simulation is currently in use in a postgraduate professional course, the Diploma in Legal Practice at the Glasgow Graduate School of Law. Nevertheless, the idea here is that the burden should be placed on law schools, and that technological skills should be taught at the earliest stage in order to best prepare graduates for the workplace of tomorrow.

Although it is argued that the original purpose of the LLB is to teach black-letter law, and that skills for legal practice should be left to postgraduate legal training, this neglects those law students who do not wish to pursue traditional postgraduate legal education, opting instead for an alternative career path in law.

In order for the value of an LLB to be upheld, it must adapt to meet the growing demands of the industry it serves. Its sanctity and popularity rest on its ability to be of use to any student seeking the best possible skills and, therefore, prospects in the job market. If the LLB is to survive, it must itself compete with more attractive courses such as computer science, data analysis and engineering. It is not enough for law professors to continue falsely assuming that “students already get it”, or that if graduates work for a law firm then critical technology choices have already been determined, “including case management software, research databases, website design, and policies on client communication.”

Furthermore, firms are “increasingly unwilling to provide training to incoming associates” and seek those graduates who already possess background knowledge. Undoubtedly, technology skills will elevate students’ employability, and those with tech skills will be in high demand by traditional law firms and by tech companies that service the legal industry.

While some law schools have been introducing ‘Legal Technology’ or ‘Law and Technology’ modules into their curriculums, it can be argued that these are insufficient to cover the array of specific skills that need to be taught, focusing merely on the impact of technology on the legal sector. The lack of innovation in law schools is blamed on a lack of imagination on the part of law professors and their institutions, fearful of experimenting with the status quo of their syllabuses. Institutions with the courage to experiment with their curriculum, and to teach skills desirable in the legal market, will attract and better serve a greater number of students for the new world of work.

Perhaps the most elaborate attempt to revolutionise legal education is the theoretical establishment of an MIT School of Law by author Daniel Katz. ‘MIT Law’ would be an institution delivering a polytechnic legal education, focusing on “the intersection of substantive law, process engineering, computer science and artificial intelligence, design thinking, analytics, and entrepreneurship.” The institution would produce a new kind of lawyer: one possessing the necessary skills to thrive in legal practice in the 21st century. With science, technology, engineering and mathematics (STEM) jobs dominating the job market, there is an overlap into the legal market, giving rise to a prerequisite — or functional necessity — for lawyers to have the technical expertise to solve traditional legal problems interwoven with developments in science and technology.

This hypothetical law school may seem far-fetched, but the underlying principle should be adapted to the modern LLB. Indeed, the curriculum should choose its courses based on an evaluation of the future market for legal services, adapting to the disruptive technologies becoming commonplace in the workplace. A hybrid of traditional law courses, such as contract law, with more technical courses, such as machine learning or e-discovery, should become the new normal to ensure the delivery of the best possible LLB of the future. Each course would be carefully evaluated in light of the current and future legal labour market, so that students are given the best possible chances after leaving the institution — whether they go on to postgraduate legal studies or not.

Sean Doig is an LLM student at Université Toulouse Capitole specialising in International Economic Law. He is currently working on his master’s thesis, and displays a particular interest in international law, technology and dispute resolution.

Super regulator to review guidance on lawyer AI training
https://www.legalcheek.com/2024/05/super-regulator-to-review-guidance-on-lawyer-ai-training/
Thu, 02 May 2024


LSB writes to Lord Chancellor and tech minister


The legal profession’s super regulator, the Legal Services Board (LSB), is planning to review its guidance on AI training for lawyers, a newly published letter reveals.

The letter, authored by Richard Orpin, interim chief executive of the LSB, is directed to Technology Minister Michelle Donelan MP and Lord Chancellor Alex Chalk MP, in response to a request from the Department for Science, Innovation and Technology.

“We recognise the increasingly important role that technology, including AI, plays in society and its potential to improve the diversity and reach of legal services,” Orpin writes. He goes on to note that “greater reliance on AI in the production of legal advice has the potential to introduce additional risks to consumers of legal services”.


The letter continues by stating that the LSB is planning to undertake a review of its existing guidance on regulatory arrangements for education and training.

It adds:

“This review is likely to include the consideration of how regulatory frameworks for education and training should reflect what requirements are necessary to ensure that legal professionals are competent in the use of technology, such as AI tools, at both the point of entry into the profession and throughout their careers.”

The letter also notes the role of the individual regulators in managing the future of AI usage. Whilst the existing guidance does not “specifically require” individual regulators to provide their own AI guidance, it does state that “individual regulators are best placed to assess these risks within their regulated communities and put in place mitigation strategies to address them in line with the outcomes in our guidance”.

The Solicitors Regulation Authority published its own guidance back in November last year looking both at the uses of AI in law, and the wide range of potential risks.

‘AI will do the heavy lifting so lawyers can do the heavy thinking’
https://www.legalcheek.com/lc-careers-posts/ai-will-do-the-heavy-lifting-so-lawyers-can-do-the-heavy-thinking/
Tue, 23 Apr 2024


Ahead of his appearance at LegalEdCon 2024 next month, LexisNexis’ Matthew Leopold discusses its latest AI offering and how it will likely impact the legal industry

Matthew Leopold, Head of Brand and Insight at LexisNexis UK

“My specialism is to take a brand and challenge people’s assumptions about it,” says Matthew Leopold, Head of Brand and Insight at LexisNexis UK. “My job is to get people to feel more positive about a brand and engage with it in a different way.” Having built a specialism in brand management at tech companies, Leopold notes ironically that one of the biggest challenges of his role at LexisNexis is adapting a brand that almost every lawyer already knows. “Not only is it well known, but LexisNexis is a brand which lawyers understand and trust, and that’s because of the wealth of legal content that underpins the technology.”

Having evolved from a single database created by John Horty in 1956, LexisNexis has moved away from traditional publishing by becoming a global player in legal technology. Leopold is keen to stress the company’s tech-y credentials. “We ultimately create solutions that provide the right legal content at the right time; our technology helps people to find the diamond in the haystack of content,” says Leopold who will be speaking at LegalEdCon 2024 in London on 16 May. On the rewards of his role, he says, “it’s really interesting to be able to create cutting edge legal technology with this underlying, incredibly valuable, exceptionally well-trusted legal content.”

In response to developments in generative AI technology, LexisNexis developed and launched its own AI tool, Lexis+AI, at the end of 2023. With the tool becoming available in the UK in the coming weeks, Legal Cheek Careers was keen to ask Leopold about its key features. “At launch, there are four main features that Lexis+AI is going to offer,” he tells us. “First is the conversational search feature. Imagine that you have a really knowledgeable associate sat at the desk next to you, and you can ask them a legal question and get a legal answer in response, pointing you in the direction of all the relevant information.” He continues, “the conversational aspect of the search means you can clarify and ask a follow-up question, to which Lexis+AI responds and refines its answers.”


Explaining the benefits of this feature for legal research, Leopold says that “the sorts of conversations that you would usually have with a human, you can now have with AI — and in this context, it allows you to really mine the depths of the law.” Grounded in the already expansive LexisNexis legal database, Lexis+AI can link you directly to relevant precedents, case law and practice notes within seconds. Leopold explains that this is key to reducing AI ‘hallucinations’ — circumstances where AI models produce nonsensical, falsified information. “We can minimise hallucinations as much as possible,” he says, “however, linking directly to the content means that lawyers and students can quickly evaluate AI answers with their own eyes”.

The second key feature of this potentially industry-altering tech is its summarisation capability. Leopold notes that “public access AI tools, such as ChatGPT, are not legally trained. They don’t understand the legal use-case for what it’s doing.” The difference with Lexis+AI is that “rather than producing a summary of a case, it presents a case digest which includes jurisdiction, key material facts, controlling law and more.”


Lexis+AI also boasts drafting capabilities and the ability to upload your own documents for review. “Lexis+AI can help you draft clauses, form arguments, and create letters to clients,” Leopold explains. By integrating the features of the technology, both lawyers and students are able to extract information through conversational search, and can then prompt Lexis+AI to use this information to create legal arguments or letters. “It’s important to emphasise that this will result in a first draft,” Leopold stresses. “We do not proclaim that this is going to be the end result. You would always expect a senior to review the work of a junior before it goes to a client. The same is true with AI-generated content.”

In that vein, we ask Leopold how he envisions the future of the legal industry with the introduction of generative AI, and where the boundaries between the lawyers and computers really lie. “AI is the next big frontier,” he says. “There is no avoiding it; it’s a matter of when, not if. There are going to be fundamental changes to the legal market. Take the good old billable hour. It is going to change. In a world where technology can do the heavy lifting of legal research in a couple of seconds, the whole idea of charging by the hour becomes difficult to justify.” He predicts that we’re likely going to see an evolution towards value-based pricing in law firms, and more innovative fee structures, as firms transform with the implementation of AI.

“Historically, the legal industry has been a slow adopter of technology,” says Leopold, “This is the first piece of technology that is truly challenging the status quo. Law firms are now considering what this means for their core business and the skills that the lawyers of tomorrow will require. There is a very exciting and busy future ahead for lawyers and the whole legal industry.”

LegalEdCon 2024: Final release tickets on sale now

Following the idea that AI is paving the way for some dramatic shifts in the legal industry, we’re keen to hear Leopold’s thoughts on the differences between the role of a lawyer and the role of AI in legal research. “I think that both are the future, and that one can’t really exist without the other,” he says. “We are very clear that Lexis+AI is not created to replace a lawyer. Lawyers need to still be in the loop because they can identify legal context, and other concepts which cannot be trained into an AI model.” Similar issues are raised, Leopold continues, when one considers the human aspect of legal work, requiring negotiation skills, teamwork and often empathy. Ultimately, AI’s ability to reduce manual, administrative legal tasks is huge, leaving lawyers to focus on problem solving, according to Leopold. “AI will do the heavy lifting so that the lawyer can do the heavy thinking.”

Matthew Leopold, Head of Brand and Insight at LexisNexis UK, will be speaking at LegalEdCon 2024, Legal Cheek’s annual future of legal education and training conference, which takes place in-person on Thursday 16 May at Kings Place, London. Final release tickets for the Conference can be purchased here.

Find out more about LexisNexis UK

About Legal Cheek Careers posts.

The post ‘AI will do the heavy lifting so lawyers can do the heavy thinking’ appeared first on Legal Cheek.

]]>
Google invests £9.5 million in London law firm behind ‘AI paralegal’ which passed the SQE https://www.legalcheek.com/2024/04/google-invests-9-5-million-in-london-law-firm-behind-ai-paralegal-which-passed-the-sqe/ https://www.legalcheek.com/2024/04/google-invests-9-5-million-in-london-law-firm-behind-ai-paralegal-which-passed-the-sqe/#comments Thu, 18 Apr 2024 07:47:27 +0000 https://www.legalcheek.com/?p=204011 'Lawrence' scored 74% on mock test

The post Google invests £9.5 million in London law firm behind ‘AI paralegal’ which passed the SQE appeared first on Legal Cheek.

]]>

‘Lawrence’ scored 74% on mock test


The London law firm that created an ‘AI paralegal’ capable of passing part one of the Solicitors Qualifying Exam (SQE) has received £9.5 million of investment from Google.

Lawhive, founded in 2021, hit headlines last November when its AI-powered paralegal successfully completed SQE1, scoring 74% on the multiple choice sample questions available on the Solicitors Regulation Authority’s website.

The bot, dubbed ‘Lawrence’, successfully answered 67 of the 90 multiple choice sample questions, despite struggling with what the firm said were issues with “complex chains of logic and wider context”.

Jump forward a few months and Google Ventures, the venture capital arm of Google’s parent company Alphabet, has pumped £9.5 million into the firm.

The 2024 Legal Cheek Firms Most List

Commenting on the need for AI in law, chief executive of Lawhive, Pierre Proner, said:

“The consumer legal market is totally broken and hasn’t really had an update in hundreds of years. It came out of personal experiences of really battling an airline while trying to get money back during Covid, and feeling totally cut out of the legal system. We went to some high street firms to see if they could help and it was far more expensive than was justified to pay.”

Lawrence, he says, keeps lawyers away from “repetitive legal work”, helping clients find cheaper solutions, whilst ensuring that they’re “not getting an AI chatbot, they are getting a real human lawyer working with them”.

On the new investment, Vidu Shanmugarajah, a partner at Google Ventures, said: “Lawhive not only dramatically improves legal workflows but also makes high quality legal advice more accessible and affordable to a broader audience.”

Earlier this month, one of the UK’s top judges, Lord Justice Birss, noted how “AI used properly has the potential to enhance the work of lawyers and judges enormously”. “It will democratise legal help for unrepresented people” he said, adding that “it can and should be a force for good.”

The post Google invests £9.5 million in London law firm behind ‘AI paralegal’ which passed the SQE appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2024/04/google-invests-9-5-million-in-london-law-firm-behind-ai-paralegal-which-passed-the-sqe/feed/ 11
AI ‘can be a force for good’, says top judge https://www.legalcheek.com/2024/04/ai-can-be-a-force-for-good-says-top-judge/ https://www.legalcheek.com/2024/04/ai-can-be-a-force-for-good-says-top-judge/#comments Wed, 03 Apr 2024 12:37:31 +0000 https://www.legalcheek.com/?p=203225 Help those who can't afford lawyers

The post AI ‘can be a force for good’, says top judge appeared first on Legal Cheek.

]]>

Help those who can’t afford lawyers


One of the UK’s top judges has said that artificial intelligence (AI) “can and should be a force for good” within the justice system.

Delivering his ‘Future Visions of Justice’ speech last week at King’s College London Law School, Lord Justice Birss outlined a range of modern tech already in use within the justice system, whilst also offering his thoughts on the future and the role that AI may have.

“My own view is that AI used properly has the potential to enhance the work of lawyers and judges enormously,” he said, before adding, “I think it will democratise legal help for unrepresented people.”

“I think it can and should be a force for good. And I think it will be as long as it is done properly and appropriately,” Birss LJ continued.

Three key uses of AI were identified by the metallurgy and materials sciences graduate.

First, already mentioned, is the potential for AI to democratise legal advice and assistance. Whilst it may not be entirely accurate or foolproof, studies suggest it is more useful and accurate than an internet search, with the potential for further progression, Birss LJ noted.

Alongside this, there are also uses in providing case summaries to judges, which again do not require complete accuracy, but instead offer an initial outline and guide, and in large scale document review.

The 2024 Legal Cheek Firms Most List

A number of law firms are already using AI tools for these purposes, with both Macfarlanes and Allen & Overy employing AI bot ‘Harvey’ to assist their lawyers.

The growth in AI won’t necessarily mark the end for lawyers, however. “The fact that something can be done does not always mean it should be done,” Birss LJ continued. “When one thinks about the rule of law and access to justice, a critical aspect is public trust in the legal system itself. I can’t imagine a legal system which does not have people at its heart as key representatives and decision makers.”

Elsewhere in his speech the top judge noted a range of other modern tech already in use within the justice system, most notably automatic algorithms in certain civil debt claims. Whilst the old system gave an algorithm for people to apply in particular cases to determine payment plans, the new system does the same thing, only automatically and digitally.

“The safeguard was and remains that anyone dissatisfied with the order made as a result of applying the formula is entitled to apply to a judge,” he said. “As far as I know this has caused no difficulty of any sort and attracted very little comment.”

There has also been a shift to paperless cases, which, despite some “initial minor teething troubles of a technical nature”, has worked “extremely well”, and digital work allocation for judges.

The post AI ‘can be a force for good’, says top judge appeared first on Legal Cheek.

]]>
https://www.legalcheek.com/2024/04/ai-can-be-a-force-for-good-says-top-judge/feed/ 1
EU parliament approves groundbreaking regulations on AI https://www.legalcheek.com/2024/03/eu-parliament-approves-groundbreaking-regulations-on-ai/ Fri, 15 Mar 2024 06:16:41 +0000 https://www.legalcheek.com/?p=202525 Potential framework for UK to follow

The post EU parliament approves groundbreaking regulations on AI appeared first on Legal Cheek.

]]>

Potential framework for UK to follow


The European Parliament has endorsed a new legal framework concerning the use of artificial intelligence (AI), creating a potential pathway for the UK to follow.

After MEPs voted overwhelmingly in favour of the new laws earlier this week, 523 to 46, certain AI uses look set to be banned, with permitted tech to come under varying levels of scrutiny.

“All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour,” an article on the European Commission’s site reads.

For those programmes and uses that fall below this level, the legislation plans to divide AI into three tiers: high risk, limited risk, and minimal risk.

The high risk category includes AI used in critical infrastructure, education, safety components, employment processes, law enforcement, migration, and the administration of justice. For AI to be permitted in these areas it must first meet a number of obligations focussed on risk assessment and mitigation, security, and human oversight.

The 2024 Legal Cheek Firms Most List

Limited risk focuses on the dangers associated with a lack of transparency in AI usage. These uses will again be subject to obligations, albeit less stringent and focussed on ensuring that people are aware of the use of AI, for example with the labelling of AI chatbots and AI generated text, video, and images.

Those uses considered minimal risk will be unaffected. This category, the Commission says, covers “the vast majority of AI systems currently used in the EU”, and includes AI video games and spam filters.

This comprehensive framework would be the first of its kind, and could provide a pathway for the UK to follow.

Earlier this week the Solicitors Regulation Authority issued a warning to lawyers over the potential fabrication of deepfake clients using AI. This followed previous guidance highlighting the benefits and risks of AI given to solicitors, barristers, and judges over recent months.

The post EU parliament approves groundbreaking regulations on AI appeared first on Legal Cheek.

]]>
Beware of ‘deepfake’ clients, regulator warns lawyers https://www.legalcheek.com/2024/03/beware-of-deepfake-clients-regulator-warns-lawyers/ Wed, 13 Mar 2024 07:53:16 +0000 https://www.legalcheek.com/?p=202411 Concerns over money laundering and terrorist financing

The post Beware of ‘deepfake’ clients, regulator warns lawyers appeared first on Legal Cheek.

]]>

Concerns over money laundering and terrorist financing


The Solicitors Regulation Authority (SRA) has issued a new warning about the risk posed by artificial intelligence (AI) to the legal profession in the form of ‘deepfake’ technology.

As part of their regular risk assessments for anti-money laundering and terrorist financing, the SRA has highlighted the potential risks of deepfake technology alongside other emerging and existing issues.

“Not meeting a client face-to-face can increase the risk of identity fraud and without suitable mitigation such as robust identity verification may help facilitate anonymity,” the warning states.

Whilst “not meeting face-to-face may make sense in the context of a given transaction or wider context… where clients appear unnecessarily reluctant or evasive about meeting in person, you should consider whether this is a cause for concern.”

The 2024 Legal Cheek Firms Most List

Firms are also told to be aware of the use of AI to create so-called ‘deepfakes’, which can impersonate a real person’s appearance convincingly.

“This increases the risk of relying on video calls to identify and verify your client. If you only meet clients remotely, you should understand whether your electronic due diligence protects you against this, or to explore software solutions to assist in detecting deepfakes,” the SRA adds.

In a speech last week the second most senior judge in England and Wales, Sir Geoffrey Vos, highlighted the continued growth of AI in the legal profession, and its potential for further expansion.

“One may ask rhetorically whether lawyers and others in a range of professional services will be able to show that they have used reasonable skill, care and diligence to protect their clients’ interests if they fail to use available AI programmes that would be better, quicker and cheaper,” Vos said.

Noting also the potential use of tech in judicial decisions, he added:

“I will leave over the question of whether AI is likely to be used for any kind of judicial decision-making. All I would say is that, when automated decision-making is being used in many other fields, it may not be long before parties will be asking why routine decisions cannot be made more quickly, and subject to a right of appeal to a human judge, by a machine. We shall see.”

Last month Shoosmiths became one of the first law firms to offer guidance to students on the use of AI when making training contract and vacation scheme applications.

The post Beware of ‘deepfake’ clients, regulator warns lawyers appeared first on Legal Cheek.

]]>
1 in 5 students use AI to help with training contract and pupillage apps https://www.legalcheek.com/2024/02/1-in-5-students-use-ai-to-help-with-training-contract-and-pupillage-apps/ Wed, 28 Feb 2024 08:00:00 +0000 https://www.legalcheek.com/?p=202472 Flash poll

The post 1 in 5 students use AI to help with training contract and pupillage apps appeared first on Legal Cheek.

]]>

Legal Cheek flash poll


One in five students say they have used artificial intelligence (AI) to assist with their training contract, vacation scheme and pupillage applications.

A recent Legal Cheek poll asked students on LinkedIn whether they had used AI platforms such as ChatGPT to help write applications for their dream legal roles. Of the 1,303 who responded, 268 (21%) confirmed that they had done so.

Last week Shoosmiths became one of the first firms to issue guidance on how to use AI when writing applications. Whilst the advice notes that there are a range of potential uses, namely in aiding time management, proofreading, and suggesting amendments to draft answers, the firm were clear that AI should not replace original and unique perspectives.

At the bar, meanwhile, the most recent pupillage recruitment cycle saw candidates applying through the Pupillage Gateway required to confirm that they had not used AI to write their answers.

Applications must be the “sole creation and original work” of the budding barrister, the statement read, with students required to accept that, “any application which has been written with the use of any generative AI LLMs like ChatGPT” will be excluded from the shortlisting process.

Whilst AI can be used in a range of ways to boost productivity and improve existing text, students should be aware of the risks of over-reliance on imperfect tech. Last year two US lawyers were fined for relying on cases fabricated by ChatGPT, whilst a litigant in person in the UK received a telling off for citing nine fake authorities to a tax tribunal.

Over the past few months solicitors, barristers, and judges have all received guidance on the use of AI in their respective practices, with over a quarter of lawyers saying that they use AI on a regular basis.

The post 1 in 5 students use AI to help with training contract and pupillage apps appeared first on Legal Cheek.

]]>
How will AI impact junior lawyers? https://www.legalcheek.com/2024/01/could-robots-replace-junior-lawyers-2/ Fri, 05 Jan 2024 08:50:52 +0000 https://www.legalcheek.com/?p=199513 Solicitor Baljinder Singh Atwal examines some of the main concerns

The post How will AI impact junior lawyers? appeared first on Legal Cheek.

]]>

Solicitor Baljinder Singh Atwal examines some of the main concerns


For anyone sceptical of new technology, this is for you.

My previous article explored some of the exciting possibilities of AI within law. From high tech legal research to extremely accurate executive summaries and public databases that could save us a vast amount of time and energy. With the gentle encouragement of the comments section, I now examine some of the main concerns and drawbacks with AI in the workplace.

Trust

With any new technology or method, there will be some reluctance and hesitation around fully implementing it in legal practice. Only once we have some of the most trusted brands and organisations leading the way with AI, do I think it will truly catch on. Consider what happened with virtual events and working during the pandemic — these transformed from tools rarely utilised, to a core practice that has reconceptualised the way we work. Similarly, once the trust is established with AI tools through a process of trying and testing, these are likely to change things forever.

Technology gap

As some organisations and law firms embrace technology quicker than others, we may see a gap in technology which will impact competition, clients, recruitment, retention and more. The improvement of small processes across large organisations will create streamlined work practices: from filling in a form to drafting an email, to legal research and creating a presentation. Through the pandemic, this was also seen where some legal teams adapted very quickly to being able to work remotely, signing documents electronically and conducting meetings with several people in different locations.

Data protection

The use and implementation of AI will often utilise large data sets which may have access to or include very sensitive information. The initial link between data and AI will need to be considered carefully as the growth of technology will inevitably increase cyber attacks, hacking attempts, fraud and more. In an increasingly connected world, international data transfers, privacy and storage will all need to be assessed.

Skills

As we become more reliant on technology and give AI the responsibility of basic admin tasks within the workplace, we may see some skills and practices slowly erode. Some similar examples can be seen through electronic maps and GPS when driving from the traditional skill set of map reading (if anyone reading this is old enough to remember an A to Z). Closer to the office environment was the transition from the written letter to an email. I think that the full implementation of AI at its best may heavily impact: basic legal research, minute/note taking, marketing/branding, preparing first drafts of documents and recruitment methods.

With the new year having been ushered in, it will be interesting to see how AI will change the legal profession and certain sectors. The pandemic brought virtual and remote working into everyone’s working life very quickly. Our knowledge and understanding of how powerful technology can be may give us that encouragement to implement AI quicker despite the above.

Baljinder Singh Atwal is an in-house solicitor at West Midlands Police specialising in commercial and property matters. He is co-chair of the Birmingham Solicitors’ Group and a council member at The Law Society representing junior lawyers nationally.

The post How will AI impact junior lawyers? appeared first on Legal Cheek.

]]>
How one engineer is helping lawyers build robots https://www.legalcheek.com/lc-careers-posts/how-one-engineer-is-helping-lawyers-build-robots/ Tue, 12 Dec 2023 10:16:30 +0000 https://www.legalcheek.com/?post_type=lc-careers-posts&p=198594 Pinsent Masons technical lead talks all things AI

The post How one engineer is helping lawyers build robots appeared first on Legal Cheek.

]]>

Pinsent Masons technical lead talks all things AI

“I studied engineering at university and decided I didn’t want to be an engineer – but I liked being a student, so I went back and did a Master’s degree. This was all around the time that the internet showed up on the scene, so I knew I wanted to do something related to technology”, recounts Jason Barnes, low code development technical lead at Pinsent Masons.

Barnes joined the firm before it became Pinsent Masons, originally planning to work for a year or two as he thought up an idea for a PhD. But starting off in a general IT position, he was soon able to get involved in designing databases, often for niche legal work, something he found to be “quite good fun”. Subsequently, web applications came along, and that provided another avenue of interest. “I knew straightaway that this was what interested me, so I effectively became a web application developer. We started building web applications for clients and lawyers, and were met with a good degree of success, so we did more work and our team grew”, says Barnes.

Re-evaluating his career trajectory some years later, Barnes decided to move away from a full-on development role to explore the product management side of things. “In short, this involved looking at how software solutions can be implemented to make things easier for people and businesses”. But missing the creativity of being a developer, Barnes started to get involved with no code/low code tools, such as the Microsoft Power platforms, which came on to the market in a big way a couple of years ago. “I got quite excited with these and was convinced that this was a significant technology direction for us as a firm. Nobody else was spearheading this within Pinsent Masons, so I decided to — now I head up our low code team and am back to being a developer!”

The application deadline for Pinsent Masons’ 2024 Vacation Scheme is 13 December 2023

Responding to a question about his day-to-day, Barnes chuckles, saying, “most of the time I have to be stopped — I really do like my job!” He explains that low code tools are designed for non-developers to use and build applications.

“At professional firms, you’ve got, say, a large mass of lawyers who are lawyering and need solutions to help them do this. Now, you can go out to the market to buy these solutions, but for bigger, innovative firms, you want to do this yourself, so you can build exactly what you want. Now, a law firm will only have a certain number of developers, and even they can only do so much when everyone at the firm has an idea they want to see developed. Low code tools can step in and help those people with the ideas to do the development themselves, without having to wait for the developers to do it. So essentially, we’ve got lawyers building robots, although they might not always realise that that’s what they’re doing”.

Barnes sees this as a form of empowerment — with low code tools allowing lawyers to take charge of automating processes and eliminating the frustration of having to wait around for developers to take charge. He does point out, however, that while one can achieve quite a lot with these tools, there’s still some elements that are difficult to navigate, which is where his job steps in, as technical lead of low code development. “We’re there to help the people who have the ideas turn their ideas into solutions”, he summarises.

What’s the typical process through which AI is developed at a law firm like Pinsent Masons? “Despite having worked in technology my whole life, I still always start things off with a pen and paper. If you can’t draw what you want to build, then you’re not going to be able to build it”, he responds. Barnes also notes that while lawyers are usually great at articulating what it is they want to build, representing this in a diagrammatic form is often challenging. “However, this is at the core of the developer mindset, so we can help with that”, he explains.

Barnes also speaks about the main challenges posed by artificial intelligence (AI) in the legal industry, and he points out that “very few people have a clear understanding of what we mean when we talk about AI”. “With the vast majority of people building their views on what they see in the media, most exposure is to generative AI, such as ChatGPT — but that’s only one part of what AI actually is. When I talked earlier about a lawyer having an idea to automate a process, that’s also AI. It’s a computer system doing what a human would normally do. So, one of the challenges is really understanding what it is we’re talking about in the first place”, he explains.

“One of the things at the forefront of everyone’s mind is the protection of client data”, Barnes continues, on the topic of challenges associated with AI in law. “As law firms, nothing matters more than the integrity of our clients’ data — everybody is conscious of the risk of having large language models trained on data sets comprised of client data without prior client agreement”, he notes. On the flipside, Barnes notes that the greatest opportunity for AI in the legal industry is in reimagining the everyday, and taking the monotonous tasks off lawyers’ hands, so that they are freed up to tap into their human intelligence, to provide better legal services for clients. He offers up document extraction as a tangible example of where AI can have application.

Approaching the end of our conversation, Barnes offers his views on the ‘are lawyers going to be replaced by robots’ debate. “Part of me thinks, well yeah”, he laughs. “A lot of the work I do is around innovation, driving up quality and lowering the cost base. So, taking that to its logical conclusion, we could be looking at a world where we do things artificially across all industries and save a lot of money. But I don’t think anyone wants that”, observes Barnes. While quantifying things in terms of processes and diagrams is easy, and might foretell an automated future, he notes that this ignores the human element of interpersonal relationships which is crucial in the legal space. “I don’t think legal work can be reduced to a collection of ones and zeros”, he concludes.

The application deadline for Pinsent Masons’ 2024 Vacation Scheme is 13 December 2023

About Legal Cheek Careers posts.

The post How one engineer is helping lawyers build robots appeared first on Legal Cheek.

]]>
Could you be fired by a robot – and would UK anti-discrimination law protect you? https://www.legalcheek.com/lc-journal-posts/could-you-be-fired-by-a-robot-and-would-uk-anti-discrimination-law-protect-you/ https://www.legalcheek.com/lc-journal-posts/could-you-be-fired-by-a-robot-and-would-uk-anti-discrimination-law-protect-you/#comments Thu, 30 Nov 2023 07:49:14 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=197920 Cambridge Uni law grad Puja Patel analyses whether current anti-discrimination laws are fit for purpose in the wake of AI

The post Could you be fired by a robot – and would UK anti-discrimination law protect you? appeared first on Legal Cheek.

]]>

Puja Patel, University of Cambridge law graduate, offers an analysis into whether the UK’s current anti-discrimination laws are fit for purpose in the wake of AI


Imagine if popular BBC TV series, The Apprentice, had a robot instead of Lord Sugar sitting in the boardroom, pointing the finger and saying ‘you’re fired.’ Seems ridiculous, doesn’t it?

Whilst robots may not be the ones to point the finger, more and more important workplace decisions are being made by artificial intelligence (‘AI’) in a process called algorithmic decision-making (‘ADM’). Indeed, 68% of large UK companies had adopted at least one form of AI by January 2022 and as of April 2023, 92% of UK employers aim to increase their use of AI in HR within the next 12-18 months.

Put simply, ADM works as follows: the AI system is fed vast amounts of data (‘training data’), from which it models its perception of the world by drawing correlations between data points and outcomes. These correlations then inform decisions made by the algorithm.

At first glance, this seems like the antithesis of prejudice. Surely a ‘neutral’ algorithm which relies only upon data would not discriminate against individuals?

Sadly, it would. Like an avid football fan who notices that England only scores when they are in the bathroom and subsequently selflessly spends every match on the toilet, ADM frequently conflates correlation with causation. Whilst a human being would recognise that criteria such as your favourite colour or your race are discriminatory and irrelevant to the question of recruitment, an algorithm would not. Therefore, whilst algorithms do not directly discriminate in the same way that a prejudiced human would, they frequently perpetrate indirect discrimination.
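The proxy effect described above can be sketched in a few lines of Python. This is a deliberately toy illustration with invented data (not any real recruiter’s system): a keyword scorer ‘trained’ on historical hiring outcomes ends up penalising a gendered word purely because it correlated with past rejections — correlation treated as causation.

```python
from collections import Counter

# Hypothetical historical hiring data: CV keywords and outcome.
# Invented for illustration only.
history = [
    (["engineer", "python", "chess club"], "hired"),
    (["engineer", "java", "football"], "hired"),
    (["engineer", "python", "women's chess club"], "rejected"),
    (["analyst", "excel", "women's society"], "rejected"),
]

# "Training": score each word by how often it co-occurs with being hired.
scores = Counter()
for words, outcome in history:
    for phrase in words:
        for token in phrase.split():
            scores[token] += 1 if outcome == "hired" else -1

def screen(cv_words):
    # The model just sums learned word scores -- pure correlation,
    # with no notion of whether a word is relevant to the job.
    total = sum(scores[t] for phrase in cv_words for t in phrase.split())
    return "invite" if total > 0 else "reject"

# Two otherwise identical CVs, differing only in one gendered word:
print(screen(["engineer", "python", "chess club"]))          # invite
print(screen(["engineer", "python", "women's chess club"]))  # reject
```

No rule here ever mentions sex, yet the second CV is rejected because the token “women’s” acquired a negative weight from the (biased) history — the same mechanism reported in the Amazon example below, and the reason such systems discriminate indirectly rather than directly.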

Unfortunately, this has already occurred in real life — both Amazon and Uber have famously faced backlash for their allegedly indirectly discriminatory algorithms. According to a Reuters report, members of Amazon’s team disclosed that Amazon’s recruitment algorithm (which has since been removed from Amazon’s recruitment processes) taught itself that male candidates were preferable. The algorithm’s training data, according to the Reuters report, comprised resumes submitted to Amazon over a 10-year period, most of which came from men; accordingly, the algorithm drew a correlation between male CVs and successful candidates and so filtered CVs that contained the word ‘women’ out of the recruitment process. The Reuters report states that Amazon did not respond to these claims, other than to say that the tool ‘was never used by Amazon recruiters to evaluate candidates’, although Amazon did not deny that recruiters looked at the algorithm’s recommendations.

Want to write for the Legal Cheek Journal?

Find out more

Similarly, Uber’s use of Microsoft’s facial recognition algorithm to ID drivers allegedly failed to recognise approximately 20% of darker-skinned female faces and 5% of darker-skinned male faces, according to IWGB union research, resulting in the alleged deactivation of these drivers’ accounts and the beginning of a lawsuit which will unfold in UK courts over the months to come. Microsoft declined to comment on ongoing legal proceedings whilst Uber says that their algorithm is subject to ‘robust human review’.

Would UK anti-discrimination law protect you?

Section 19 of the Equality Act (‘EA’) 2010  governs indirect discrimination law. In simple terms, s.19 EA means that it is illegal for workplaces to implement universal policies which seem neutral but in reality disadvantage a certain protected group.

For example, if a workplace wanted to ban employees from wearing headgear, this would disadvantage Muslim, Jewish and Sikh employees, even though the ban applied to everyone – this would therefore be indirectly discriminatory, and unless the workplace could prove this was a proportionate means of achieving a legitimate aim, they would be in breach of s.19 EA.

But here’s the catch. The EA only applies to claimants from a ‘protected group’, which is an exhaustive list set out at s.4 EA: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation.

The Amazon and Uber claimants fall into the protected categories of ‘sex’ and ‘race’ respectively. Therefore, the EA will protect them – in theory. In reality, it is very difficult to succeed in a claim against AI, as the claimants are required by the EA to causally connect the criteria applied by the algorithm with the subsequent disadvantage (e.g. being fired). It is often impossible for claimants to ascertain the exact criteria applied by the algorithm; even in the unlikely event that the employer assists, the employer themselves is rarely able to access this information. Indeed, the many correlations algorithms draw between vast data sets mean that an algorithm’s inner workings are akin to an ‘artificial neural network’. Therefore, even protected group claimants will struggle to access the EA’s protection in the context of ADM.

Claimants who are discriminated against for the possession of intersectional protected characteristics (e.g. for being an Indian woman) are not protected as claimants must prove that the discrimination occurred due to one protected characteristic alone (e.g. solely due to either being Indian or a woman). ‘Intersectional groups’ are therefore insufficiently protected despite being doubly at risk of discrimination.

And what about the people who are randomly and opaquely grouped together by the algorithm? If the algorithm draws a correlation between blonde employees and high performance scores, and subsequently recommends that non-blonde employees are not promoted, how are these non-blonde claimants to be protected? ‘Hair colour’ is not a protected characteristic listed in s.4 EA.

And perhaps most worryingly of all — what about those individuals who do not know they have been discriminated against by targeted advertising? If a company uses AI for online advertising of a STEM job, the algorithm is more likely to show the advert to men than women. A key problem arises — women cannot know about an advert they have never seen. Even if they find out, they are highly unlikely to collect enough data to prove group disadvantage, as required by s.19 EA.

So, ultimately – no, the EA is unlikely to protect you.

Looking to the future

It is therefore evident that specific AI legislation is needed — and fast. Despite this, the UK Government’s AI White Paper confirms that they currently have no intention of enacting AI-specific legislation. This is extremely worrying; the UK Government’s desire to facilitate AI innovation unencumbered by regulation is unspeakably destructive to our fundamental rights. It is to be hoped that, following in the footsteps of the EU AI Act and pursuant to the recommendations of a Private Member’s Bill, Parliament will be inclined to at least adopt a ‘sliding-scale approach’ whereby high-risk uses of AI (e.g. dismissals) will entail heavier regulation, and low-risk uses of AI (e.g. choosing locations for meetings with clients) will attract lower regulation. This approach would safeguard fundamental rights without sacrificing AI innovation.
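For illustration only, the ‘sliding-scale approach’ can be sketched as a simple lookup from use case to regulatory tier. The tiers, use cases and obligations below are invented for the example — they are not drawn from the EU AI Act, the Private Member’s Bill or any other instrument:

```python
# Invented tiers and obligations, illustrating a 'sliding-scale' approach:
# higher-risk uses of AI attract heavier regulatory obligations.
RISK_TIERS = {
    "dismissal_decisions": "high",
    "recruitment_screening": "high",
    "meeting_location_suggestions": "low",
}

OBLIGATIONS = {
    "high": ["human oversight", "bias audit", "explainability report"],
    "low": ["transparency notice"],
}

def obligations_for(use_case):
    # Default unknown uses to the cautious end of the scale.
    tier = RISK_TIERS.get(use_case, "high")
    return OBLIGATIONS[tier]

print(obligations_for("dismissal_decisions"))
print(obligations_for("meeting_location_suggestions"))
```

The design point is simply that regulation scales with risk: a dismissal decision triggers the full set of obligations, while a low-stakes scheduling tool attracts only a light-touch duty.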

Puja Patel is a law graduate from the University of Cambridge and has completed her LPC LLM. She is soon to start a training contract at Penningtons Manches Cooper’s London office. 

The post Could you be fired by a robot – and would UK anti-discrimination law protect you? appeared first on Legal Cheek.

How I help clients navigate the world of AI https://www.legalcheek.com/lc-careers-posts/how-i-help-clients-navigate-the-world-of-ai/ Tue, 17 Oct 2023 07:48:42 +0000 https://www.legalcheek.com/?post_type=lc-careers-posts&p=195449 Bird & Bird senior associate Will Bryson discusses his work in the firm’s tech transactions team

The post How I help clients navigate the world of AI appeared first on Legal Cheek.



“I really enjoy negotiating contracts that everyone is happy with,” says Will Bryson, senior associate in Bird & Bird’s tech transactions team. Having initially flirted with the idea of being an IP lawyer, Bryson quickly understood that he preferred the commercial tech space. “For the past few years, we have really been talking about artificial intelligence (AI) for its transformational impact on society. I am very passionate about solving legal challenges that lie at the heart of this change.”

As part of his role in the tech transactions team, Bryson often helps businesses looking to buy technology products. “Our clients do not always understand the complexities of the technological tools that they are acquiring and deploying. This is where tech lawyers step in,” he says.

Applications for Bird & Bird’s 2024 Spring and Summer Vacation Schemes are now open

For example, Bryson’s team recently helped a large consumer goods company in buying an Internet of Things (IoT) platform. Essentially, the client wanted to acquire the full tech stack that would underpin their software across all devices. At the end of this successful deployment, the client’s goal was to improve their functionality via a collective network of devices.

“Our role as lawyers in such transactions can often come in multiple capacities,” Bryson explains. “Often the clients want to adopt AI tools but are worried about their risks. They are unsure of what they can and cannot use the AI for. We help them understand the licence and use terms, and work with them to understand the risk profile of the asset. This enables them to make an informed decision about internal use of the tool,” he says. A lot of Bryson’s clients have successfully used generative AI tools for various business functions, including things like marketing and branding.

Once the client decides to incorporate AI within their businesses, they might seek support to procure these tools from suppliers. “For such clients, we are involved in the procurement of relevant technologies. To this end, we would typically negotiate technology contracts between the buyers and sellers, making sure the terms work for our clients,” says Bryson.

Legal challenges with AI

But the negotiation of these technology contracts is far from simple, according to Bryson. There are a plethora of legal issues cutting across different areas of law.

“One of the primary issues is that of fault attribution — i.e., who takes the blame when things go wrong?” he says. “Generative AI tools like ChatGPT often tend to hallucinate, meaning that they can produce inaccurate or illogical results. The main question that we as lawyers drafting these contracts face is how to apportion risk between the parties should such events happen. We consider how much pressure can be put on suppliers in terms of warranties and obligations if their AI makes mistakes.”

Another issue around AI fallibility is that of ‘causation’, or tracing the reason behind the technical glitch. Bryson explains this further: “Effectively these models are black boxes. They train on vast amounts of data that will enable AI to make its decisions and predictions, but you cannot tie a particular outcome with a particular input or dataset. Who should accept fault, when no one really knows what caused the problem, is a thorny issue in contractual drafting.”

Applications for Bird & Bird’s 2026 Training Contract are now open

Using ChatGPT wisely

Problem areas do not end here. Generative AI is often questioned from an intellectual property (IP) perspective too. “There are big questions around ownership of the outputs of generative AI systems,” says Bryson. “Parties are used to using contracts to allocate ownership of intellectual property rights in outputs from a service, but where those outputs are created by an AI there may not be any IP to own! If there is nothing to own, what protections should be built into the contract for you is another question that we address for our clients.”

Amidst concerns around data privacy and confidentiality, Bryson is quite hopeful about the future of such tools. “Lawyers have been putting our data into computers and IT systems for decades, so this is not a novel problem at all. It’s about whether we are conducting this exercise in a safe manner,” he says. “There are concerns as to whether the data you feed into the system is being re-used (for example, for further training the system) and so could be disclosed to third parties. Providers of AI solutions clearly recognise this concern and many versions now allow you to ‘opt out’ from your data being reused. This should hopefully take care of some of these confidentiality-related concerns.”

Commercial awareness and careers advice

Ahead of his appearance at this afternoon’s Legal Cheek event, Bryson also shares his top tips for students interested in the tech space. “Boosting your commercial awareness is a great way to demonstrate your interest in this area,” Bryson says. “I would encourage students to follow news publications around technology as the landscape changes very quickly. I have built some news reading time into my daily schedule, where I read from sources like Ars Technica, the Financial Times and Wired magazine. Newsletters like that of Benedict Evans are also a great place to follow interesting trends.”

Alongside developing commercial awareness, Bryson also advises students to be passionate about the field. “Eventually, your enthusiasm is going to shine through at assessment centres,” he says. “Firms love to see candidates who have researched them and know where their strategies lie. But that’s not enough. If you want to really stand out from the crowd, you must also make a case about how your passion and ambition align with that of the firm you are applying to.”


Will Bryson will be speaking at ‘ChatB&B: The Power of AI in Law – with Bird & Bird’, a virtual student event taking place THIS AFTERNOON (Tuesday 17 October). This event is now fully booked. Check out our upcoming fairs and student events.

About Legal Cheek Careers posts.

Navigating bias in generative AI https://www.legalcheek.com/lc-journal-posts/navigating-bias-in-generative-ai/ https://www.legalcheek.com/lc-journal-posts/navigating-bias-in-generative-ai/#comments Mon, 11 Sep 2023 08:22:18 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=192724 Nottingham PPE student Charlie Downey looks at the challenges around artificial intelligence

The post Navigating bias in generative AI appeared first on Legal Cheek.


While the world lauds the latest developments in artificial intelligence (AI) and students celebrate never having to write an essay again without the aid of ChatGPT, beneath the surface, real concerns are developing around the use of generative AI. One of the biggest is the potential for bias. This specific concern was outlined by Nayeem Syed, senior legal director of technology at London Stock Exchange Group (LSEG), who succinctly warned, “unless consciously addressed, AI will mirror unconscious bias”.

In terms of formal legislation, AI regulation differs greatly around the world. While the UK has adopted a ‘pro-innovation approach’, there still remain concerns around bias and misinformation.

Elsewhere, the recently approved European Union Artificial Intelligence Act (EU AI Act) is set to be the first comprehensive regulation of artificial intelligence. It is expected to set the standard for legislation around the world, similar to what occurred with the EU’s General Data Protection Regulation (GDPR). The AI Act incorporates principles that will help reduce bias, such as training data governance, human oversight and transparency.

In order to really understand the potential for bias in AI, we need to consider the origin of this bias. After all, how can an AI language model exhibit the same bias as humans? The answer is simple. Generative AI language models, such as OpenAI’s prominent ChatGPT chatbot, are only as bias-free as the data they are trained on.

Why should we care?

Broadly speaking, the process for training AI models is straightforward. AI models learn from diverse text data collected from different sources. The text is split into smaller parts, and the model predicts what comes next based on what came before, learning from its own mistakes. While efforts are made to minimise bias, if the historical data that AI is learning from contains biases, say, systemic inequalities present in the legal system, then AI can inadvertently learn and reproduce these biases in its responses.
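The mechanism can be shown with a toy sketch. This is invented data and a deliberately crude word-counting model, not any real recruitment or legal system: a model that merely counts which words co-occur with historically ‘successful’ examples will penalise words correlated with past rejections, even though no protected characteristic was ever an explicit input.

```python
from collections import Counter

def train_word_scores(examples):
    """Count word frequencies in historically hired vs rejected CVs."""
    hired, rejected = Counter(), Counter()
    for words, outcome in examples:
        (hired if outcome == "hired" else rejected).update(words)
    return hired, rejected

def score(cv_words, hired, rejected):
    """Positive score means the CV resembles historically hired CVs."""
    return sum(hired[w] - rejected[w] for w in cv_words)

# Invented training data mirroring a male-dominated hiring history.
history = [
    (["rugby", "captain", "python"], "hired"),
    (["chess", "club", "python"], "hired"),
    (["womens", "chess", "python"], "rejected"),
    (["womens", "rugby", "java"], "rejected"),
]
hired, rejected = train_word_scores(history)

# Two otherwise identical CVs: one contains a word the model has
# learned, purely from the skewed history, to associate with rejection.
print(score(["python", "chess"], hired, rejected))            # scores higher
print(score(["python", "chess", "womens"], hired, rejected))  # scores lower
```

Nothing in the code mentions gender; the disadvantage emerges entirely from correlations in the historical data, which is exactly why such bias is hard to detect after the fact.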

In the legal profession, the ramifications of these biases are particularly significant. There are numerous general biases AI may display related to ethnicity, gender and stereotyping, learned from historical texts and data sources. But in a legal context, imagine the potential damage of an AI system that generated its responses in a manner which unfairly favours certain demographics, thereby reinforcing existing inequalities.

One response to this argument is that, largely, no one is advocating for the use of AI to build entire arguments and generate precedent, at least not with generative AI as it exists in its current form. In fact, this has been shown to be comically ineffective.

So how serious a threat does the potential for bias actually pose in more realistic, conservative uses of generative AI in the legal profession? Aside from general research and document review tasks, two of the most commonly proposed, and currently implemented, uses for AI in law firms are client response chatbots and predictive analytics.

In an article for Forbes, Raquel Gomes, Founder & CEO of Stafi – a virtual assistant services company – discusses the many benefits of implementing automated chatbots in the legal industry. These include freeing up lawyers’ time, reducing costs and providing 24/7 instant client service on straightforward concerns or queries.

Likewise, predictive analytics can help a solicitor in building a negotiation or trial strategy. In the case of client service chatbots, the dangers resulting from biases in the training data are broadly limited to inadvertently providing clients with inaccurate or biased information. As far as predictive analysis is concerned, however, the potential ramifications are much wider and more complex.


An example

Let’s consider a fictional intellectual property lawyer, representing a small start-up, who wants to use predictive analysis to help in her patent infringement dispute.

Eager for an edge, she turns to the latest AI revelation, feeding it an abundance of past cases. However, unknown to her, the AI had an affinity for favouring tech giants over smaller innovators as its learning had been shaped by biased data that leaned heavily towards established corporations, skewing its perspective and producing distorted predictions.

As a result, the solicitor believed her case to be weaker than it actually was. Consequently, this misconception about her case’s strength led her to adopt a more cautious approach in negotiations and accept a worse settlement. She hesitated to present certain arguments, undermining her ability to leverage her case’s merits effectively. The AI’s biased predictions thus unwittingly hindered her ability to fully advocate for her client.

Obviously, this is a vastly oversimplified portrayal of the potential dangers of AI bias in predictive analysis. However, it can be seen that even a more subtle bias could have severe consequences, especially in the context of criminal trials where the learning data could be skewed by historical demographic bias in the justice system.

The path forward

It’s clear that AI is here to stay. So how do we mitigate these bias problems and improve its use? The first, and most obvious, answer is to improve the training data. This can help reduce one of the most common pitfalls of AI: overgeneralisation.

If an AI system is exposed to a skewed subset of legal cases during training, it might generalise conclusions that are not universally applicable, as was the case in the patent infringement example above. Two of the most commonly proposed strategies to reduce the impact of bias in AI responses are: increasing human oversight and improving the diversity of training data.

Increasing human oversight would allow lawyers to identify and rectify the bias before it could have an impact. However, easily the most championed benefit of AI is that it saves time. If countering bias effectively necessitates substantial human oversight, it reduces this benefit significantly.

The second most straightforward solution to AI bias is to improve the training data to ensure a comprehensive and unbiased dataset. This would, in the case of our patent dispute example, prevent the AI from giving skewed responses that leaned towards established corporations. However, acquiring a comprehensive and unbiased dataset is easier said than done, primarily due to issues related to incomplete data availability and inconsistencies in data quality.
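One standard mitigation, not discussed in the article but widely used where collecting a fully balanced dataset is impractical, is reweighting: each training example is weighted inversely to its group’s frequency so that under-represented groups count equally during training. A minimal sketch, with invented group labels echoing the patent example:

```python
from collections import Counter

def balance_weights(labels):
    """Weight each example inversely to its group's frequency, so every
    group contributes equally to training despite a skewed dataset."""
    counts = Counter(labels)
    n_groups = len(counts)
    return [len(labels) / (n_groups * counts[label]) for label in labels]

# A dataset skewed 8:2 towards established corporations.
labels = ["big_corp"] * 8 + ["small_innovator"] * 2
weights = balance_weights(labels)

# Each group now carries equal total weight (5.0 each out of 10.0),
# so the model no longer learns that big corporations 'usually win'
# simply because they dominate the historical record.
print(weights)
```

Reweighting cannot fix data that is missing or wrong, which is why it complements rather than replaces the data-quality work described above.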

Overall, while a combination of both these strategies would go a long way in mitigating bias, it still remains one of the biggest challenges surrounding generative AI. It’s clear that incoming AI regulation will only increase and expand in an attempt to deal with a range of issues around the use of this rapidly rising technology. As the legal world increases its use of (and reliance on) generative AI, more questions and concerns will undoubtedly continue to appear over its risks and how to navigate them.

Charlie Downey is an aspiring solicitor. He is currently a third-year philosophy, politics and economics student at the University of Nottingham.

High-flying law students perform worse in exams when using AI https://www.legalcheek.com/2023/09/high-flying-law-students-perform-worse-in-exams-when-using-ai/ https://www.legalcheek.com/2023/09/high-flying-law-students-perform-worse-in-exams-when-using-ai/#comments Mon, 04 Sep 2023 10:20:52 +0000 https://www.legalcheek.com/?p=192721 Lower performers saw significant benefit

The post High-flying law students perform worse in exams when using AI appeared first on Legal Cheek.


The grades of high achieving law students suffer when they are given access to artificial intelligence tools, new research has found. But low performers saw a significant increase in performance.

Forty-eight law students at the University of Minnesota were given a final paper without AI, and then, after prompt training, were given a second paper and permitted to use GPT-4, OpenAI’s latest and most advanced AI model.

On average, AI-powered students scored 29% better than when in mere human form. For lower performing students this improvement was 45%, although for those at the top of their class scores dropped by a whopping 20%. Whilst the AI could boost multiple choice scores, its essay writing wasn’t quite up to scratch.

The 2023 Legal Cheek Law Schools Most List

Commenting on the results in their paper, professors Choi and Schwarcz said: “GPT-4’s impact depended heavily on the student’s starting skill level… This suggests that AI may have an equalizing effect on the legal profession, mitigating inequalities between elite and nonelite lawyers”.

Earlier this month, Legal Cheek reported on another study suggesting that AI will lead to more legal work being done by those without traditional qualifications, with the profession opening up to experts in the computing and coding fields.

Many firms have already sought technological solutions to expedite more administrative processes like document review, freeing up time, and allowing the human machines to work on more complex issues.

Research: AI will lead to more legal work being done by those without ‘traditional qualifications’ https://www.legalcheek.com/2023/08/research-ai-will-lead-to-more-legal-work-being-done-by-those-without-traditional-qualifications/ https://www.legalcheek.com/2023/08/research-ai-will-lead-to-more-legal-work-being-done-by-those-without-traditional-qualifications/#comments Wed, 23 Aug 2023 07:45:28 +0000 https://www.legalcheek.com/?p=192138 Concerns remain over regulation

The post Research: AI will lead to more legal work being done by those without ‘traditional qualifications’ appeared first on Legal Cheek.


New research has shown that three-quarters of UK lawyers believe artificial intelligence (AI) will lead to an increase in the amount of legal work carried out by individuals without “traditional legal qualifications”.

The research, undertaken by Thomson Reuters, collected insights from 1,200 legal professionals across the UK, US, Canada and South America.

The data shows that as AI gathers pace in the legal sector, an increasing number of firms are looking to widen their hiring criteria to recruit those with backgrounds in maths and computer science, who are already accustomed to working with AI.

In fact, technology is becoming so common in the legal sector that nine out of ten respondents said that they expect mandatory AI training to be introduced within the next five years.

The 2023 Legal Cheek Firms Most List

The overall vibe towards the roll-out of AI seems to be good so far, with more than half of lawyers surveyed (58%) feeling positive about the prospect of it being introduced to the workspace.

The Legal Cheek Journal recently explored the possibility of AI creating more access to justice and equality in the workplace, and 81% of UK respondents agree that artificial intelligence will improve gender, ethnicity and socio-economic diversity.

Kriti Sharma, chief product officer for legal technology at Thomson Reuters, said:

“AI will have a potentially transformative impact on the legal profession, leading to an evolution in traditional career paths, skills sets and points of entry, as well as driving diversity and access.”

The roll-out of new technology is expected to free up more time for lawyers to focus on more complex, nuanced work that adds value for clients. But the big question is: who will regulate it?

There remains concern over what happens when AI goes wrong, and who takes the blame. Almost half (49%) of UK lawyers who responded to the survey believed the legal profession should self-regulate the use of AI, whilst 40% think the government should take responsibility for it.

With the EU’s Artificial Intelligence Act not expected until 2025, and the UK’s AI Rulebook still undergoing review, the debate over AI regulation rages on.

Improving access to justice – is AI the answer? https://www.legalcheek.com/lc-journal-posts/improving-access-to-justice-is-ai-the-answer/ https://www.legalcheek.com/lc-journal-posts/improving-access-to-justice-is-ai-the-answer/#respond Mon, 21 Aug 2023 07:37:45 +0000 https://www.legalcheek.com/?post_type=lc-journal-posts&p=191303 Jake Fletcher-Stega, a recent University of Liverpool law grad explores the potential for technology to enhance legal services

The post Improving access to justice – is AI the answer? appeared first on Legal Cheek.


Utilising advancements like artificial intelligence (AI) and chatbots in the UK can greatly boost efficiency and accessibility in the legal system. Legal tech has the potential to substantially elevate the quality of legal services, prioritising client outcomes over traditional methods, which is crucial for advancing the legal field.

Inspired by the work of Richard Susskind (a leading legal tech advisor, author and academic), this article seeks to demonstrate AI’s potential to spearhead advancements in the legal field and provide solutions to the issue of court backlogs currently plaguing the UK system.

 The problem: the overloaded UK court system

Despite our faith in the right to access to justice as a cornerstone of the British legal framework, the reality is that this is far less certain than might appear. Briefly put, access to justice is the ability of individuals to assert and safeguard their legal rights and responsibilities. In 2012, the Legal Aid, Sentencing and Punishment of Offenders Act (LASPO) significantly reduced funding for the UK justice system, resulting in a current backlog of approximately 60,000 cases and leaving many unable to afford representation.

If we are to fix this ongoing crisis, a fresh, unique, and revolutionary solution is required. I suggest that adopting an innovative approach, such as the use of legal technology, could significantly improve access to justice.

 The solution: legal tech

To echo the view of leading academic Susskind, legal service delivery is outdated and overly resistant to technological advancements. He asserts that the utilisation of artificial intelligence, automation and big data has the potential to revolutionise the methods through which legal services are provided and executed. I must reiterate: our legal sector’s conservatism and technophobia do it no favours. Other professions have moved forward with technology, but lawyers haven’t.

Lawyers are behind the curve when compared to other sectors such as finance and medicine, which are now utilising technology such as Microsoft InnerEye. Law isn’t significantly different from medical and financial advice. Not different enough to deny the value of innovating our legal services.

The belief that the legal field cannot innovate in the same way as other industries due to its epistemological nature is a common misconception. Many argue that AI will never fully replicate human reasoning, analysis, and problem-solving abilities, leading to the assumption that it cannot pose a threat to human professionals whose job primarily involves ‘reasoning’. However, this perspective is flawed.

While AI may not operate identically to humans, its capability to perform similar tasks and achieve comparable outcomes cannot be underestimated. Instead of fixating on the differences in the way tasks are accomplished, we should shift our focus to the end result.

Embracing AI/Legal Tech and its potential to augment legal services can lead to more efficient, accessible, and effective outcomes for clients, without entirely replacing the valuable expertise and experience that human professionals bring to the table. It is by combining the strengths of AI with human expertise that we can truly revolutionise the legal sector and improve access to justice for all.


Outcome thinking

As lawyers, we must begin to approach the concept of reform in law through the notion of ‘outcome thinking’. In outcome thinking, the emphasis is on understanding what clients truly want to achieve and finding the most effective and efficient ways to deliver those outcomes. The key idea is that clients are primarily interested in the results, solutions, or experiences that a service or product can provide, rather than the specific individuals or processes involved in delivering it.

For example, instead of assuming that patients want doctors, outcome thinking suggests that patients want good health. Another example is the creation of this article. I used AI tools to help me adjust the language, structure and grammar of this text to make it a smoother read. This is because ultimately as the reader you are only interested in the result and not how I crafted this text.

Lawyers are getting side-tracked

Lawyers fail to grasp the focus of this discussion. To illustrate this, let me share a personal story. Just moments before my scholarship interview at one of the Inns of Court, I was presented with two statements and asked to argue for or against one of them within a five-minute timeframe. One of the statements posed was, ‘Is AI a threat to the profession of barristers?’ Instead of taking sides, I chose to argue that this question was fundamentally flawed.

My contention was that the more critical consideration should be whether new technology can enhance efficiency in the legal system, leading to more affordable and accessible access to justice. The primary focus of the law should be to provide effective legal services rather than solely securing an income for barristers, just as the priority in medicine is the well-being of patients, not the financial gains of doctors.

When a new medical procedure is introduced, the main concern revolves around its impact on patients, not how it affects the workload of doctors. Similarly, the legal profession should prioritise the interests of those seeking justice above all else.

One example — chatbots

One practical example of legal tech that Susskind suggests is the implementation of a ‘diagnostic system’. This system uses an interactive process to extract and analyse the specific details of a case and provide solutions. This form of frontline service technology is often provided through the medium of a chatbot. As a chatbot can work independently and doesn’t require an operator, it has the potential to streamline legal processes.

To test this, I developed a prototype application that demonstrated the potential of AI to tackle legal reasoning. Using the IBM Watson Assistant platform and academic theory from Susskind & Margaret Hagan, I created a chatbot that assisted a paralegal in categorising a client’s case. Although far from perfect, the project proved that AI can substantially improve the efficiency and quality of our outdated legal services.
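The author’s prototype used the IBM Watson Assistant platform; the general triage pattern behind such a ‘diagnostic system’ can be sketched with plain keyword matching. The practice areas, keywords and fallback below are invented for illustration and are not taken from the author’s application:

```python
# A toy, rule-based version of the 'diagnostic system' idea: triage a
# client's description into a practice area, or escalate to a human.
CATEGORIES = {
    "employment": ["dismissed", "redundancy", "employer"],
    "housing": ["landlord", "eviction", "rent"],
    "consumer": ["refund", "faulty", "retailer"],
}

def categorise(description):
    text = description.lower()
    # Score each practice area by how many of its keywords appear.
    scores = {
        area: sum(1 for keyword in keywords if keyword in text)
        for area, keywords in CATEGORIES.items()
    }
    best = max(scores, key=scores.get)
    # If nothing matches, escalate rather than guess.
    return best if scores[best] > 0 else "refer to a human paralegal"

print(categorise("My landlord is threatening eviction over unpaid rent"))
```

A production system would use trained intent classification rather than keyword lists, but the division of labour is the same: the machine handles routine categorisation, and anything it cannot confidently classify goes to a person.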

Concluding thoughts

This article has attempted to demonstrate how embracing technological innovation can revolutionise the legal profession. By focusing on delivering efficient and client-centric outcomes, the legal sector can improve access to justice and create a more effective system. While challenges exist, proactive adoption of innovative solutions will shape a promising future for law, ensuring its continued role in upholding justice for all.

Jake Fletcher-Stega is an aspiring barrister. He recently graduated from the University of Liverpool and his research interests lie in legal tech and AI.

Solicitor aims to become first ‘AI-powered MP’ https://www.legalcheek.com/2023/07/solicitor-aims-to-become-first-ai-powered-mp/ https://www.legalcheek.com/2023/07/solicitor-aims-to-become-first-ai-powered-mp/#comments Wed, 19 Jul 2023 08:17:40 +0000 https://www.legalcheek.com/?p=189003 Andrew Gray hopes tech-focused approach will lead to election success

The post Solicitor aims to become first ‘AI-powered MP’ appeared first on Legal Cheek.


A Yorkshire solicitor is attempting to become the world’s first-ever ‘AI-powered’ member of parliament.

Andrew Gray is an independent candidate in the upcoming Selby and Ainsty by-election and has modelled his policies on the crowdsourced views of his potential constituents using an AI system called Polis.

The creators of Polis describe it as a “real-time system for gathering, analysing and understanding what large groups of people think in their own words, enabled by advanced statistics and machine learning”.
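Systems of this kind typically work by clustering participants according to the similarity of their agree/disagree votes on a shared set of statements. The toy example below illustrates the general idea with a tiny k-means over a vote matrix; it is not Polis's actual algorithm or code.

```python
# Toy illustration of opinion clustering: group participants whose
# agree(+1)/disagree(-1)/pass(0) votes on statements look similar.
# This is NOT Polis's actual algorithm or code, just the general idea.

def distance(a, b):
    """Squared Euclidean distance between two vote vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(votes, k=2, iterations=10):
    """Tiny k-means: returns a cluster index for each participant."""
    centroids = [list(v) for v in votes[:k]]  # naive initialisation
    assignment = [0] * len(votes)
    for _ in range(iterations):
        assignment = [min(range(k), key=lambda c: distance(v, centroids[c]))
                      for v in votes]
        for c in range(k):
            members = [v for v, a in zip(votes, assignment) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assignment

# Four participants voting on four statements, in two obvious opinion blocs:
votes = [
    [1, 1, -1, -1],
    [1, 1, -1, 0],
    [-1, -1, 1, 1],
    [-1, 0, 1, 1],
]
clusters = kmeans(votes)
print(clusters)  # participants 0 and 1 share one cluster; 2 and 3 the other
```

At scale, the statements that best characterise each opinion group can then be surfaced, which is what makes the output useful for manifesto-building.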

Gray qualified as a solicitor in 2007 and later founded Yorkshire law firm Truth Legal. He’s also a former president of the Harrogate and District Law Society.

His campaign website declares: “My only policy is to take my policies from the people!” It contains a manifesto built around the views expressed in 7,500 votes cast by Selby and Ainsty constituents.


Each policy listed details the percentage of votes it gained to enter the manifesto, with topics ranging from taxation and re-nationalising the Bank of England to the UK’s relationship with the EU.

“This is what representatives are meant to do,” he says of the techy approach to politics. “I intend to take democracy back to its roots, starting here, in North Yorkshire.”

If elected, Gray plans to continue using online polls to generate data that would influence how he votes in parliament, as well as which causes he votes on. “Together, we can shake the very foundations of Westminster with 100,000 people storming through the lobbies of parliament every time I cast my vote on your behalf,” he says in his manifesto.

The Selby and Ainsty by-election will take place on Thursday. It was triggered after Conservative MP Nigel Adams resigned in June.


‘Future lawyers need to be versatile and willing to innovate’ https://www.legalcheek.com/lc-careers-posts/future-lawyers-need-to-be-versatile-and-willing-to-innovate/ Mon, 17 Jul 2023 13:30:47 +0000 https://www.legalcheek.com/?post_type=lc-careers-posts&p=188941 As technology continues to transform the legal industry, we chat to Cemile Cakir, head of online postgraduate academic courses at ULaw, for her take on how students should engage with it

The post ‘Future lawyers need to be versatile and willing to innovate’ appeared first on Legal Cheek.

As technology continues to transform the legal industry, we chat to Cemile Cakir, head of online postgraduate academic courses at ULaw, for her take on how students should engage with it

As technology continues to transform the legal industry, questions abound as to how students should engage with it and develop the commercial awareness that is central to training contract applications.

We caught up with Cemile Cakir, head of online postgraduate academic courses at The University of Law (ULaw), for her take on all-things tech, including the key trends among law firms and the skills future lawyers need to succeed.

Can you tell us a bit about your background and your current role at ULaw?

I am a qualified solicitor, and I joined the University (then the College of Law) at Moorgate approximately 15 years ago. During this time, as a senior tutor and teaching fellow, I taught across many programmes, completed an MA (Dist) in Education Technology and co-founded the ULaw Tech Research Academy (ULTRA) in 2018. As part of ULTRA, I became the integration lead for education technology and technical lead, roles in which ULaw embedded legal technology and digital skills within our academic and vocational courses, and I later became one of the module leads for the MSc in Legal Technology launched in 2019. I am now head of online postgraduate academic courses at the online campus, which opened in 2020, and this academic year we launched the ‘Legal Tech Hub — Online Campus’.

Cemile Cakir, head of online postgraduate academic courses at ULaw

What are the key trends you see in law firms’ use of legal technology and how do you see them further incorporating these in their practice going forward?

Currently, the main drivers for the incorporation of legal tech are the delivery of quality legal services and business efficiency. Some firms embrace legal tech and its possibilities whilst others haven’t quite integrated it into the business. Legal tech and innovation require significant investment not only in the technology but in its incorporation into the business and the training of its people. Integration of legal tech is currently reliant on a ‘top-down’ business model, which depends on senior leaders’ adoption of and commitment to it. Eventually, as legal tech becomes more sophisticated and affordable, professionals in legal services will recognise its benefits and begin to employ it in their day-to-day roles as a ‘bottom-up’ force.

Generative AI (artificial intelligence) is one such example of an awakening to tech’s potential. As AI increasingly becomes the norm, legal tech will simply be ‘tech’, a sophisticated tool used in the delivery of legal services. Lawyers’ jobs will be different, and new roles, such as the legal technologist, have emerged and will continue to do so. I believe further momentum will come with the growth of future lawyers.

Building on that, what skills do future lawyers need to succeed in the current legal climate?

Future lawyers will need to be versatile and have a willingness to innovate and think laterally. This requires a shift towards a culture that is willing to embrace change, alternative approaches to problem solving and the exploration of solutions. Legal knowledge and legal reasoning will still be important, but design thinking, process mapping and a creative growth mindset will be key. An understanding of technology, to better contribute and work in multi-disciplinary teams (STEAM: Science, Technology, Engineering, Arts and Mathematics) that include programme developers and data experts, will be the norm. Whilst lawyers do not need to code, they will need to appreciate the different types and workings of legal tech. They need to break down processes, algorithms and legal reasoning in a way that can translate into technology. They need the knowledge and confidence to analyse, evaluate, challenge and make informed decisions around the generated outputs of AI, to ensure its appropriate and ethical use. Future lawyers need to be in command of ethical and accountable legal tech that can be trusted.
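Breaking legal reasoning down so it "can translate into technology" is often illustrated by encoding a rule as an explicit, testable function. The rule below is invented purely for illustration; its conditions and thresholds are hypothetical, not a statement of actual law.

```python
# Hypothetical eligibility rule expressed as code, to show how a legal
# test with defined conditions can be made explicit and testable.
# The thresholds and conditions are invented, not actual law.

def eligible_for_fee_remission(monthly_income: int, savings: int,
                               receives_qualifying_benefit: bool) -> bool:
    """A claimant qualifies if on a qualifying benefit, or if both
    income and savings fall below the (invented) thresholds."""
    if receives_qualifying_benefit:
        return True
    return monthly_income < 1_500 and savings < 3_000

print(eligible_for_fee_remission(1_200, 2_000, False))  # True
print(eligible_for_fee_remission(2_000, 2_000, False))  # False
```

The point is less the code than the discipline: every condition, threshold and exception in the rule must be stated precisely before it can be automated, which is exactly the process-mapping skill described above.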

Find out more about studying the MSc Legal Technology at The University of Law

Let’s talk a bit about legal tech within ULaw’s online campus — how are you making sure that students have a solid grounding in tech and innovation?

The online campus launched the ‘Legal Tech Hub — Online Campus’ this academic year, in September 2022, with a view to expanding every student’s opportunities in tech and innovation across the different programmes of study. We have had over 543 students register for our events in the Legal Tech Speaker Series, the Legal Tech Sandbox and Legal Tech Employability.

We have collaborated with various stakeholders including Bryter, Clifford Chance, Macfarlanes, Addleshaw Goddard, Kennedys, Freshfields, Eversheds, Santander and TLT to bring the latest developments and insights in legal tech and practice to our future lawyers. We have had excellent student feedback on the coverage: “Interesting topics that will be super useful in the future” that “…showed the difference that legal technology has made to everyday life as a lawyer…” The hands-on opportunities in the Sandbox events were “very interactive and engaging…” and have inspired growth and enthusiasm for legal tech. “I felt overwhelmed at the thought of the practical but loved every minute”, said a participant. Students are realising and taking up career opportunities: “It has really sparked my interest in legal tech design”, said another. The ‘Legal Tech Hub — Online Campus’ has been a great success.

ULaw offers a Master’s in Legal Technology — why should students consider such courses to prepare for a legal career, and what opportunities flow from such a qualification?

The Master’s in Law is an award that offers a specialism in Legal Technology. Future lawyers who wish to lead the industry in the innovation, adoption and integration of legal technology can build their expertise with a more in-depth and focused programme of study. Students learn about the functionality of AI and blockchain and use this to diagnose problems and solutions in legal practice through ideating and design thinking. Through the different modules, the students can enrich their knowledge and understanding of data and cyber laws, the wider issues relating to the Internet of Things, corporate governance and ethics. They are able to select their elective modules to best direct their learning to cover subjects of interest and their own career path.

To finish off, if you could create one piece of legal technology to make your life or the lives of lawyers more efficient, what would it be, and why?

I would create legal technology that would serve not only lawyers but everyone. Since the reduction of legal aid in this country, the ‘legal health’ of the public is poor. People do not know their legal rights and have limited access to legal services, and so stumble through legal problems. The Pro Bono service at ULaw offers a number of legal clinics across a breadth of practice areas and has noticed a significant increase in the number of vulnerable clients needing access to legal services. Currently, pro bono services and charities work tirelessly to fill this gap, but the support of AI tools for basic legal guidance and education would be invaluable. I would like to see AI support the public, perhaps via a chatbot offering reliable and accurate guidance to individuals and support services, helping to mitigate the pain of basic but fundamental legal matters. Issues such as homelessness, debt, asylum and slavery would be examples of areas to prioritise.

Join us this afternoon (Monday 17 July) for a legal tech special edition of our ‘Secrets to Success’ event series in partnership with The University of Law. The event, which will be held virtually, features a panel of lawyers and legal tech experts from Allen & Overy, Macfarlanes and Osborne Clarke, as well as an expert in legal education from ULaw. Apply for one of the final few places to attend the event, which is free, now.


About Legal Cheek Careers posts.


Generative AI won’t disrupt law until late 2020s and beyond, says Susskind https://www.legalcheek.com/2023/07/generative-ai-wont-disrupt-law-until-late-2020s-and-beyond-says-susskind/ https://www.legalcheek.com/2023/07/generative-ai-wont-disrupt-law-until-late-2020s-and-beyond-says-susskind/#comments Thu, 06 Jul 2023 08:15:55 +0000 https://www.legalcheek.com/?p=188646 Relax aspiring lawyers, you’ve got at least five years…

The post Generative AI won’t disrupt law until late 2020s and beyond, says Susskind appeared first on Legal Cheek.

Relax aspiring lawyers, you’ve got at least five years…

New ChatGPT-level artificial intelligence will profoundly change legal practice in ways it’s still hard to imagine, says top legal futurologist Professor Richard Susskind, but in the short term it could prove a damp squib.

In a series of notes released yesterday evening, Susskind states:

“I believe that most of the short-term claims being made about [generative AI’s] impact on lawyers and the courts hugely overstate its likely impact.”

But the flipside to that is rapid change in the future. The law and tech expert, who is an honorary King’s Counsel, adds:

“On the other hand, I think that most of the long-term claims hugely understate its impact. AI will not transform legal and court service within the next two years but it will do so, in my view, in the late 2020s and beyond. There is much to be done in the meantime but the change will be incremental rather than in one big bang.”

Susskind is known for advising the senior judiciary and elite law firms about the impact of technology on the law. He’s something of a pioneer in this field, having done a PhD about AI and the law at the University of Oxford in the 1980s and co-developed the first commercial AI system for lawyers. Since then he has written many books, including the recent Tomorrow’s Lawyers: An Introduction To Your Future.


A theme of Susskind’s work is how technological evolution should be thought of not in terms of “swapping machines and lawyers” but rather “in using AI to deliver client outcomes in entirely new ways”.

Another focus is looking at things from the perspective of the client rather than the lawyer. Susskind explains:

“It is interesting that most of the commentary on the impact of AI on the law has focused on what this means for lawyers and judges. In medicine, when there is a new drug or procedure, the discussion does not focus on what this means for doctors. Lawyers and the media would do well to ask more often what generative AI means for access to justice and for the client community generally.”

Read Professor Richard Susskind’s ‘Six thoughts on AI’ in full below

1. Although ChatGPT is the most remarkable development I have seen in AI in over 40 years, I believe that most of the short-term claims being made about its impact on lawyers and the courts hugely overstate its likely impact. On the other hand, I think that most of the long-term claims hugely understate its impact. AI will not transform legal and court service within the next two years but it will do so, in my view, in the late 2020s and beyond. There is much to be done in the meantime but the change will be incremental rather than in one big bang. There will, I expect, be no single Uber or Amazon or eBay in law. Instead, the transformation will result from a combination of innovations across our system.

2. ChatGPT and generative AI are significant not for what they are today (mightily impressive but sometimes defective) but for what later generations of these systems are likely to become. We are still at the foothills. But the pace of change is accelerating and we can reasonably expect increasingly more capable and accurate systems. In the long run, AI systems will be unfathomably capable and outperform humans in many if not most activities. Whether this is desirable is another issue and is the focus of the current debate on the ethics and regulation of AI.

3. ChatGPT and generative AI are the latest chapter in law of an ongoing story that stretches back as far as 1960. The latest systems do not replace older AI systems (such as expert systems and earlier predictive systems). Nor are they the endgame. The field of AI in law is and will continue to be made up of a cumulative series of techniques, the most recent of which is generative AI. Expect great further advances in the coming years. But expect them to come more quickly than in the past. One of the most interesting breakthroughs for the law will be systems that systematically ask their users questions – to help pin down and actually categorise and classify the problems or issues on which they want guidance.

4. According to Ray Kurzweil (in my view, the most prescient of futurists), the performance of neural networks (the technology that underlies most current AI systems) is doubling every 3.5 months, which will mean a 300,000 fold increase in six years. The enabling technologies are clearly advancing at an accelerating and mind-boggling pace and are attracting huge investment. Which is why I say we should expect great further advances. There is no apparent finishing line.

5. It is interesting that most of the commentary on the impact of AI on the law has focused on what this means for lawyers and judges. In medicine, when there is a new drug or procedure, the discussion does not focus on what this means for doctors. Lawyers and the media would do well to ask more often what generative AI means for access to justice and for the client community generally.

6. In the long run, the greatest impact of AI on the law will not be in simply automating or replacing tasks currently undertaken by human lawyers. Using a health analogy again, to focus only on task substitution, as economists would call it, is to think that the future of surgery lies exclusively in robotic surgery – automating and replacing human surgical work. But the greater long-term impact in that field is likely to lie in non-invasive therapies and preventative medicine. So too in law, the most exciting possibilities lie not in swapping machines and lawyers but in using AI to deliver client outcomes in entirely new ways – for example, through online dispute resolution rather than physical courts and, more fundamentally, through dispute avoidance rather than dispute resolution.
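The compounding claim in the fourth of the six thoughts can be checked with a couple of lines of arithmetic (the check below is mine, not part of Susskind's notes): a 3.5-month doubling period compounds to roughly 1.6 million-fold over six years, while the oft-cited 300,000-fold figure corresponds to a doubling period closer to four months.

```python
# Checking the compounding arithmetic in the quoted figures (my check,
# not part of Susskind's notes).
from math import log2

months = 6 * 12                      # six years
doubling_period = 3.5                # months per doubling, as quoted
fold_increase = 2 ** (months / doubling_period)
print(f"{fold_increase:,.0f}")       # roughly 1.6 million-fold

# A 300,000-fold increase over six years implies a doubling period
# of roughly four months instead:
implied_period = months / log2(300_000)
print(f"{implied_period:.1f}")       # roughly 4.0
```

Either way, the direction of the claim stands: growth at anything like these rates compounds to enormous capability gains within a few years.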

