Hello and welcome back to the Pinsent Masons podcast, where we keep you up to date with the most important developments in global business law every fortnight. I'm Matthew Magee and I'm a journalist here at Pinsent Masons. This week we have an analysis of the UK's first post-Brexit data protection law reform, and we hear about the dangers of the use of AI in litigation without proper checks. But first, here's some business law news from around the world. UK ten-year infrastructure strategy is a welcome attempt to address challenges. Expanded Hong Kong worker rights require a shift in employer approach, and Luxembourg's carried interest and special tax reforms are good news for business.
A new UK government strategy setting out plans for investment in infrastructure over the next 10 years is a meaningful attempt to address the challenges of infrastructure delivery, an expert has said. Robbie Owen, infrastructure policy and planning expert at Pinsent Masons, was commenting following the publication of the government's 10-year infrastructure strategy, which acknowledged that the UK has fallen behind in infrastructure quality and productivity. Owen said the challenges posed to UK infrastructure in recent years, from short-termism in funding and political decision-making to burdensome planning and consenting processes and the skills shortage, have been well documented. The publication of a long-term infrastructure strategy represents the government's welcome attempt to move the dial and is a product of considerable thinking, planning and work.
Hong Kong's Legislative Council's amendments to employment laws require a structural shift in how businesses must monitor staff working hours, an expert has said. The amendment, which received overwhelming support from the Legislative Council, lowers the threshold for part-time and contract employees to qualify for benefits such as statutory holidays and paid sick leave. Under the new rule, employees who work for at least four consecutive weeks for at least 18 hours a week will be entitled to the same benefits as full-time staff. Mohammed Talib said these changes mark a significant expansion in entitlements for part-time employees and impose new compliance obligations on businesses, especially those with flexible or casual staffing models. This requires a shift in how staff working hours are monitored so that part-time and contract staff are not inadvertently treated as full-time employees.
Luxembourg plans to introduce a modern carried interest regime in a bid to attract alternative investment fund managers, alongside a special tax regime for start-up employee stock options, in what is good news for the country, experts have said. Carried interest, a share of profits earned by fund managers, has long been a contentious issue in European tax policy. Luxembourg's announcement aligns it with other leading financial centres that offer favourable treatment for this type of income. The new regime is designed to attract fund managers by offering a clear and competitive tax structure, aiming to reinforce Luxembourg's appeal as a base for private equity and venture capital operations.
It's nearly 10 years since UK voters decided the country would leave the EU amid, among other things, promises of red tape slashed, businesses set free and regulation gutted. Even the most ardent Brexiteer would admit that the reality of regulatory reform has stopped some way short of some of the campaign trail claims and predictions. But one piece of legislation has made it through successive governments in various forms, a reform of data protection laws aimed at making digital business simpler and easing the flow of valuable data from one company to another. It finally became law last week after a governmental gestation lasting some years and three prime ministers. Belfast-based technology law expert Anna Flanagan told me about the genesis of the Data Use and Access Act.
Anna Flanagan: The Data Use and Access Act is a new data protection framework in the United Kingdom. It became law on the 19th of June after a slightly protracted period working its way through Parliament. The way it operates is to effectively amend the UK GDPR and the Data Protection Act 2018 by adding additional sections in some quite specific areas. Post-Brexit, the UK government wished to liberalise data protection laws, in theory to make British businesses more efficient and to get rid of some perceived red tape around how personal information is held and processed. That started out under Liz Truss's government as the Data Protection and Digital Information Bill, the DPDI Bill, but it didn't make it through Parliament before Rishi Sunak's government took over. The Sunak government then proposed some additional changes under the DPDI Bill number 2, but that also didn't quite make its way through the parliamentary process in time, and therefore the Labour government took it back up. But the changes in the new Act are not as extensive as initially proposed, so the perceived red tape cutting hasn't necessarily gone to plan in terms of what was originally anticipated at the very start.

Matthew Magee: One element of data protection laws that might have seemed a bit niche and technocratic a few years ago, but which has been given new prominence by the advances of AI, is automated decision-making. Anna says companies will be able to make greater use of it when the law is implemented.

Anna: Automated processing is effectively where decisions are made without meaningful human involvement. So that's when a computer makes a decision. At the moment there is a prohibition under Article 22 of the EU GDPR on solely automated decisions that have legal or similarly significant effects on individuals.
So that means that for a decision such as whether or not to offer someone a mortgage, you can't necessarily make the decision in an automated way without some human involvement. That prohibition has been relaxed in so far as it relates to ordinary personal information, so not special category information about health, ethnicity, or whether or not you're a member of a trade union. Any automated decision-making based on special category data remains prohibited, but under the new legislation it is now acceptable to make an automated decision where special category data is not involved. This will obviously give businesses greater flexibility in terms of using things like AI in their decision-making processes. Throughout the parliamentary process, this caused some concern among some members of parliament about enabling the use of AI for this kind of thing, the worry being that if the AI gets it wrong, it's difficult to push back against that.
Matthew: People are rightly sensitive about one company sharing their personal data with another company, something that data protection laws control. But governments know that citizens could have access to more suitable or more affordable commercial services if their data could be accessed by a greater number of companies. So it's trialling some limited data liberalisation with this new law.
Anna: So, this is the idea that data held about a customer by one business should be made available to other businesses to use, so that they can offer new products and services to those customers to promote competition and innovation in markets. And it's not a particularly new concept. Obviously, we've all seen it, and probably many of us have used it, in things like open banking. But what the Data Use and Access Act does is extend the concept of open banking to one more of open finance, where things like pensions, insurance and investments come into play. Additionally, there are regulation-making powers within the DUA Act that will allow the government scope to extend it to other sectors, things like energy and telecoms. It effectively makes it easier for customers to move around because their data can be ported in a very straightforward way for the customer.
Matthew: There are other elements of the act that will affect business: a new kind of flexible consent for scientific research. That means that companies in research-heavy industries such as life sciences won't have to keep going back to people to ask for consent if the focus of a research project changes. And there are some changes to levels of fines for failure to properly get consent for website cookies. But the degree of change is fairly muted, and that's not an accident. It's all to do with the European Union's power to declare the UK's data protection regime adequate or not, says Anna.
Anna: This relates to international transfers of personal information. Under both the EU and UK GDPR, transfers of personal data to third countries are prohibited unless certain measures or conditions are in place. One of those is where a third country has received an adequacy decision from the EU. This is where the EU says the way that third country looks after personal data is adequate in terms of security, and that data can therefore flow freely without any other measures in place. It was a very topical point in the Brexit negotiation process because so many businesses need personal information to be able to transfer between the European Union and the UK very freely for their businesses to function. Getting the adequacy decision, and now maintaining it, is seen as really quite essential, and that is probably one key factor in how far the data protection changes have actually gone in this act, because the UK government will have always had one eye on ensuring that adequacy status is maintained. The reforms are due to be scrutinised very shortly by EU policymakers because the adequacy decision that was given post-Brexit is up for review later this year. The European Commission will review whether or not there has been any significant departure from the EU rules and, provided that there's not, will continue to give the UK adequacy.
Matthew: And will the UK get that adequacy ruling?
Anna: It seems pretty likely. The reforms are not so extensive that there is much to cause concern that adequacy status will not be granted.
We've all made mistakes at work, and it's mortifying. The more public, the more embarrassing. But what about when an AI system makes a mistake, and when a human using an AI system allows that mistake to make it into legal arguments in court? Well, it's a bit more than embarrassing. It can cast doubt on the administration of justice itself, according to a judge in a recent ruling which castigated users of AI in two cases where errors and outright fabrications were found in arguments presented to the court. It will have raised hackles and sensitivities about how, when and even if AI has a role to play in something like litigation where the stakes are extremely high. London-based litigation expert Lucia Doran told me what happened in these two cases.
Lucia Doran: The decision concerned two cases which were heard together, given that both arose out of the actual or suspected use by lawyers of generative AI in the preparation of legal arguments or documents which were then not checked before being put before the court. In the first case, the statement of grounds had been settled by a junior barrister, but it contained a misstatement of the law, and a number of citations had been included which did not exist. These errors were not corrected by the junior barrister or their instructing solicitors before being served on the other side. The barrister maintained that she did not use AI to carry out her research. In the second case, a client had conducted legal research using generative AI and had provided this to their solicitor, and this research was then included in witness statements of both the client and the solicitor without the solicitor checking it. It then transpired that numerous authorities either did not exist, did not contain the passages that had been quoted, or did not support the propositions they were cited in support of. The court held that in both cases contempt proceedings did not need to be initiated. However, Mr. Justice Johnson stressed that, in the case of the junior barrister in particular, the decision not to initiate contempt proceedings should not be seen as a precedent because it was fact-specific, and a theme throughout the whole judgement was that lawyers who do not comply with their professional obligations risk severe sanction.
Matthew: So what had actually gone wrong? What was the mistake that the AI users had made?
Lucia: So essentially what went wrong in these cases was that the lawyers used, or were suspected of using, generative AI, and these outputs weren't checked before being included in legal documentation. I think the lesson that we can draw from this is that it's really important that people check the outputs of generative AI before using them in their work. Generative AI can obviously be very useful as a starting point for research, but it should be seen as just that, a starting point. It's really important that the outputs are then checked, and that you continue, as you would have done in the past, to use practitioner books and law reports to verify the research that AI has generated.
Matthew: The issue is so fraught because the stakes in litigation are so high, and so can the sanctions be, as Lucia explained.
Lucia: So the big risk of using AI is firstly that its outputs are inaccurate, because it can hallucinate information or it may draw the wrong conclusions about the law. For lawyers in particular, using AI without checking its outputs puts you in breach of your professional obligations and, in respect of court documents, your duties to the court. As explained by Mr. Justice Johnson in the case, the risks this conduct carries range from admonishment and referral to a regulator through contempt proceedings to, in the most serious of cases, a referral to the police. So it's really important that people in the legal profession understand the duties that they're under and know how to use AI responsibly, and therefore check all of the outputs before including them in documents.
Matthew: Is there a remedy for rogue AI in litigation? There is, but it's hardly revolutionary. Better processes, scrutiny, oversight and care, according to the judge.
Lucia: Mr. Justice Johnson stated in the case that it's important that people within the legal profession who have leadership responsibilities, and also those who regulate the provision of legal services, take responsibility and put effective measures in place to ensure that those providing legal services understand and comply with their duties to the court if using AI. So, for example, in law firms or in-house teams, that means ensuring that there's a generative AI policy and training, so that people understand exactly how to use AI, know the risks associated with it, and know how to mitigate those risks. Organisations can also point their employees to pre-existing guidance on AI. For example, the SRA has published a risk outlook report on the use of AI in the legal market. So I think education is really key in preventing situations like this happening in the future. It's also important that supervisors of more junior members of the team ensure that their supervisees understand how to carry out legal research, so that whilst they might use AI as a starting point, they know they have to check and verify the outputs using traditional and established legal resources.
Thanks again for listening to the Pinsent Masons Podcast, we appreciate every minute you spend with us. We know there are lots of demands on your time. Please do share it with anyone that you think might find it useful. And remember, you can find out what's happening today, whenever today is, by looking at pinsentmasons.com and following the content from our team of journalists around the world. Or you can get a weekly digest tailored to exactly your needs by signing up at pinsentmasons.com/newsletter. In the meantime, and until next time, thanks and goodbye.
The Pinsent Masons Podcast was produced and presented by Matthew Magee for international professional services firm Pinsent Masons.