Episode Transcript
[00:00:10] Speaker A: Hello and welcome to Hacking Kaizen. I'm Graeme Newman. With me this week is Doctor Pirapat Shotsuwatana, distinguished lawyer, lecturer in law and economics at Chulalongkorn University and managing partner at VA Partners. As we explore the transformative impact of generative AI on the legal profession, Doctor Pirapat shares his insights on the uncertainties that accompany this rapid technological evolution.
Despite being immersed in the advancement of, and theories surrounding, AI, Pirapat candidly acknowledges the challenges of fully grasping its future implications. This conversation invites us to consider how the legal field must adapt, redesigning its instruments and approaches to align with the new realities brought forth by artificial intelligence.
Beyond the realm of technology, we also reflect on Pirapat's significant contributions during his tenure at Thailand's Ministry of Commerce. There, he played a pivotal role in negotiating crucial trade agreements with the European Union and the UAE, shaping the future of Thailand's international trade landscape. He shares his experience from the frontlines of these high-stakes negotiations, emphasising the importance of balancing principles with practical trade-offs. It's a world where every decision has far-reaching consequences, and the ability to navigate this complex terrain is crucial. Also highlighting the vital role of sustainable practices and green financing in the modern business world, Pirapat stresses that as Thailand pursues its ambitious goals for economic development and carbon neutrality by 2050, companies must align with global sustainability trends. Compliance with emerging regulations is not just about avoiding penalties; it's about securing a competitive edge in a market increasingly driven by environmental considerations. Join us for a conversation that navigates the intersection of technology, law and international diplomacy, beginning by asking Doctor Pirapat what advice he would give students and seasoned legal practitioners to engage more in the artificial intelligence conversation.
[00:02:32] Speaker B: It's something that you can't avoid. And I would claim that regardless of your major. I mean, a lot of people say, I'm doing constitutional law, I don't have to care about it. But of course not, because this kind of technology has the potential to seep into every single area of human interaction, and it could be used by any agent, from the individual up to the state, or even internationally. And when this kind of technology interacts with all these different levels of human interaction or institutions, well, law is there to serve as the way to regulate that kind of interaction. You can't say, I'm not going to be interested in it, it is still far from me. It's going to come, and it's going to influence your expertise anyway.
[00:03:25] Speaker C: We talked about emerging GenAI applications to legal practice three and a half years ago.
[00:03:31] Speaker B: Wow.
[00:03:32] Speaker C: And much of what you predicted is now in place?
[00:03:35] Speaker B: Yes.
Good.
If only it were something I could make money out of, I would have been rich already, but unfortunately not.
So, yeah, I think for the law firms especially, thanks to these large language models, for example, we know ChatGPT, we know these foundation models, the application seems to be quite clear. A lot of firms use them as tools by which lawyers boost their productivity. And that is mainly in the area of documents, because lawyers need to produce a lot of documents. But it turns out that the same kind of advancement, especially in terms of the model and the technology itself, also accommodates them on the front of predictive usage as well. So yeah, I think it's a very exciting period, in which we see a lot of adoption of these new technologies.
[00:04:37] Speaker C: And in terms of AI and legal research efficiency, AI tools are increasingly being used to enhance legal research and case analysis. So how do you see these tools changing the way lawyers conduct research?
[00:04:52] Speaker B: I think it sort of makes them more efficient. Right. Because in the old days, and when we talk about the old days, now we're talking about one or two years ago.
[00:05:02] Speaker C: Yes.
[00:05:04] Speaker B: If you are a lawyer and you need to do legal research, it will start with, okay, these are the facts, or these are the questions from your client. For example, you would like to invest here in Thailand; what do you need to care about? And then you would have this sort of fundamental knowledge of what areas of law would come into play, and then you would get into the documents, the details, and then you search the relevant cases, the relevant legislation. That kind of standard process. Right. And if you think about generative AI, especially the use case of the chatbot, it just cuts out the process in between. Right. Because now you can jump from roughly the questions, well, of course with a little bit of prompt engineering, you probably need to tell it a little bit more, and then it can probably go straight to the solution. And that is the boost to productivity. But again, this is quite interesting, because when you think about this kind of technology at the moment, the stage of advancement still follows an inverted U shape. This is pretty much the same question that I got this morning as well. I mean, I just taught these kinds of AI-related issues.
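[Editor's note: the "little bit of prompt engineering" described here can be as simple as separating the client's facts, the jurisdiction and the precise question before handing them to a chat model. A minimal, purely illustrative sketch; the function and field names are hypothetical and not from any real legal tool.]

```python
def build_research_prompt(facts: str, jurisdiction: str, question: str) -> str:
    """Assemble a structured legal-research prompt for a chat model.

    Giving the model the facts, the jurisdiction, and the precise
    question as separate labelled fields tends to work better than
    one long, unstructured sentence.
    """
    return (
        "You are assisting with preliminary legal research.\n"
        f"Jurisdiction: {jurisdiction}\n"
        f"Facts: {facts}\n"
        f"Question: {question}\n"
        "List the areas of law likely to apply and the kinds of "
        "legislation or cases to check. Do not cite anything you are "
        "not certain exists."
    )

# Example mirroring the investment scenario mentioned above.
prompt = build_research_prompt(
    facts="A foreign company wants to invest in a Thai manufacturing venture.",
    jurisdiction="Thailand",
    question="Which regulatory regimes govern the proposed investment?",
)
```

The template itself does nothing clever; the point is that the structure replaces the intermediate issue-spotting step the lawyer used to do by hand.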
[00:06:25] Speaker C: I hope that wasn't on another, different podcast?
[00:06:27] Speaker B: No, there was a lecture.
So this was like the first time. And the question was basically, do I still need to learn how to program, being a lawyer? Because someone told me, you are a lawyer, you might need to do programming. And my answer was, yeah, I mean, regardless of whether you are a lawyer or not, programming is still important. For example, this GenAI, when you talk about programming, it is really good if you are a moderate-to-good programmer, but it's probably not that useful if you are a top, very advanced programmer. And it's going to be useless as well if you don't know how to program at all. So it's kind of an inverted U shape: you're going to get the most out of it if you have a considerable level of understanding. A lawyer would have the same sort of relationship with this kind of technology.
[00:07:23] Speaker C: You mentioned the lecture this morning, and we're here in your office at Chulalongkorn's Faculty of Law. Are you embedding these questions, and potential training for future lawyers, here?
[00:07:36] Speaker B: Of course.
You asked using the exact terms that we use to promote our new program, which is the international program in law. And this is the second year of it. It's called the LL.B. EL program, an LL.B. in experiential learning. And I think the motto of the program is to create future-ready lawyers. So, yeah, that's pretty much it: we try to embed this kind of technology into the curriculum.
[00:08:05] Speaker C: So looking at cases and legal research, surely there are potential risks as well as benefits associated with reliance on AI. And perhaps the question is, to what extent are legal practitioners reliant on AI for these tasks, in terms of their own judgment?
[00:08:24] Speaker B: Right. I think what is good is that the legal field in general is quite conservative, which is sometimes bad, but in this context it's pretty good. You might have heard in the news, I think roughly a year ago, there was a lawyer who generated a whole document with ChatGPT, including the precedents. It turned out that those precedents were fake, the hallucination problem, which is pretty well known.
But in general, if you probe into the way lawyers have been using it, they still care a lot about this kind of hallucination.
And that is why you see the rise of this technology, what do they call it, RAG. Yeah, so the rise of RAG, retrieval-augmented generation. Basically, you use generative AI together with an internal database in order to make sure that it will not hallucinate or make up the data itself. And on the other front, I think firms are still quite strict about confidentiality: they have protocols under which you are largely prohibited from uploading certain kinds of information, especially your clients', to these sorts of services, which I think is quite a good starting point, that you have some kind of precautions.
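[Editor's note: the RAG pattern described here can be sketched in a few lines. This is a toy illustration only: the keyword-overlap retriever and the in-memory document store stand in for the vector search and document management systems real legal tools use, and no actual model is called.]

```python
# Toy retrieval-augmented generation (RAG): the model is only allowed to
# answer from passages retrieved out of an internal database, which is
# what keeps it from inventing precedents.

def retrieve(query: str, documents: dict, k: int = 1) -> list:
    """Rank internal documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc_id: len(q_words & set(documents[doc_id].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, documents: dict) -> str:
    """Pass only the retrieved passages to the model, with a grounding rule."""
    context = "\n".join(documents[d] for d in retrieve(query, documents))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical internal firm database.
internal_db = {
    "case-001": "Judgment on foreign investment licensing in Thailand.",
    "memo-007": "Internal memo on data privacy obligations under PDPA.",
}
grounded = build_grounded_prompt("foreign investment licensing rules", internal_db)
```

In a real deployment the generation step then runs against `grounded` rather than the bare question, so every claim can be traced back to a document the firm actually holds.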
[00:09:52] Speaker C: And does that include, to a degree, the question of bias and fairness in AI? The systems can sometimes perpetuate or even aggravate bias present in training data. So how can we ensure that the AI tools used are fair and unbiased?
[00:10:12] Speaker B: Well, that is a billion dollar question, right?
Honestly, I think that is a big, big concern. And, well, I might refer to probably the only such regulation out there now, which is the EU AI regulation, or the EU AI Act, which has actually come into effect already, on 1 August. So we are living with it, but it will come into full effect in roughly two years' time. They care a lot about this sort of bias and fairness. And there are certain kinds of.
On one side, there are certain kinds of applications in which this kind of bias and unfairness would cause substantial material damage. For example, if you use it in fundamental services, especially public services such as migration or education, or in services that determine access to finance, that kind of thing.
If it is biased, as we know from the news, because the model is trained, intentionally or not, on a certain group of people. Like, when it comes to health, it was trained only on white people.
Or when it is trained on crime, it is trained only on a certain race, or on people of a certain economic status or faith. And all of these intentional choices or mistakes that we make would cause this kind of bias. And the question is, what happens when it combines with the kinds of applications that affect fundamental rights, that affect people's livelihoods. I think the AI regulation puts great emphasis on this: you have to show that you have thought about it and have come up with measures to prevent this kind of bias or unfairness. That is on the regulatory and incentive side. But at the same time, we do have technological solutions to this as well. Right. The way that you train on the data, the process by which you train it, to ensure that these kinds of bias do not occur, or at least to minimize the risk of them occurring. So, yeah, it's out there. And I think the key question is what kind of instrument we're going to put in place to make sure that the people responsible for this kind of technology, especially when they introduce it, take into account all these kinds of misuse and adverse consequences.
[00:13:10] Speaker C: If we reflect on the development of professional practice in firms.
Do you see partners, senior partners, leveraging this technology themselves? Or do you think it's the responsibility of paralegals? Or is it actually somewhere in the middle: some kind of in-house, tech-driven, emerging area of legal practice whose role is to screen for and mitigate bias in order to then present that to the senior partner?
[00:13:41] Speaker B: Well, very good question. I think this is like any other compliance matter. It's pretty much the whole organization. And most of the time, we know that a lot of policy needs to be top-down. Right. And in the organization, this kind of concern has to be addressed company-wide, or corporate-wide. And of course, the executives, the partners in the law firm, or in other kinds of consulting, would need to set that kind of intent: we are going to take this seriously. And the adoption of this technology.
This is the direction. These are the areas in which you give the green light, or yellow, or red, that kind of colour-tagging of each application: okay, if you're going to use this for legal research, fine, go for it, but here are some caveats. So it is not going to be just the responsibility of someone on the ground, or in the field, who has to discover a problem and raise it to the top. That is usually the case in practice, because they are the ones who actually do the day-to-day jobs. But you have to put this kind of process and policy in place for the executives as well, to make sure that there is a system in place, and to make sure that this is something the executives, and the company as a whole, are really taking seriously.
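[Editor's note: the colour-tag idea, a green, yellow or red light per application set top-down, can be made concrete as a small firm-wide policy table. A hypothetical sketch; none of these ratings or caveats come from any real firm's policy.]

```python
# Hypothetical firm-wide AI-use policy: each task class gets a
# traffic-light rating decided by the executives, plus a caveat the
# lawyer on the ground sees before proceeding.
AI_USE_POLICY = {
    "legal_research":     ("green",  "Verify every cited case actually exists."),
    "document_drafting":  ("yellow", "Partner review required before anything leaves the firm."),
    "client_data_upload": ("red",    "Confidential client data must never go to external services."),
}

def check_ai_use(task: str) -> tuple:
    """Return (rating, caveat); unknown tasks default to red until reviewed."""
    return AI_USE_POLICY.get(task, ("red", "Unclassified task: escalate for review."))
```

The useful property is the default: anything the policy has not classified is treated as prohibited until someone at the top explicitly rates it, which mirrors the top-down intent described here.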
[00:15:10] Speaker C: Do you see this as a paradigm shift for the likes of Herbert Smith and Clifford Chance, the enormous practices? Do you think that they're looking at this? I mean, it's easier for practices of your size; you can go faster in terms of engaging these emerging technologies. So, as the legal profession is, as you said, fairly conservative at best, how can you actually transform if you're the size of one of the Magic Circle firms?
[00:15:43] Speaker B: Yeah, I think, well, there are always two sides of the story, right?
I think the legal industry itself is an industry of trust and credibility. And, well, it takes decades or even a century to build trust, this sort of confidence, but it probably takes just one incident to pretty much abolish everything. And that is why, by the nature of the industry itself, firms tend to be relatively more conservative, and that is understandable. But again, in terms of the dynamism of the competition, of course technology will lead you to a certain extent, but it is probably like other fields: if you talk about demand, talk about clients, they do care as well that you're going to provide rich information, that you're going to be thorough about the risks and the sort of exposure they're going to face. But again, when they come to the lawyers, when they come to the law firms, they would expect, at least in the near future, these professionals to advise them; the tools that you use in the back end, that's another story. Honestly, for the dynamics of the industry now, I don't see that real sense of urgency, that they need to compete against each other on the speed of productivity in a way that would push them to take real risks to use these technologies at any cost. So for the dynamism of the industry, not really. But of course, these big firms do see that there are going to be some new players coming in who will eat up a certain market share, and these are those legaltechs and regtechs that have been growing. So, yeah, it's kind of a push-and-pull thing, but at the same time, you have to be really vigilant going forward as well, because that's pretty much the nature of professional services.
[00:17:44] Speaker C: And what's been the response of the law societies? Are they behind this? Do they realize that this is coming? Are they offering advice and workshops and seminars to get everybody on the same page? What's the position of the Thai law society?
[00:18:01] Speaker B: I think they are quite active, but in two aspects. Right. We did, I mean, talk about the role of technology in affecting the legal industry itself. Well, there has been a lot of discussion, a lot of adoption. But as I said, there are quite a few forces that come into play and shape the way firms adopt it, and it turns out to be relatively slow. Right. But a lot of talk has been ongoing the other way around as well: do we need to regulate it? And you can see a lot of top law firms starting to advocate for these kinds of AI regulations. So that is something that has been ongoing as well.
[00:18:51] Speaker C: Yes. I wanted to move on and look at regulatory and compliance challenges. As AI technologies advance, regulation and compliance frameworks are perhaps struggling to keep pace. So what regulatory changes or developments do you anticipate in the near future?
[00:19:10] Speaker B: Well, I would say that you can't talk about this kind of regulatory regime without talking about the bigger part of it, which is pretty much the global dynamism, right? We have talked about globalization for many years.
It seems to be more and more prevalent as something unavoidable now, I suppose. And that's going to affect the way that you regulate this technology as well. Because, as we talked about earlier, technological regulation is always an attempt to strike a balance between innovation and regulation. And to achieve that purpose, you have to understand the context and other kinds of policies as well. For example, whenever I talk about this, I say there are so many different schools, and it depends on the readiness of the resources. For example, the US seems to take a market-based approach, in the sense that you tend not to regulate the technology.
Well, a lot of people will say, look, the Biden administration had this executive order, if I remember the date correctly, on 30 October last year, to regulate government AI in a similar manner to the way AI is regulated in the EU, but covering private AI as well. But it was clear in Donald Trump's campaign in this election that if he wins and assumes the position, this executive order is going to be revoked. So there is that kind of uncertainty. But it seems quite clear that in the US, with this approach, they are racing this kind of technology against the rest of the world, especially China. So they put more weight on that front: any kind of regulation means cost, and you're going to let the market drive it. If anything goes wrong, for example on fairness and bias, you do have tort and infringement claims to take to court anyway. If you choose that way, fine, but you have to understand as well that your litigation has to be effective, and people need a certain level of understanding so they can protect themselves. Right. But the EU takes a different path. They try to be a bit more cautious. Right. You identify certain kinds of applications that are relatively riskier, and you impose certain obligations, so that people have to be more careful before they introduce these technologies into the market. So, yeah, the trend in general is going to be quite diverse, depending on the priorities governments set and how they position themselves in this sort of AI global value chain as well. It's such a complex picture, with different layers. So I don't think it's going to be a unified trend, like the Brussels effect that we talk about. I don't think so. I think it depends on the industrial policy and the trade policy side as well, which will unavoidably influence the regulation of AI. Yes.
[00:22:35] Speaker C: Doctor Pirapat, that was really interesting. You were talking about the dismantling of globalization, which I think has been a trend for at least a year; at least, economists believe so. And we're now looking at this kind of concept of localization. But to what extent?
And that brings me on to your point about regulation and looking at the EU. Would you like to see something embedded within the ASEAN member states, or do you think that's just not practical in the short term? Do you think it would be based on sovereignty?
[00:23:10] Speaker B: Well, it depends on what aspect of AI we are talking about.
Of course, most of the time a regional effort seems to be, I wouldn't say always, but it's almost always better that you are aligned. Now, we are talking about a technology that has used up the whole of the data on the Internet, right? We're talking about a technology that just a few players can actually develop further. Actually, I can count the players on my fingers. And the amount of capital used to advance these kinds of technologies even exceeds the GDP of Thailand.
So now we are talking about something that is seemingly public infrastructure, but owned by private players. And again, when it comes to regulation, regulation is all about your bargaining or negotiation power as well. Because if you are just so small and you're going to regulate someone, the question people are going to ask is, what's in it? Why do I need to care? Right? People do care about the European market because there are 450 million people. That's a huge, huge market. And, I mean, you can't take each individual just as a number: each individual in the European market seems to have more purchasing power compared to individuals elsewhere in the world as well. So that is about regulatory power too. So, coming back to your question about ASEAN: we do have a few of these players. I mean, admittedly, Singapore is up there as one of the top economies in the world now, with huge digital bargaining power on so many fronts. And this is sort of twofold. Externally, if you can work together as a region, well, of course your voice is going to be heard much more easily. But at the same time, internally, we know again that this kind of collaboration would need a lot of data transfer. At least, if you're going to develop something, you're going to need this kind of data transfer. And that would require a lot of collaboration, which I think ASEAN has been taking quite seriously. So, yeah, of course, this kind of getting together of the ASEAN member states is not just good to have; it will become more and more mandatory in the near future.
[00:25:56] Speaker C: I want to move on and talk about GenAI for directors, and we'll skew this towards directors of listed companies. So let's start by unpacking the business judgment rule and AI decision making. How should directors approach the business judgment rule when integrating AI into their decision-making processes?
[00:26:19] Speaker B: Honestly? Okay. The bottom line is I don't think we are at the level of technology that you entrust AI to make a decision, especially at the top.
[00:26:30] Speaker C: Good.
[00:26:31] Speaker B: I think that's the bottom line.
[00:26:32] Speaker C: Yeah.
[00:26:33] Speaker B: Don't blindly use it. Don't ask it whether you should invest in this company or not.
[00:26:38] Speaker C: You'd be surprised.
[00:26:41] Speaker B: All right. Yeah, there might be some people doing that, and we're going to see. That is pretty much a natural experiment, right? Yeah. The natural selection of business.
But I think what is going to be really helpful is this. You can integrate it into your decision making. I know that probably we'll come back to this in operations, but I think they are quite closely related, because there is this mantra, the framework of PPT: people, process, technology.
That's always been true, and it's still true even when the technology is AI. I think I would like to analogize it to the first introduction of electricity, replacing the steam engine. The early factories back then were still designed, the blueprints still drawn, around the steam engine, and you pretty much had to put all the functions close to the steam engine, otherwise you would get this leakage of energy. And what they did back then was basically to put in an electricity generator to replace the steam engine and keep everything else the same.
And if you look at it statistically, you don't see that much improvement in productivity at the very beginning of electricity. It took decades before people started to learn that, oh, actually, we don't have to put things close to the energy source; we can arrange things by function. So that is something that's going to happen to generative AI as well.
Now, if you look, people just use the technology on exactly the same kinds of tasks, and decision making is probably one of them, but again, don't do that, and other kinds of operations as well. And that is just the T. You still need to do something with the people and the process as well to make this kind of transformation really work.
[00:28:38] Speaker C: I mean, I think on the positive side, there is an opportunity to reduce error at this level. If we look at the fiduciary duties of care and obedience and loyalty and disclosure, there is data to support propositions that might not necessarily be accepted, because we all have egos at this level. And I would hope that it's seen as a tool that has benefit and value, and isn't necessarily a threat for directors at this level.
[00:29:09] Speaker B: I would hope, yes, absolutely, yes.
[00:29:12] Speaker C: And in terms of risk management, as AI becomes more involved in strategic decision making, how should directors potentially manage and mitigate the risks associated with relying on AI systems? Or are we not there yet?
[00:29:27] Speaker B: Well, it's twofold, right? You can, of course, use it to help you with this sort of assessment process.
Well, if you look at it more broadly than generative AI, I still look at AI as the way that you analyze data. And when it comes to risk assessment, it's pretty much how you crunch the data to understand your risk exposure: the impact and likelihood of the things that could happen, and how you are going to mitigate all of those. These AI technologies, of course, help with that. But the other way around, AI itself can be a source of risk as well, especially when you introduce it into your business. And I think, like any kind of technology or process, before you introduce it into your business, you have to make sure you know the value it is really going to contribute. But at the same time, it's not just about the risk it's going to generate in terms of liability. As we discussed earlier, there are some other kinds of risk it's going to create as well. Right? This sort of unfairness, this sort of bias, is probably something you may never have taken into account, because the risk was at a scale that was ignorable when humans dealt with it. Or you rely on public resources, for example in education, where people are trained to deal with this kind of bias naturally. But you can't take it as given that the AI would have been trained in that manner, the way you take it as given when it comes to people. So these kinds of changes introduce certain risks that, as a director or an executive of the company, you really, really need to bear in mind.
[00:31:33] Speaker C: You mentioned liability, and I wanted to move on to that. And let's perhaps speculate in cases where AI systems lead to business decisions that result in legal or financial consequences, how might director's liability be affected?
[00:31:50] Speaker B: Well, at this stage, I know that there has been this conversation about where the liability goes. One clear example is the self-driving car: who is going to be liable if an accident occurs? Is it going to be the driver, or is it going to be the company that manufactured the car? I think in terms of that liability, it's still quite clear, especially if you are a director of a company who introduces artificial intelligence technology. It's going to be like any other kind of technology: you can't say, oh, I don't know anything, the artificial intelligence generated everything.
So could the liability be on the artificial intelligence? Technically, it's impossible, because the one who is liable has to be a person. And there are just two species of persons on this planet: the natural person and the juristic person. And we are still quite far away from having an AI as a juristic person. So it's going to be either the company or, of course, a natural person who is in charge, which is the director. And we know that normally it's going to be both, if you can't prove that you exercised this sort of fiduciary duty of care from the very beginning of the process.
[00:33:17] Speaker C: And lastly on this subject: being in professional practice, having served your country, and being a very eminent professor here at Chulalongkorn, how do you feel about this opportunity? How do you feel about the emergence of AI within your practice? Are you excited? Are you daunted?
[00:33:38] Speaker B: You know what? I'm kind of mixed. I have read these books and listened to all these experts worldwide every day, honestly, every single day, and I still can't really say whether I am excited or fearful.
I think I'm still in the process of learning. I think what I feel is that everything is going to change, right? It's like Professor Kurzweil: he wrote the book The Singularity Is Near some years ago, and now his new book is The Singularity Is Nearer. So I think it's quite interesting. Because, me as a lawyer, what I always emphasize to my students is, look, the technology is going to change a lot. And honestly, even I myself, who understand this sort of artificial intelligence and machine learning to a certain extent, dare not claim that I understand it that much anymore, just a few years on.
So I think every single profession, economist or lawyer, will have to put their thinking hat on and consider this, both as things stand and as the technology is projected to advance.
How would you, for example, as a lawyer, design the kind of incentive mechanism or instrument that would serve the purpose you set out, for example, protecting the public from this kind of misuse of technology?
How would you protect citizens from surveillance, from military misuse, or from misuse by states that abuse their power? Those are the kinds of things that really perplex me. But of course, if you translate that into excitement, yeah, I'm excited about how we will change the way we look at the problem and try to advance our tools. You are a lawyer, you advance the legal instruments; you are an economist, you advance economic policy to really cope with this change. Or perhaps we actually have to take a step back and ask an even grander question, about the relevance of your field in tackling that problem, or whether it should be something at a higher level. Yeah, that is something that has been perplexing me. And if you cast it as excitement, yeah, I think I'm really excited.
[00:36:21] Speaker C: The last part is I want to talk to you about when you served at the Ministry of Commerce. Now, you worked closely with the deputy prime minister during trade agreements with the EU and UAE, amongst others. How did that progress?
[00:36:34] Speaker B: Well, we are progressing well on both of the agreements, the frameworks that you were mentioning. The Thailand-EU FTA negotiation is still ongoing.
The CEPA, the Comprehensive Economic Partnership Agreement with the UAE, is still ongoing as well.
And those are a few of the negotiations that are ongoing here.
For these negotiations, I think it's quite clear that they are going to open so many doors for the economy and for our industries, and that is something I really, really look forward to. But when you ask about my experience working at the Ministry of Commerce, I would say it was unimaginable.
It was roughly two to three years, almost three years, at the Ministry of Commerce, working with two ministers: Minister Jurin Laksanawisit and Minister Phumtham Wechayachai, who is the current minister.
And I think it's such a great experience, because personally, when you watched the telly when you were young and saw these ministers talking to each other in these high-level meetings, at least for me, I was always wondering: what do they talk about? How do these people at the very top executive and administrative level of Thailand actually work, and how do they actually make decisions?
That's something I have really learned a lot about in the past two or three years. It's been like a very intensive course. And again, one thing to take away is that there is no easy decision.
Something may seem obvious from the outside, but whatever decision you make at that level always involves a trade-off. Someone is going to gain something; someone is going to lose something. It's all about how you make sure you still stick to your principles while staying aware of all these trade-offs, maximizing the benefit you get, and at the same time coming up with the right solutions to mitigate the impact. I know I'm starting to answer like a politician, but that's pretty much what I really experienced. It is very difficult, and it uses a lot of tools, a lot of knowledge, and a lot of time. And it's not just hard knowledge, not just being able to derive a model; it's about whom to talk to and when to do what. So that's a very, very interesting thing to learn.
[00:39:35] Speaker C: There's nothing wrong with talking like a politician. I tend to live my life vicariously through the people I talk to and the interviews I conduct. And seeing you with the press, the formal photographs of you engaging in these sessions, you looked extremely proud, and it must have felt very special for you.
[00:39:59] Speaker B: Yeah, it's always a great privilege to serve your country. The bottom line is that you know each and every contribution you make affects the lives and livelihoods of people. And not everyone gets that opportunity every day. So of course, it's such a privilege and such an honor to have had the opportunity.
[00:40:27] Speaker C: And the goal is for cross-border and transit trade to boost its value to 2 trillion baht by 2027. You can respond either as a politician, as an ajarn, or as a legal practitioner.
Is this achievable?
[00:40:42] Speaker B: Oh, it depends on so many factors. I don't know if that sounds politician-esque or not, but we know that a lot of policies have been planned, and these numbers come from estimations. Most of the time they rest on a set of conditions: that all the proposed policies materialize, and that there is no accident, whether from natural causes, a crisis, or external factors. Of course it's achievable if you can cope with all that. For example, we do have these flagship policies in Thailand: you talk about the digital wallet, which rests on certain assumptions that it will boost the economy, at least in the short term; the Landbridge infrastructure; and all the others. As I said, if they fully realize their potential, and at the same time the trade agreements can be struck within the expected timeframe and we boost trade to the level we expect, given that there is no crisis, no external shock, and still no war, which I really keep my fingers crossed about, then given all that, the answer is yes, of course. But at the same time, we know that at times we have to adjust as we go and be on the ground as well. So that's pretty much my response to that.
[00:42:24] Speaker C: And can you say a few words to our audience who run listed companies about the importance of sustainable practices and green financing, aligned with Thailand's goals for economic development and carbon neutrality by 2050?
[00:42:41] Speaker B: Yeah, I think it's quite clear that it's something you can't avoid anymore, because we know that.
[00:42:49] Speaker C: Why?
[00:42:50] Speaker B: Well, if you interact with the Stock Exchange of Thailand in any way, it is pretty much imposed as what you could call an obligation. Of course it is, right? It's out there. And it's quite obvious that some regulation, some regulator, will put certain obligations on you that you have to comply with. But at the same time, consumers are really important. It's not just supply push; it's demand pull as well, because the demand, especially from your customers, and I don't think it's just your customers but your shareholders too, they care a lot about this.
And so many governments have started to give these kinds of incentives to firms, not just to reduce your burden. So on the one hand, we talk about the penalties you might face for failing to comply with certain regulations; we know that in the EU there are regulations such as the CBAM that can affect practice. And within the stock market there is ESG compliance: that is one thing where you face negative consequences if you don't comply, but you can also gain a surplus, because of government incentives and because of the requirements that so many countries attach to their investment regimes and investment funding. So it's kind of unavoidable, but at the same time it's going to be beneficial to your business in the longer run, because those suppliers, those in your supply chain, who fail to comply with this trend will bear greater and greater costs, and that will convert into your costs as well, and eventually into your competitiveness. So it matters on all fronts. That's why it is really, really relevant to know.
[00:45:10] Speaker A: Thanks to our guest, Dr. Pirapat Shoksuwatanaskul. On our website you can find the programme notes and a reading list for this episode. Hacking Kaizen is produced by DSA. We'll be back at the same time next week, but until then, from me, Graeme Newman, many thanks for listening.