What is AI?
Artificial Intelligence, or AI, refers to learning algorithms that adapt software behavior in response to a variety of inputs. You have probably interacted with AI already: it powers the customer service chatbots that tell you whether your order has been dispatched and handle other routine queries. AI allows those chatbots to analyze what you type and respond to questions in a natural way.
But AI is advancing rapidly. While simple website chatbots illustrate the kind of tasks AI has handled for years, tools like ChatGPT, which turns questions and prompts into complete, natural-sounding articles and can even write computer code, show where generative AI is heading.
What are the uses of Generative AI in business/finance?
As the basic AI chatbot shows, AI systems, especially the more refined and complex generative AI solutions now available, are excellent at taking input from customers and providing data-driven responses. Where simple systems may only look up order numbers and shipping data, today's more comprehensive AI systems can manage account status and other support needs, make product recommendations, and facilitate internal processes built on data analysis.
By leveraging AI in this way, organizations can streamline both their customer-facing activities and internal operations, building in efficiencies that deliver lower costs and faster responses.
The ability to look up, analyze, and manage large volumes of data is at the heart of AI, and this makes the technology particularly useful across a vast array of industries, basically anywhere that data drives productivity.
The technology can also support your existing processes, offering more accurate data analysis that helps build more effective insight and planning. However, while AI seems to offer all things to all businesses, there are some concerns.
The major issue with AI is its lack of transparency: most customers do not understand how it works. In an environment where customers are keenly aware of security risks, the idea of handing the management of large amounts of data, including potentially sensitive customer information, over to software can lead to push-back. Explainable AI, systems whose decisions people can actually understand, is important for building trust and easing concern.
Even if an effective explanation of how AI works can address customers’ concerns, large volumes of data held in one environment are always a security risk. To minimize the risk of unauthorized access and data breaches, cybersecurity must be an essential part of any AI adoption.
Finally, while we call it artificial intelligence, AI systems are only as good as the data they learn from. Beyond the biases that can enter an algorithm through the technical limitations of its design, if the input data is biased in any way, the AI’s outputs will be distorted and follow that same bias. To ensure AI systems deliver fair and dependable outputs, rigorous checks on training data, algorithmic fairness, and bias mitigation measures are required.
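To make the idea of checking training data for bias a little more concrete, here is a minimal sketch of one common check, a demographic-parity comparison between two groups. The column names ("group", "approved"), the toy data, and the single metric are illustrative assumptions only; a real fairness audit would use far more data and several metrics.

```python
# Minimal sketch of a demographic-parity check on labeled training data.
# The field names and toy records below are hypothetical, not from any
# specific framework or regulation.

def positive_rate(records, group):
    """Share of records in `group` with a positive outcome."""
    rows = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

def parity_gap(records, group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(records, group_a) - positive_rate(records, group_b))

# Toy training set: group A is approved far more often than group B.
data = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "A", "approved": 1},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

gap = parity_gap(data, "A", "B")
print(f"parity gap: {gap:.2f}")  # a large gap flags the data for review
```

A large gap between groups does not by itself prove unlawful bias, but it is exactly the kind of signal that should trigger the rigorous review of training data described above.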
Generative AI – Legal Framework
Legal specialists, as well as national and international lawmakers, have started questioning whether the current legal framework that might be interpreted as applying to generative AI is effective and actually sufficient. The versatility of such an innovative technology touches on many different areas, so the legal framework that applies to it proves to be complex, fragmented, and multi-layered. In fact, the risks potentially arising from the use of generative AI in a business context relate to a series of very different areas of law, ranging from privacy and cybersecurity to employment, intellectual property, and consumer protection, to mention just a few.
The EU, as part of its digital strategy, is working on a comprehensive legal package (the so-called “AI Act”) addressing some AI-related matters. The aim is to ensure a more responsible, transparent, traceable, non-discriminatory, and environmentally friendly development and use of this technology. Its starting points are a uniform definition of AI and the classification of AI technologies according to risk-based standards. A pillar of the AI Act, however, is the oversight principle: AI systems should be overseen by people, rather than by other automated technologies.
The need for human oversight of AI technologies, even if mitigated by risk-based standards, will require businesses that use generative AI to meet specific requirements. In this respect, corporate governance will play a pivotal role. Such businesses will need a sophisticated governance model encompassing oversight and record-keeping policies and procedures, as well as adequate risk-management tools, to ensure AI-related risks are effectively mitigated and harmful outcomes are prevented.
Oversight of the use of generative AI is essential whether the technology is used within the decision-making process or outside it (as a complementary tool within the business process). In the first scenario, for instance, it may be advisable to clearly identify the matters on which AI can actually be used for decision-making purposes (excluding the most sensitive ones), and to impose disclosure obligations on internal users so that management is aware of such use on a case-by-case basis. In the second scenario, in addition to dedicated internal policies and tools aimed at avoiding data breaches, it may be necessary to schedule routine tests to assess the quality and fairness of the AI’s output.
van Berings helps its Clients navigate the complex world of generative AI regulations, offering them sophisticated legal advice on the set-up of tailored governance models and internal policies addressing this new business challenge.