Update: 21.03.2023
Since ChatGPT's launch in November 2022, generative artificial intelligence (AI) has captivated a huge audience. It has many potential use cases for different industries and enterprises, such as product design, content creation, data augmentation, and personalisation.

Yet generative AI also poses challenges and risks in a professional setting, such as legal and ethical issues, quality assurance, user trust, and human-AI collaboration.

While ChatGPT continues to evolve and already powers products like Microsoft's Edge web browser, awareness of the challenges and risks is becoming increasingly relevant. To use such tools effectively and safely, it is crucial to understand how they work before engaging with AI technologies in your field. Throughout this article, we use ChatGPT as a basis to outline core principles that users should consider when using generative AI applications, or similar generally available tools, in a professional capacity.

Background and purpose: advancing digital intelligence
Generative AI applications are algorithmic systems that apply machine-learning techniques to generate new content and data, including text, images, video and audio. ChatGPT, for example, uses deep learning to mimic human-like responses to user input (known as 'prompts') in a conversational manner. However, without proper guidance, the generated responses may raise legal and privacy issues as well as ethical and moral dilemmas. Our first core principle for any generative AI application is therefore to identify and understand the values of its developer. Only then can a user make informed decisions that support safe and responsible use of AI.

OpenAI, the corporation, research laboratory and developer behind ChatGPT, was founded in 2015 as a non-profit research company by a group of influential individuals. Its purpose is "to ensure that artificial general intelligence (AGI) – by which we mean highly autonomous systems that outperform humans at most economically valuable work – benefits all of humanity." Microsoft, as an investor, now holds the largest stake due to three multi-billion-dollar investments since 2019.

How does generative AI work?
ChatGPT is a 'Large Language Model' that has been trained on vast amounts of data. It generates new text based on the patterns it has learned, which results in seemingly intelligent responses. Nevertheless, it does not have its own thoughts, opinions or knowledge. The output is determined by the user's prompt and the data that is statistically most likely to match it: by design, the application produces the output most similar to its training data. The likelihood of an accurate answer is therefore inextricably linked to the training data and the user prompt; the better the data and the prompt, the better the answer.
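
To make this statistical process concrete, consider the minimal Python sketch below. It is purely illustrative and uses invented probabilities; a real large language model learns such distributions over tokens from billions of examples rather than storing facts or looking up answers.

    import random

    # Purely illustrative: these probabilities are invented for demonstration.
    # A real model learns a distribution over tokens from its training data.
    next_token_probabilities = {
        "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "London": 0.03},
    }

    def generate(prompt: str) -> str:
        # The model does not 'know' the answer; it samples the statistically
        # most plausible continuation, so an incorrect token can still appear.
        candidates = next_token_probabilities[prompt]
        return random.choices(list(candidates), weights=list(candidates.values()), k=1)[0]

    print(generate("The capital of France is"))  # usually 'Paris', but not guaranteed

The same mechanism explains why a fluent, confident-sounding answer is no guarantee of a correct one.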

Our second core principle is to realise that ChatGPT has no human-like understanding of the subject matter, and that its outputs may at times be inaccurate, untruthful or otherwise misleading. Many users assume that applications like ChatGPT comprehend the words and sentences they use, given the sophisticated deep-learning methods and transformer architecture involved. It is important to stress that this is not the case.

"I recognise the value of using ChatGPT in a professional setting, but it's crucial to address potential risks such as privacy, bias, and intellectual property issues. It is our duty to use these tools responsibly and take steps to mitigate any associated risks. Remember, AI tools like ChatGPT are not a replacement for human judgement and expertise."
– Joris Willems, Head of NautaDutilh's Technology team

Risks and limitations
Generative AI can produce diverse and unpredictable outputs that do not always align with the user's expectations or needs: it is designed to produce a range of plausible outcomes, not a single correct one. Consequently, the third core principle is to acknowledge the risks and limitations of generative AI applications. For example, if the training dataset lacks information or contains outdated information, the application may generate inaccurate answers by combining incorrect snippets of information. This limitation also poses infringement risks, as the application is not aware of existing licences, intellectual property rights and trade secrets. Violations can have serious consequences, and not just financially: they can also damage customer and stakeholder trust and perpetuate harmful instructions or biases.

Understanding the risks and limitations is crucial for anyone using generative AI applications. In the case of ChatGPT, it is necessary to be aware of at least the following:

  • The AI model was trained on some 300 billion words collected from the Internet, including books, articles and blogs. This may include personal information that was obtained without consent.
  • The application uses the user's input as new data to train itself under supervision. When using ChatGPT, users may therefore unintentionally expose personal or confidential information and make it publicly available. Furthermore, OpenAI has stated that AI trainers review users' conversations with ChatGPT, including their conversation history, to ensure compliance with its policies and security requirements and to improve OpenAI's systems.
  • Although OpenAI recently established a data deletion process, certain prompts still cannot be deleted from a user's history, and deletion does not undo any changes to the algorithm and/or dataset that have already occurred. Accordingly, it is uncertain whether sensitive data shared by an individual can ever be permanently deleted.
  • OpenAI’s privacy policy currently only covers privacy rights in California; it does not address other state laws or the General Data Protection Regulation (GDPR).
  • Much of ChatGPT’s training data predates its launch, so the model has limited knowledge of the world and of events after 2021.
  • OpenAI has already excluded its liability for any circumstance related to the user’s use of ChatGPT.

These risks and limitations create many uncertainties, such as how liability for faulty implementations or generated answers is allocated among the parties in the AI production chain. The fourth core principle follows from these risks and limitations and concerns the sensitivity of a user’s input. As mentioned, the output of generative AI applications depends on, and is limited by, the parameters in the user’s prompt (i.e. their input). Because the quality of the output reflects the level of detail in the input, users may be tempted to overshare: after all, a key aspect of getting ChatGPT to do what you want is writing clear, concise prompts in specific text formats that work well. Precisely in those situations, however, the main consideration and core principle is to refrain from providing personal, confidential or sensitive information.

It is important to note that generative AI applications like ChatGPT do not offer users a visible opt-out option for controlling their personal information. Although proper input data management is essential to the accuracy and reliability of the output, the only feasible way to ensure safe usage is to avoid providing personal information at all times.
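
As a purely illustrative sketch of this principle (the patterns below are our own simplified assumptions, not a feature of ChatGPT or any other product), an organisation could strip obvious identifiers from prompts before they are submitted. Such filtering is no substitute for simply keeping personal, confidential or sensitive information out of prompts altogether.

    import re

    # Illustrative only: these simplistic patterns will not catch all personal
    # data and are no substitute for keeping such data out of prompts entirely.
    REDACTION_PATTERNS = {
        "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "[PHONE]": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    }

    def redact(prompt: str) -> str:
        # Replace obvious personal identifiers before the prompt leaves the organisation.
        for placeholder, pattern in REDACTION_PATTERNS.items():
            prompt = pattern.sub(placeholder, prompt)
        return prompt

    print(redact("Summarise the complaint filed by j.devries@example.com, tel. +31 6 1234 5678."))
    # -> "Summarise the complaint filed by [EMAIL], tel. [PHONE]."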

Unconditional trust? An absolute no-go
Our fifth and final core principle relates to the credibility of a generative AI application. The credibility of the information generated by such applications is contingent upon its independent verification through reliable sources. Thus, it is important to exercise caution and avoid overreliance on the output of generative AI applications without proper verification.

For ChatGPT, this is especially true given the above-mentioned limitations of the database. Its technology is not immune to bias and other sources of error in the data and algorithms, and it does not account for future legislation or societal developments. Users must acknowledge that the data used to generate responses might not be trustworthy or verifiable. Furthermore, experts can already identify patterns in how ChatGPT generates responses, making it possible to pinpoint the source of generated output. As a result, users should exercise due restraint when using the generated output in a professional capacity, as associated liability within human-AI collaboration may not be transferable to another party. 

Conclusion
Users should make a deliberate and informed decision when using publicly accessible generative AI applications in a business context, based on a clear understanding of the application's purpose, the data being used and the manner in which liability risks are dealt with. It is advisable to establish internal guidelines and protocols that ensure proper usage within your organisation, for example by describing permitted or prohibited use cases and by limiting user input to publicly available information. Yet even that may not fully mitigate the potential risks for your organisation. Our team can provide guidance on the appropriate integration of AI tools into professional operations and advise on internal guidelines for their professional use. For further information on this topic, please contact us.
