StrategiX Advisory

Governance of Generative AI

If you have not heard the latest buzz about ChatGPT, it is time to come out from under that rock! For the uninformed, GPT stands for Generative Pre-trained Transformer.

A small primer on what the name means: Generative, because it uses unsupervised or semi-supervised learning to generate new data (text, audio, video and images) based on a given input. Pre-trained, because it learned from a large corpus of information prior to 2021 (which also explains why some of its information is stale). Transformer, a type of neural network that processes sequences of data rather than individual data points, which makes it very efficient at capturing the context of text and holding conversations. A key feature of ChatGPT is that it is not "stateless" like most other AI bots: it can remember what you have told it earlier in a conversation, which makes it very useful for personalized conversations on any topic. Since it has been trained on billions of data points reflecting human opinion, it comes across as moderate by design with no particular slant, which is welcome because a lot of AI carries the same inherent bias as its training data.
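
To make that conversational "memory" concrete, here is a minimal sketch (not from the original article) of one common way to carry context across turns: simply resend the earlier messages with each new request. It assumes the pre-1.0 openai Python package and its ChatCompletion endpoint; the model name, API key and prompts are illustrative placeholders.

```python
# Minimal sketch: carrying conversational context across turns.
# Assumes the pre-1.0 `openai` Python package and a chat-capable model;
# the model name, API key and prompts below are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The running message history is what gives the bot its apparent "memory":
# each request resends the earlier turns so the model can use them as context.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "My farm is in Maharashtra and I grow cotton."},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
reply = response["choices"][0]["message"]["content"]
history.append({"role": "assistant", "content": reply})

# A follow-up question can rely on the earlier turns without restating them.
history.append({"role": "user", "content": "Which government subsidy forms apply to me?"})
response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
print(response["choices"][0]["message"]["content"])
```

In other words, the "statefulness" a user experiences is really the application replaying the conversation into the model's context window on every turn.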

ChatGPT went viral, amassing a million users in its first five days, and generated a lot of comic interest from teens and adults who posted its outputs and jokes on their feeds. There have been concerns that this will lead to academic dishonesty, as traditional plagiarism checkers such as Turnitin could not detect any plagiarism in the bot's output: it does not work like a search engine that cuts and pastes text, but instead leverages natural language processing (NLP) to generate text contextualized to the question. The Internet went abuzz over a report that it recently passed a test created by a Harvard professor.

The applications are limitless, ranging from an example Satya Nadella cited, in which an Indian farmer was able to fill out government forms in a portal, to assisting developers by autocompleting their programming tasks. According to Gartner, by 2025, 50% of all drug discovery will use Generative AI; in another prediction, 30% of all outbound marketing messages from large organizations will be generated by AI.

While there are several positives, the primary limitation is governance and trust. Because it is not governed, it has the potential to generate massive amounts of disinformation. How can one trust the output? What if it is misused to create dangerous malware? Artists are worried that the quality of the output is so high that it may generate original-looking pieces in their style that would be indistinguishable even to the artists themselves.

Organizations such as Microsoft (which has invested enormous sums in OpenAI) and other leading players should direct R&D investment toward addressing the grave issues of governance and trust in Generative AI to avoid far-reaching consequences. Policy makers and government bodies should take a close look at how they can prevent large-scale licensing and copyright infringement issues and curb the mass propagation of misinformation.

We definitely have taken an exponential leap forward in AI and need to quickly get our arms around it with appropriate compensating controls to avoid the rapid destruction of the ethical fabric of our society.
