ChatGPT is the topic on almost everyone’s lips in Vietnam at the moment. Even high school students are being trained on this technology so that they will know how and when to use it, and when not to, in their future careers. The big question, nonetheless, concerns the legal implications this technology carries, especially in Vietnam.
By and large, ChatGPT is a chatbot that provides solutions to complicated tasks, such as writing essays, coding, or drafting business proposals. Drawing on a large dataset spanning myriad subjects, ChatGPT analyzes human language, learns its patterns, and generates natural-sounding text that is nearly indistinguishable from that of a native human speaker. According to public media, just a few weeks after launch, this AI had reached almost 10 million users and shows no sign of stopping there.
The open question is whether the operation of ChatGPT would pose any concern for local regulators. Web-based chatbots such as ChatGPT, with which users can ask questions and interact, are not expressly envisaged under Vietnamese laws. Vietnam currently has no specific laws regulating AI, its development and training, or its application in products and services. To this extent, no immediate prohibitions or restrictions appear to apply to ChatGPT. Local regulators’ reactions to web-based chatbots and other AI-powered products remain to be watched closely in the coming months.
The operation and use of ChatGPT, however, are not risk-free. ChatGPT, like most other AI models, requires massive amounts of data for self-training and improvement. The quality of the data sources will largely determine the results that ChatGPT generates. Among other concerns, websites associated with the misuse of personal information have long been a threat to data privacy, a threat that large-scale application of this AI technology may now exacerbate (for example, through unauthorized mass exfiltration of, and access to, confidential data in cyberspace). Phishing emails, malicious software, and botnet attacks are also cited as security risks of ChatGPT, all of which call for regulatory supervision. Given the increasingly cautious approach toward cybersecurity and personal data protection, it would be no surprise if Vietnamese regulators took more stringent legislative and enforcement action in the future. The first such initiatives will likely be the enforcement of new cybersecurity rules and the enactment of the country’s first-ever comprehensive legal instrument on personal data protection.
From another perspective, by offering ChatGPT, the service providers may be deemed traders in a distance contract (i.e., a contract entered into between a consumer and a trader via an electronic device or by telephone), as well as providers of a continuous service (a service provided for an indefinite term), and are therefore subject to certain information disclosure and consumer protection obligations under Vietnamese consumer protection laws. As mentioned, a data-trained AI like ChatGPT can only learn to deal with situations reflected in the dataset it has seen. Where the data is inaccurate (e.g., derived from recently amended legislation) or biased (e.g., not representative in terms of gender or race), the outputs may be flawed. All of this can adversely affect the adoption of this technology, especially when no other checks and balances are put in place.