Months after the launch of the highly popular ChatGPT, tech experts are flagging issues associated with chatbots, such as data snooping and misinformation.
ChatGPT, developed by Microsoft-backed OpenAI, has proven to be a useful artificial intelligence (AI) tool as people use it to write letters and poems. But closer scrutiny has revealed multiple inaccuracies that cast doubt on its reliability.
Reports also suggest it can pick up biases from the data it is trained on and produce offensive content that could be sexist, racist or otherwise harmful.
For example, Rajeev Chandrasekhar, the Minister of State for Electronics and Information Technology, shared a tweet stating, “Microsoft’s AI chatbot told a reporter that it wants to be ‘free’ and is spreading propaganda and misinformation. It even urged the reporter to leave his wife.”
Meanwhile, when it comes to China's plans for the AI chatbot race, big companies like Baidu and Alibaba have already entered the fray. As for biased AI chatbots, Beijing, known for its censorship and propaganda practices, is widely expected to shape any homegrown models accordingly.
Bad data
Amid the excitement around such chatbots, many people overlook the fundamental threats associated with these technologies. For example, experts agree that chatbots can be poisoned by incorrect information, creating a misleading data environment.
Priya Ranjan Panigrahy, founder and CEO of Ceptes, told News18: “Not only a misleading data system, but also how the model is used, especially in applications such as natural language processing, chatbots and other AI-driven systems, can be affected at the same time.”
Major Vineet Kumar, founder and global president of Cyberpeace Foundation, believes that the quality of the data used to train AI models is crucial and that bad data can lead to biased, inaccurate or inappropriate responses.
He suggested that the creators of these chatbots should create a strong and robust policy framework to prevent any misuse of technology.
Kumar said: “To mitigate these risks, it is important for AI developers and researchers to carefully collect and evaluate the data used to train AI systems, as well as monitor and test the output of these systems for accuracy and bias.”
According to him, it is also important that governments, organizations and individuals are aware of such risks and hold AI developers accountable for the responsible development and deployment of AI systems.
Security issues
News18 asked tech experts if it’s safe to log into these AI chatbots, taking into account cybersecurity issues and snooping capabilities.
Shrikant Bhalerao, founder and CEO of Seracle, said: “Whether it is a chatbot or not, we should always think before sharing personal information or logging into a system over the internet, but yes, we should be extra careful with AI-driven interfaces such as chatbots because they can use the data on a larger scale.”
In addition, he noted that no system or platform is completely immune to hacking or data breaches; even if a chatbot is designed with strict security measures, your information could still be compromised if the system is breached.
Meanwhile, Ceptes CEO Panigrahy said some chatbots can be designed with strong security and privacy safeguards, while others can be designed with weaker safeguards or even with the intent to collect and exploit user data.
He said: “It is important to check the privacy policy and terms of service of any chatbot you use. This policy should specify what types of data are collected, how that data is used and stored, and how it may be shared with third parties.”
On this point, CPF founder Kumar stated that there may be several concerns and potential threats to consider, including privacy and security, misinformation and propaganda, censorship and suppression of free speech, competition and market dominance, as well as oversight.
He said: “While there are potential concerns about the development and use of AI chatbots, it is essential to consider the specific risks and benefits of each technology on a case-by-case basis. Ultimately, responsible development and deployment of AI technologies will require a combination of technical expertise, ethical considerations and regulatory oversight.”
In addition, Kumar stated that “ethical AI” is crucial to ensuring that AI systems, including chatbots, are used for the betterment of society and not for harm.