What risks does ChatGPT pose and how to avoid them?

By Zarmeen Zehra
July 08, 2023

Federal government has issued an advisory warning of the cyber security threats the OpenAI tool may present to users

A smartphone with a displayed ChatGPT logo is placed on a computer motherboard in this illustration taken February 23, 2023. — Reuters

As businesses and content creators flock to the recently launched ChatGPT — an artificial intelligence (AI) tool for writing — the federal government has issued an advisory warning of the cyber security threats the OpenAI tool may present to unsuspecting users.

The Microsoft-backed AI tool carries critical risks in the realm of leading cyber threats, such as phishing and malware development, the Cabinet Division warned in its advisory.

It stated: "To prevent the menace of such AI-enabled exploitation, extreme caution, due diligence and due care is to be practiced on a proactive basis."

The document further shared guidelines for users' safety.

ChatGPT's malicious capabilities

Following is a non-exhaustive list of ways malicious actors can use ChatGPT:

a. Malware generation: Malware generation by ChatGPT is no longer a mere theoretical possibility. Its use is already gaining traction and is under discussion in various Dark Web forums.

b. Phishing emails: ChatGPT has demonstrated the capability to generate extremely convincing phishing and spear-phishing emails, which may well slip through email providers' spam filters.

c. Scam websites: With the lowered bar for code generation, ChatGPT can help less-skilled threat actors effortlessly build malicious websites such as masquerading and phishing landing pages. For example, malicious actors with little to no skill can use ChatGPT to clone an existing website and then modify it, build fake e-commerce websites, or run sites with scareware scams.

d. Disinformation campaigns: ChatGPT gives users access to software that can write extremely convincing prose and generate thousands of fake news stories and social media posts in a fraction of the time.

Guidelines/preventive measures

a. Prevention against phishing emails:

b. Anti-masquerading guidelines

(1) Administrators

(2) End-users

(3) Guidelines for ChatGPT users

Official phones MUST NOT be used for ChatGPT.

(4) If a security issue is encountered while using ChatGPT, report it immediately to OpenAI.

c. Prevention against disinformation campaigns: All government departments to undertake the following actions as preventive measures:

