Research by the Salt Labs team has revealed three key vulnerabilities in ChatGPT plugins.

The first vulnerability occurred in the plugin-installation flow. When a user installs a new plugin, ChatGPT redirects them to the plugin's website, where they must approve a code; once the code is approved, the plugin is installed automatically. The researchers discovered that an attacker could abuse this flow by delivering a victim a code approval tied to a malicious plugin. If the victim approved it, the attacker's plugin would be installed on the victim's account, allowing the attacker to gain access to that account.
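A minimal sketch of the attack mechanism described above. The endpoint and parameter name here are hypothetical placeholders, not the real ChatGPT approval URL format; the point is only that the approval code in the link is attacker-chosen:

```python
from urllib.parse import urlencode

# Hypothetical endpoint -- a placeholder, not the real approval URL.
APPROVAL_ENDPOINT = "https://chat.openai.com/aip/plugin-approval"

def craft_approval_link(attacker_code: str) -> str:
    """Build a link embedding a code that approves the attacker's plugin.

    A victim who opens this link and confirms the approval would have
    the attacker's plugin installed on their own account.
    """
    return f"{APPROVAL_ENDPOINT}?{urlencode({'code': attacker_code})}"

link = craft_approval_link("attacker-issued-code-123")
```

Because the approval step does not verify that the code was requested by the same user who approves it, the victim unknowingly completes an installation the attacker began.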

The second vulnerability was found in PluginLab, a framework used to develop plugins. During installation, user accounts were not properly authenticated, which could allow a malicious actor to insert a different user's identifier into the request and thereby impersonate that user.
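The flaw can be sketched as a token endpoint that derives identity from a client-supplied field rather than from the authenticated session. The field name `memberId` and both functions below are hypothetical illustrations, not PluginLab's actual API:

```python
def issue_token_vulnerable(request_body: dict) -> dict:
    # Vulnerable pattern: trusts whatever memberId the caller sends,
    # so an attacker can substitute a victim's id and receive a token
    # for the victim's account.
    member_id = request_body["memberId"]
    return {"access_token": f"token-for-{member_id}"}

def issue_token_fixed(request_body: dict, session_member_id: str) -> dict:
    # Fixed pattern: bind the issued token to the identity proven by
    # the authenticated session, rejecting any mismatch.
    if request_body.get("memberId") != session_member_id:
        raise PermissionError("memberId does not match authenticated user")
    return {"access_token": f"token-for-{session_member_id}"}
```

The fix is the general rule that identity must come from authentication, never from an attacker-controllable request field.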

The final vulnerability involved several plugins whose OAuth redirection could be manipulated, leading to account takeover within the plugin. By sending a crafted link to a user, an attacker could capture the user's credentials.
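This is the classic OAuth redirect-manipulation pattern, sketched below with hypothetical names throughout (`AUTH_ENDPOINT`, `example-plugin`, and the callback URL are placeholders, not any real plugin's values). When the authorization server fails to validate `redirect_uri`, the victim's authorization code is delivered to a server the attacker controls:

```python
from urllib.parse import urlencode

# Hypothetical authorization endpoint of a vulnerable plugin.
AUTH_ENDPOINT = "https://plugin.example.com/oauth/authorize"

def craft_malicious_auth_link(attacker_server: str) -> str:
    """If the server accepts arbitrary redirect_uri values, the
    victim's authorization code is sent to attacker_server."""
    params = {
        "response_type": "code",
        "client_id": "example-plugin",
        "redirect_uri": attacker_server,  # should be rejected, but is not
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

# The standard fix: accept only pre-registered redirect URIs.
ALLOWED_REDIRECTS = {"https://chat.openai.com/aip/example-plugin/oauth/callback"}

def is_allowed_redirect(uri: str) -> bool:
    return uri in ALLOWED_REDIRECTS
```

Exact matching against a registered allow-list, as in `is_allowed_redirect`, is the mitigation recommended by the OAuth 2.0 specification.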

The researchers disclosed the vulnerabilities to OpenAI and the affected third parties, and the issues have since been addressed.