PassGPT is a password guessing model trained on a large collection of leaked passwords; by learning how people construct passwords, it can guess unseen passwords and help users choose stronger, more complex ones.
- Researchers at ETH Zürich and others developed PassGPT, a password guessing model trained on leaked passwords.
- PassGPT uses progressive sampling to build complex passwords on a character-by-character basis.
- PassGPT is able to guess passwords not seen by other models, learns patterns in multiple languages, and can analyze password strength.
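The progressive sampling mentioned above can be sketched as a simple loop: draw one character at a time, each conditioned on everything generated so far. The `next_char_probs` function below is a toy stand-in (an assumption for illustration) for PassGPT's actual GPT-2-based model, which would produce a learned distribution over the next character:

```python
import random

# Toy stand-in for PassGPT's trained model: given the password prefix,
# return a probability distribution over the next character.
# (Hypothetical; the real model is a GPT-2-style transformer.)
VOCAB = list("abc123") + ["<end>"]

def next_char_probs(prefix: str) -> dict:
    # Uniform toy distribution; a real model conditions on the prefix.
    p = 1.0 / len(VOCAB)
    return {ch: p for ch in VOCAB}

def sample_password(max_len: int = 12, seed: int = 0) -> str:
    """Progressive sampling: build the password character by character,
    stopping when the model emits an end-of-password token."""
    rng = random.Random(seed)
    pw = ""
    for _ in range(max_len):
        probs = next_char_probs(pw)
        chars, weights = zip(*probs.items())
        ch = rng.choices(chars, weights=weights, k=1)[0]
        if ch == "<end>":
            break
        pw += ch
    return pw

print(sample_password())
```

Because each character is sampled explicitly, this scheme also lets the generator enforce constraints (length, character classes) at every step, which is what makes character-level sampling attractive for password modeling.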
Researchers at ETH Zürich, the Swiss Data Science Center, and SRI International in New York have used OpenAI's GPT-2 architecture to develop PassGPT, a password guessing model built on a large language model (LLM) and trained on a large corpus of passwords leaked in various breaches and exploits.
The main goal of PassGPT is to model the patterns hidden in human-generated passwords, both to help users choose stronger, more complex passwords and to predict likely passwords from a given set of inputs.
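One way a model like this can flag weak passwords (a sketch of the general idea, not PassGPT's exact method) is to score a candidate by its probability under the model: a password that follows common human patterns gets a high probability and is therefore more guessable. The toy bigram table below is a hypothetical stand-in for the learned model:

```python
import math

# Toy character-bigram model standing in for PassGPT (hypothetical):
# frequent transitions get higher probability, so pattern-heavy
# passwords like "password" score as more guessable.
COMMON = {"pa": 0.3, "as": 0.3, "ss": 0.3, "sw": 0.3,
          "wo": 0.3, "or": 0.3, "rd": 0.3}
BASE = 0.01  # probability assigned to any other transition

def log_prob(password: str) -> float:
    """Sum of log-probabilities of each character transition."""
    lp = 0.0
    for a, b in zip(password, password[1:]):
        lp += math.log(COMMON.get(a + b, BASE))
    return lp

def guessability(password: str) -> float:
    # Higher log-probability => more predictable => weaker.
    return log_prob(password)

print(guessability("password") > guessability("x7#qLm9z"))
```

Under this scoring, "password" comes out far more guessable than a random-looking string, which is exactly the signal a strength meter built on such a model would use.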