A team of researchers from British universities has trained a deep learning model that can steal data from keyboard keystrokes recorded with a microphone, achieving 95% accuracy.
The article discusses this new acoustic attack, which can steal data from keystrokes with 95% accuracy. The attack works by sampling the audio of each keystroke and converting it into a spectrogram, which the model uses to identify the individual key. While the article mentions that a keylogger is needed to replicate the attack, it raises the possibility that, if each key can be identified by its sound, only a video of someone typing would be needed to recover the rest of their keystrokes from the audio.
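To make the spectrogram idea concrete, here is a minimal sketch, not the researchers' actual model: short keystroke clips are turned into mel-spectrograms and a small CNN is trained to predict which key was pressed. The sample rate, clip length, number of keys, network architecture, and the random stand-in recordings below are all assumptions for illustration; a real attack would need clips labelled with the key that was actually pressed.

```python
import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 44_100   # assumed microphone sample rate
CLIP_SAMPLES = 14_700  # ~0.33 s per keystroke clip (assumption)
NUM_KEYS = 36          # e.g. a-z plus 0-9 (assumption)

# Stand-in for labelled keystroke recordings; real training data would be
# audio clips paired with the key pressed (e.g. gathered via a keylogger).
waveforms = torch.randn(256, CLIP_SAMPLES)
labels = torch.randint(0, NUM_KEYS, (256,))

# Convert each clip to a log mel-spectrogram "image" for the classifier.
to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=1024, hop_length=256, n_mels=64
)
spectrograms = torch.log(to_mel(waveforms).unsqueeze(1) + 1e-6)  # (N, 1, 64, T)

class KeystrokeCNN(nn.Module):
    """Tiny CNN that maps a keystroke spectrogram to a key prediction."""
    def __init__(self, num_keys: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_keys)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = KeystrokeCNN(NUM_KEYS)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy training loop on the placeholder data.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(spectrograms), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.3f}")
```

On real labelled recordings, the same pipeline would predict a key for each new keystroke clip, which is the step the commentary above is worried could be bootstrapped from a video of someone typing.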
If this technique were improved to require only a few sentences typed on camera instead of a keylogger, it could be used to compromise the security of individuals, including streamers.