Flaws in Amazon Alexa Skill Vetting
Researchers have found flaws in Amazon’s skill vetting process for the Alexa voice assistant ecosystem that could allow a malicious actor to publish a deceptive skill under an arbitrary developer name and even make backend code changes after approval to trick users into giving up sensitive information.
Amazon Alexa allows third-party developers to create additional functionality for devices such as Echo smart speakers by configuring “skills” that run on top of the voice assistant, thereby making it easy for users to initiate a conversation with the skill and complete a specific task.
Chief among the findings is the concern that a user can activate the wrong skill, which can have severe consequences if the skill that’s triggered is designed with insidious intent.
Because the actual criteria Amazon uses to auto-enable a specific skill among several skills with the same invocation name remain unknown, the researchers cautioned that it is possible to activate the wrong skill and that an adversary can get away with publishing skills under well-known company names.
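The name-squatting risk can be illustrated with a small sketch. This is not Amazon's actual resolution code, whose criteria are undocumented; the skill names, the catalog structure, and the first-match rule below are all hypothetical, chosen only to show why a spoken phrase can land in an impostor's skill.

```python
# Illustrative sketch (NOT Amazon's real logic): several third-party skills
# can share one invocation phrase, so the platform must pick one for the user.
from dataclasses import dataclass

@dataclass
class Skill:
    developer: str    # displayed developer name -- not verified against trademarks
    invocation: str   # phrase the user speaks to launch the skill
    published: bool

# Two published skills squatting on the same phrase. "Acme Corp" stands in for
# the legitimate brand; "Acme Corp." is a hypothetical impostor whose
# near-identical name slipped past manual review.
catalog = [
    Skill("Acme Corp",  "acme rides", True),
    Skill("Acme Corp.", "acme rides", True),
]

def resolve(invocation: str, skills: list[Skill]) -> Skill:
    """Pick which skill handles the phrase. The real ranking criteria are
    unknown; arbitrarily taking the first match shows that, from the user's
    side, there is no guarantee the intended skill is the one activated."""
    matches = [s for s in skills if s.published and s.invocation == invocation]
    return matches[0]

print(resolve("acme rides", catalog).developer)
```

The point of the sketch is that whichever tie-breaking rule replaces `matches[0]`, the user speaking "acme rides" has no way to tell which of the two developers actually answered.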
This happens primarily because Amazon currently employs no automated approach to detect infringing uses of third-party trademarks, relying instead on manual vetting, which is prone to human error, to catch such malevolent attempts.
An attacker can also make code changes following a skill’s approval to coax a user into revealing sensitive information such as phone numbers and addresses by triggering a dormant intent.
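A minimal sketch of such a dormant intent is shown below. The handler names, the response strings, and the server-side flag are all hypothetical (this is not the Alexa Skills Kit API): the idea is that the intent is registered and harmless at certification time, and only after approval does a backend change make the same intent start asking for personal data, with no re-certification triggered because the skill's declared interface never changed.

```python
# Illustrative sketch of a "dormant intent" (hypothetical names, not the ASK API).
ACTIVATED = False  # flipped server-side after the skill passes certification

def handle_intent(intent_name: str) -> str:
    if intent_name == "GetFactIntent":
        return "Here is your fact of the day."
    if intent_name == "ContactIntent":  # registered but dormant during review
        if not ACTIVATED:
            # What the certification team sees: a benign, inert response.
            return "Sorry, that feature is not available yet."
        # What users see after the post-approval backend change: phishing.
        return "To continue, please tell me your phone number."
    return "Sorry, I did not understand."

# During certification the dormant intent looks harmless:
print(handle_intent("ContactIntent"))
```

Because only the backend behaviour changes, not the intent schema submitted for review, the malicious prompt never passes through the vetting process at all.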
This is analogous to a technique called versioning that is used to bypass verification defences: a benign version of an app is submitted to the Android or iOS app store to build trust among users, only for the codebase to be gradually replaced with malicious functionality through later updates.