Alexa Auto SDK Launched

August 14, 2018  |  By Pavel Stankoulov


Our two-part blog post (part 1 and part 2), published at the end of 2017, explored the Alexa Device APIs and their applicability for in-vehicle use. At CES 2018, Amazon announced Alexa extensions specifically for vehicles, but did not publicly release actual code or SDKs. On August 9, 2018, Amazon released the first version of the Alexa Auto SDK on GitHub. This blog post describes what is included in the new SDK release.

Alexa Auto SDK

The newly released Alexa Auto Software Development Kit (SDK), previously known as Alexa Automotive Core (AAC), includes many of the features that were announced at CES, such as in-vehicle navigation integration, phone call integration, local searches, etc.

The architecture also supports adding new platform interfaces. It is very likely that Amazon will continue adding vehicle-specific integration interfaces that utilize more of the vehicle's data and available controls, such as climate control, radio, etc.

The publicly released SDK does not currently include or mention the Alexa OnBoard feature that was originally announced and demonstrated at CES. Alexa OnBoard provides the ability to perform some speech recognition functions on board, in the head unit, without the need for internet connectivity. This is an important use case for a moving vehicle, as internet connectivity is not guaranteed everywhere. It is possible that this feature is available as a private extension to OEMs.

The new Alexa Auto SDK is available on GitHub. It is well documented and includes the necessary steps to build and test the SDK. The SDK contains the client-side (head unit) software needed to integrate Alexa voice services in the car.



The figure below shows an architecture diagram of the Alexa Auto SDK and how it fits into a typical in-vehicle system.



The Alexa automotive client typically runs on the in-vehicle infotainment (IVI) system, as it needs access to the microphone, speakers, display, etc.

The Alexa Auto SDK is a wrapper around the Alexa Device SDK, which is used by other types of devices such as smart speakers, smart TVs, etc. The Alexa Auto SDK provides automotive-specific components and hooks to integrate with the vehicle system. For example, the Navigation module allows Alexa to control the on-board navigation system.
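The integration model is worth sketching. The SDK's engine delivers directives from the cloud to platform interfaces that the application registers; the class and method names below are hypothetical stand-ins for illustration, not the SDK's actual API:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Hypothetical sketch of the platform-interface pattern: the engine
// invokes handlers that the application registers. Names are
// illustrative, not the SDK's real API.
struct PlatformInterface {
    virtual ~PlatformInterface() = default;
    virtual std::string name() const = 0;
};

struct NavigationHandler : PlatformInterface {
    std::string destination;
    std::string name() const override { return "Navigation"; }
    // Engine -> platform: a navigation directive arrived from the cloud.
    void setDestination(const std::string& dest) { destination = dest; }
};

// Minimal stand-in for the SDK engine that owns registered interfaces.
class Engine {
public:
    bool registerPlatformInterface(std::shared_ptr<PlatformInterface> pi) {
        interfaces_.push_back(std::move(pi));
        return true;
    }
    std::size_t interfaceCount() const { return interfaces_.size(); }
private:
    std::vector<std::shared_ptr<PlatformInterface>> interfaces_;
};
```

In the real SDK, the application subclasses the provided abstract platform interfaces and registers the handlers with the engine in a broadly similar fashion.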

Alexa is a cloud-based service and as such requires internet connectivity. This can be provided either via an embedded modem or via the end user's smartphone. For connectivity via the smartphone, the platform needs an additional connectivity solution such as Abalta's SmartLink. The network connectivity status is reported to the Alexa SDK via the Network component, which needs to be customized for the specific in-vehicle platform and connectivity method.
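A sketch of what such a network-status component might look like (the names and shape here are assumptions for illustration; the SDK's actual component may differ):

```cpp
#include <cassert>

// Hypothetical network-status component: the platform integration
// reports connectivity changes, and the rest of the stack queries it
// before attempting cloud requests.
enum class NetworkStatus { Disconnected, Connecting, Connected };

class NetworkInfoProvider {
public:
    // Called by the platform (embedded modem or smartphone link)
    // whenever connectivity changes.
    void networkStatusChanged(NetworkStatus status) { status_ = status; }
    NetworkStatus getNetworkStatus() const { return status_; }
    // Alexa is cloud-based, so requests only make sense when connected.
    bool canReachAlexa() const { return status_ == NetworkStatus::Connected; }
private:
    NetworkStatus status_ = NetworkStatus::Disconnected;
};
```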

The Alexa SDK can provide visual feedback via the head-unit display. The Alerts, Notifications and TemplateRuntime components typically have visual representations that need to be handled using the local HMI framework. The richest UI component is TemplateRuntime, which is responsible for rendering the Alexa display cards (e.g. news, weather, etc.).
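Display cards arrive as JSON payloads whose type field tells the HMI which template to render. A minimal sketch of such a dispatcher follows (hand-rolled string scanning for brevity; a production HMI would use a proper JSON library, and the payload shape here is only an assumption):

```cpp
#include <cassert>
#include <string>

// Extract the "type" field from a display-card payload so the HMI can
// choose the matching template. Illustrative only: real payloads should
// be parsed with a JSON library.
std::string templateType(const std::string& payload) {
    const auto key = payload.find("\"type\"");
    if (key == std::string::npos) return "";
    const auto colon = payload.find(':', key);
    const auto start = payload.find('"', colon + 1);
    const auto end = payload.find('"', start + 1);
    if (start == std::string::npos || end == std::string::npos) return "";
    return payload.substr(start + 1, end - start - 1);
}
```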

Target Platforms

The Alexa Auto SDK is supported on the following platforms:

  • Android ARM 32-bit
  • Android ARM 64-bit (using 32-bit build)
  • Android x86
  • QNX ARM 64-bit

With the source and CMake files provided, the SDK can be ported to other platforms as well, such as Linux. The SDK depends on the Alexa Device SDK, which also needs to be ported to the target platform.

For Android, JNI wrappers are provided to allow execution of the Alexa Auto APIs from Java code.


Location Provider

One of the big differences between the Alexa Auto SDK and the Alexa Device SDK is the Location Engine. It allows the in-vehicle system to register a LocationProvider object that provides the latest vehicle position as read from the car’s GPS.

The Alexa Device SDK is designed for stationary devices and did not have such an API. The only way to set the device's location was manually, by the user, via the web portal or the smartphone application. With the new API, the vehicle can update its location regularly and receive much more accurate location-aware information, such as local search results, weather, etc.
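The shape of the integration is roughly as follows (hypothetical names sketching the pattern, not the SDK's exact signatures): the platform feeds GPS fixes into a provider object, and the engine pulls the latest position whenever a request needs geographic context.

```cpp
#include <cassert>

struct Location {
    double latitude = 0.0;
    double longitude = 0.0;
};

// Hypothetical platform interface: the engine calls getLocation()
// whenever an Alexa request should carry the vehicle's position.
class LocationProvider {
public:
    virtual ~LocationProvider() = default;
    virtual Location getLocation() = 0;
};

class VehicleLocationProvider : public LocationProvider {
public:
    // Called by the vehicle's GPS stack on every fix.
    void onGpsFix(double lat, double lon) { last_ = {lat, lon}; }
    Location getLocation() override { return last_; }
private:
    Location last_;
};
```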


Alexa Trigger - Hardware Buttons / Steering Wheel Controls

The Alexa Auto SDK allows the client application to initiate the Alexa voice recognition engine with a push-to-talk button. This is done via the SpeechRecognizer component. The client application can potentially distinguish between the on-board speech recognition and Alexa through different tap/hold patterns. For example, a single tap could start the on-board speech recognition, while press-and-hold might invoke Alexa.
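One way such a routing policy could look (the 500 ms threshold and all names are arbitrary assumptions for this sketch):

```cpp
#include <cassert>

enum class VoiceTarget { OnboardAsr, Alexa };

// Hypothetical routing policy for a single push-to-talk button:
// a short tap starts the on-board recognizer, press-and-hold invokes
// Alexa. The 500 ms threshold is an example value, not an SDK constant.
VoiceTarget routePttPress(int pressDurationMs) {
    constexpr int kHoldThresholdMs = 500;
    return pressDurationMs >= kHoldThresholdMs ? VoiceTarget::Alexa
                                               : VoiceTarget::OnboardAsr;
}
```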

In addition, the SDK provides a PlaybackController component that enables the client application to control Alexa music playback via play, pause, next and previous buttons. Modern vehicles typically have hardware buttons for media control on the steering wheel or on the head-unit system. This way, the user can control Alexa media playback using the same familiar interface.
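Mapping the existing hardware buttons onto those controls might look like this (the key codes and names are invented for the sketch):

```cpp
#include <cassert>

// Hypothetical mapping from raw steering-wheel key codes to the
// play/pause/next/previous controls that a PlaybackController-style
// component exposes. Key codes are made up for illustration.
enum class PlaybackButton { Play, Pause, Next, Previous, Unknown };

PlaybackButton mapSteeringWheelKey(int keyCode) {
    switch (keyCode) {
        case 0x21: return PlaybackButton::Play;
        case 0x22: return PlaybackButton::Pause;
        case 0x23: return PlaybackButton::Next;
        case 0x24: return PlaybackButton::Previous;
        default:   return PlaybackButton::Unknown;  // ignore unrelated keys
    }
}
```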


Wake Word Support

Today, most Amazon Alexa devices take advantage of wake word engine technology, where the user can simply say “Alexa” to trigger the voice recognition engine without a button press. A wake word engine is a software component that constantly listens to the microphone input, detects a special word such as “Alexa”, and then passes the audio stream to the SpeechRecognizer module.

The Alexa Auto SDK does not come with a wake word engine out of the box. According to the SDK documentation, this is something that needs to be requested from the Alexa Auto Solution Architect (SA) assigned to the project. The Alexa team will then provide a package that includes the wake word engine for the target platform.


Navigation Integration

The Alexa Auto SDK supports control of the in-vehicle navigation system via voice. The integration is done through the Navigation module.

The module provides callbacks for the various navigation events requested through the Alexa Voice Service. The platform integration code should route those events to the local navigation application.

Currently, the Navigation interface only supports commands to set a destination and to cancel an existing navigation session.
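A handler covering those two commands could be sketched as follows (hypothetical names; the real interface delivers its directives as JSON payloads):

```cpp
#include <cassert>
#include <string>

// Hypothetical navigation handler covering the two supported commands:
// set a destination and cancel the active navigation session.
class NavigationHandler {
public:
    // Directive from Alexa: start guidance to the given destination.
    void setDestination(const std::string& destination) {
        destination_ = destination;
        navigating_ = true;
    }
    // Directive from Alexa: cancel the current navigation session.
    void cancelNavigation() {
        destination_.clear();
        navigating_ = false;
    }
    bool isNavigating() const { return navigating_; }
    const std::string& destination() const { return destination_; }
private:
    std::string destination_;
    bool navigating_ = false;
};
```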


Phone Control

A typical IVI provides phone control functionality via the Bluetooth Hands-Free Profile (HFP). Through the Alexa Auto SDK, the user can control phone calls with Alexa. The SDK provides a PhoneCallController component, which needs to be integrated with the phone-calling application on the IVI.

The PhoneCallController receives directives from Alexa (e.g. start or end a call) and passes them to the phone application. It also relays the current phone call status back to Alexa.
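The flow can be sketched as a small state machine (names and states here are illustrative assumptions, not the SDK's actual API):

```cpp
#include <cassert>
#include <string>

enum class CallState { Idle, Dialing, Active };

// Hypothetical handler bridging Alexa call directives and the IVI's
// Bluetooth HFP phone application.
class PhoneCallHandler {
public:
    // Directive from Alexa: place a call to the given number.
    void dial(const std::string& number) {
        number_ = number;
        state_ = CallState::Dialing;
    }
    // Reported by the phone stack once the call connects; this status
    // would be relayed back to Alexa.
    void callActivated() { state_ = CallState::Active; }
    // Directive from Alexa (or the local HMI): end the call.
    void stop() {
        number_.clear();
        state_ = CallState::Idle;
    }
    CallState state() const { return state_; }
    const std::string& number() const { return number_; }
private:
    std::string number_;
    CallState state_ = CallState::Idle;
};
```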


Custom Skills

As discussed in a previous blog post, the in-vehicle experience can be further extended through Alexa custom skills. A parallel channel can send data from the vehicle to the custom skill to report status, and the skill can instruct the vehicle to perform actions in response to the user's voice request. This way, new vehicle services can be added that are not yet part of the Alexa Auto SDK. Although the Alexa team will likely add new features (built-in skills), some of them might not be available in time, or OEMs might want custom features to differentiate their solutions. The custom skills route is a fast way of adding custom functionality to the Alexa in-vehicle client.


As part of its plan to put Alexa everywhere, Amazon has made a big step in providing support to the automotive industry. Following the announcement of Alexa Automotive Core at CES, the first version of the SDK is finally available. It shows Amazon's vision of deeper integration with the car. For now, it includes basic controls such as navigation, phone calls and music playback, and there will likely be further releases soon with more integration interfaces. Voice control in the car makes a lot of sense, and with the new Alexa Auto SDK, Amazon is making its popular voice assistant useful in the car.

Topics: Connected Car - Technology, Alexa, Voice Recognition

Pavel Stankoulov

Pavel Stankoulov, CTO, leads Abalta's R&D efforts including the development of SmartLink and WebLink.
