How the Internet of Things is Shifting the Way We Think about Content

“Alexa, is it supposed to rain today?”

Alexa, Siri, Google Home, and other smart devices have shifted how people consume content. It's no longer enough to have your information available in a visual presentation on a web page or mobile app. Users want to interact with content: to ask questions, get information quickly, and ask follow-up questions.

The Internet of Things (IoT), while not a new concept, is still a relatively new industry. Brands are still learning how to get their content onto the latest devices, while consumers are ready to get their hands on whatever device comes out next. This often leaves marketing and IT teams scrambling to keep up with customer demands while also trying to ensure the quality of their digital experience stays intact.

The Shift: Content is Now a Conversation

What does shifting the way we think about content actually mean? When creating content for the IoT, it's important to remember that your users won't be reading your information off a screen and clicking on what they want to learn more about; they'll be asking questions, sometimes several in a row, to get the information they're looking for. They'll also only be able to digest a certain amount of information at a time, and your content needs to reflect this new type of interaction.

Your Content is Only as Good as the Platform

You can build out all the amazing content you want, but if you don’t have a platform that can deliver it to the IoT, your customers won’t be able to consume it. This is why many brands are turning to Content-as-a-Service (CaaS) as a way to meet the content needs of their customers without bogging down their IT departments with constant demands for device compatibility.

CaaS, also known as a headless CMS, separates your content from your presentation layer and uses APIs to connect it to various devices, future-proofing your content in the process. This means the same content can be pushed to Alexa, Siri, and other IoT devices, as well as to your website and mobile apps.
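The separation described above can be sketched in a few lines: one structured content record (the shape of what a headless CMS API might return; the record and field names here are invented for illustration, not dotCMS's actual API), with each channel supplying its own presentation layer.

```python
import json

# A single structured content record, as a headless-CMS API might return it.
# Location, meal, and field names are hypothetical examples.
MENU_RECORD = json.loads("""
{
  "location": "Servo Dining Hall",
  "meal": "lunch",
  "entrees": ["grilled chicken", "vegetable stir-fry"]
}
""")

def render_for_web(record):
    """Presentation layer for a browser: an HTML fragment."""
    items = "".join(f"<li>{e}</li>" for e in record["entrees"])
    return f"<h2>{record['location']}: {record['meal']}</h2><ul>{items}</ul>"

def render_for_voice(record):
    """Presentation layer for a smart speaker: a spoken sentence."""
    entrees = " and ".join(record["entrees"])
    return f"Today's {record['meal']} at {record['location']} is {entrees}."

print(render_for_voice(MENU_RECORD))
```

The content is authored once; only the thin rendering functions change per device, which is what makes new channels cheap to add.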

“Alexa, what’s for lunch?” | Building Alexa Skills for Gettysburg College

When students started to request the ability to ask Alexa about the campus dining menu, Rod Tosten, VP of Information Technology at Gettysburg College, and his team took the request to heart and started thinking about building out an IoT strategy for Gettysburg College, beginning with Amazon's Alexa.

As the most visited page on the website, Gettysburg campus dining seemed the most logical place to start. Tosten and his team created an Alexa Skill that lets students ask about campus dining: what's on the menu, hours of operation for each location, and more. Building on this first Skill, they went on to create Skills for both the campus phone directory and campus news.

Gettysburg College uses dotCMS as its content management system, feeding content into its Alexa Skills through various methods depending on the information needed. For example, the campus dining Skill makes a JSON request, receives a JSON object, and parses the response to pull out the content it needs. The college's news Skill, which covers sports news, mentions in the media, and more, reads RSS feeds from the CMS to report headlines, summaries, and more detailed story information. Though the Skills use different methods to pull information from the CMS, all content is stored in dotCMS, so it only has to be added once, in one location.
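The RSS approach used by the news Skill can be sketched as follows: parse a feed and build a spoken headline summary. The feed here is an inline sample with invented items; the real Skill would fetch the live feed from the CMS over HTTP.

```python
import xml.etree.ElementTree as ET

# An inline stand-in for an RSS feed served by the CMS.
# The items below are invented for illustration.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Gettysburg College News</title>
  <item><title>Bullets win season opener</title>
        <description>Sports recap.</description></item>
  <item><title>Professor featured in national media</title>
        <description>Media mention.</description></item>
</channel></rss>"""

def headlines_as_speech(rss_text, limit=5):
    """Turn the first few RSS item titles into one spoken sentence."""
    root = ET.fromstring(rss_text)
    titles = [item.findtext("title") for item in root.iter("item")][:limit]
    return "Here are the latest headlines: " + ". ".join(titles) + "."

print(headlines_as_speech(SAMPLE_RSS))
```

Capping the number of headlines matters more for voice than for the web: a listener can only hold a few items at a time, which echoes the earlier point about how much information users can digest in conversation.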

Tosten’s Considerations When Developing for Alexa

Developing for Alexa and other smart devices requires a shift in how you think about content and how it will be consumed. Tosten and his team noted four main considerations when designing and developing an Alexa Skill that both marketers and developers need to keep in mind.

  1. The content: What information are you trying to share with the user and where is it? How much processing does it take to form the content into speech? How should the content be formed to work easily with a web browser, speech, and mobile display? This is related to the CMS or information/web service.
  2. The conversation and flow: How will the user interact with Alexa and the content? This results in an intent and utterance (sentence structure) model for Alexa to follow, helping it know which words are important and what types of words might appear in each question. Think about what help or guidance the user will need to get the correct information.
  3. The speech and card presentation: What sentence format is desired? What language issues are there? For example, should 123 be pronounced as “one two three” or “one hundred twenty-three”? How should the content be displayed for the Alexa App card on the mobile phone?
  4. Implementation strategy: How will the considerations above be implemented and translated into a Speechlet that handles the user's requests?
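The pronunciation question in item 3 ("one two three" versus "one hundred twenty-three") comes up constantly in voice content. Alexa can handle some of this through SSML markup, but the distinction itself is easy to illustrate with a plain-Python sketch; the helper names below are invented for this example.

```python
DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]
TEENS = ["ten", "eleven", "twelve", "thirteen", "fourteen",
         "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty",
        "fifty", "sixty", "seventy", "eighty", "ninety"]

def as_digits(n):
    """Read a number digit by digit, e.g. a room number or phone extension."""
    return " ".join(DIGITS[int(d)] for d in str(n))

def as_cardinal(n):
    """Read a number under 1000 as a quantity."""
    if n < 10:
        return DIGITS[n]
    if n < 20:
        return TEENS[n - 10]
    if n < 100:
        tens, rest = divmod(n, 10)
        return TENS[tens] + ("-" + DIGITS[rest] if rest else "")
    hundreds, rest = divmod(n, 100)
    return DIGITS[hundreds] + " hundred" + (" " + as_cardinal(rest) if rest else "")

print(as_digits(123))    # spoken as an identifier
print(as_cardinal(123))  # spoken as a quantity
```

Which reading is correct depends entirely on the content: "room 123" wants the digit reading, while "123 students" wants the cardinal one, so the CMS content model needs enough structure for the Speechlet to tell them apart.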

Tosten on the Future of IoT

We’ve come a long way with smart devices and the platforms that support them, but looking to the future of IoT, Tosten feels it’s not hard to see that Artificial Intelligence (AI) will be a key part of what comes next. The future will require smart devices to be even smarter: they’ll need not only to retrieve content and hold a back-and-forth conversation with the user, but also to learn who the user is and what their habits are in order to make the experience more seamless. AI integrated into IoT devices like Alexa will better anticipate the user’s needs when they seek information and converse with the user on a more personal level.

For more on what the future of IoT holds as well as an in-depth look at how Gettysburg College is using Alexa Skills, check out these videos.