What it is: Siri is Apple’s natural language interface.
In the old days, people gave commands to a computer by typing. These were known as command-line interfaces, and the most popular example was MS-DOS. The problem with command-line interfaces was that if you misspelled a command, the computer wouldn't recognize it, which made them frustrating and slow to use.
To solve these problems, the next step up was the graphical user interface, which displayed commands and icons on the screen so you just needed to point and click on what you wanted to do. You needed a mouse to do this, but graphical user interfaces were generally faster and easier to use than command-line interfaces. The most popular graphical user interface appeared on the Macintosh and later on PCs as Microsoft Windows.
Like command-line interfaces, graphical user interfaces have limitations as well, namely requiring the use of a pointing device such as a mouse, trackball, or trackpad. This means you move a device away from the screen (such as a mouse on a desk) in order to manipulate objects on the screen.
This effectively separates your physical movements from the actual items on the screen. While this feels unnatural at first, it's easy to get used to, to the point where moving your hand away from the screen seems normal. This is why so many people wrongly insist that the iPad can't be useful unless you can use it with a mouse. That's like saying graphical user interfaces can't be useful unless people are forced to type in commands.
Apple helped usher in the next generation of user interfaces known as touch screens. Instead of moving an object on a desk to move an object on the screen, touch screen interfaces let you manipulate items directly on the screen. Sometimes this can be handy and sometimes this can be clumsy, but touch screens work best when it's not practical to lug around a separate pointing device, such as on smartphones and tablets.
One problem with graphical user interfaces and touch screens is that you need to see the screen to use them. While it's perfectly possible for blind people to use an iPhone or iPad, most people prefer using their eyes to see the screen. This limitation means you can't easily use a smartphone or tablet without staring at it, which means you can't use your eyes for anything else, such as driving.
That’s the beauty of natural language interfaces. Instead of looking at the screen, you can just talk to it. Talking is far faster and easier than typing commands or clicking on icons, plus it has the added advantage of letting you keep your eyes focused on anything but the computer.
So Apple’s next macOS version will include Siri on a Macintosh, which might seem a bit contradictory. After all, why talk to your computer when you can easily type or use a mouse to control it instead?
Most likely, Apple’s goal isn’t to put Siri on the Macintosh and stop there. It’s likely Apple wants data on how people interact with computers so it can improve the natural language interface. Already the next generation of Siri will understand the context of previous queries so you can speak to Siri more naturally. This will prove immensely useful for CarPlay, Apple’s in-dash entertainment system.
When driving, you cannot take your eyes off the road for even a second without putting yourself (and others) in potential danger. Driving requires keeping your eyes on the road, but touch screen interfaces require you to look at them. So the eventual goal of Siri is to let people control devices completely through voice commands.
This will let you give simple commands to CarPlay such as turning on the radio or adjusting the heat. However, you can also get driving directions or ask Siri to help you find specific types of restaurants. When you think about how people use computers, they tend to use them in one of two ways:
- To create and modify information
- To access information
Creating text information is easy with a keyboard, and creating graphical information is easy with a mouse or touch screen. Accessing information is much clumsier with a keyboard, mouse, or touch screen because you have to tell the computer what you want to find and then repeatedly sift through the retrieved data until you find what you want.
Siri’s ultimate goal is to make accessing information much faster. Just talk and Siri gives you the answer. If it’s not what you want, keep talking until Siri gives you the information you want.
Siri likely won’t be perfect at first but will get successively better over time. Eventually, Siri could be used to help people create text information as well by letting you dictate to the computer and have it type your words.
For people who dislike typing, such a natural language interface would be far superior to using a keyboard. Even for people comfortable with typing, dictating can be an option when you can’t use a keyboard, such as dictating ideas while jogging.
One drawback with Siri is that you need to speak out loud, which limits where you can use it. You might ask Siri questions in public, but you won’t ask Siri anything in the middle of a movie or opera, where your voice would disturb others.
Siri won’t replace the keyboard, mouse, or touch screen completely, but will complement them. In some cases, such as CarPlay, Siri can replace traditional input devices altogether.
Siri is simply another, faster way to interact with a computer. Siri and its competitors will keep getting better over time until people wonder how anyone ever used a computer without talking to it like a person.