The second wave of 802.11ac is coming ashore and the new MU-MIMO technology (Multi-User, Multiple Input Multiple Output) is going to make a splash. It’s one of the biggest improvements to Wi-Fi we’ve seen to date, with the potential to greatly increase wireless network throughput and make a huge difference in dense, high capacity networks.
We saw MU-MIMO technology in action at a recent Qualcomm event. Instead of increasing the speed of just one Wi-Fi client, MU-MIMO improves the entire network, even delivering better results for unsupported devices.
Previous wireless standards and technologies have greatly increased data rates, but until now the increase only applied to one user at a time. For instance, SU-MIMO (Single-User MIMO) with 802.11n allows up to four streams of data to be simultaneously sent and received between a single user and the access point.
However, MU-MIMO with 802.11ac allows access points to simultaneously send one or more streams to multiple users, which has a greater impact across the entire network.
This graphic shows how SU-MIMO can communicate with clients only individually, whereas MU-MIMO allows simultaneous communication with multiple clients.
This graphic depicts how MU-MIMO can send three times the amount of data compared to SU-MIMO in the same amount of time, more than doubling the data rate of each device.
Visualizing how MIMO works
Imagine waiting in line to enter an event or arena that has four different entrance doors. The waiting line would resemble an access point, the people would resemble the data, and the doors resemble the receivers, the Wi-Fi clients.
Without MIMO, a random number of people (data) would be allowed to enter one of the doors (Wi-Fi devices) at a time. That door would close and the next group would enter through a different door (Wi-Fi device). This isn’t the best approach as only one door (Wi-Fi device) is open at once, slowing down how quickly the people (data) in the waiting line (access point) enter.
With MIMO, there are four big waiting lines (four data streams) leading up to the entrance of the event, again with four different doors or gates. Each waiting line resembles a data stream and the group of lines altogether resemble the access point. Again, the four doors represent the receivers of the data, the Wi-Fi clients.
If you’re running SU-MIMO, a random number of people (data) from each of the four waiting lines (data streams) enter through just one of the doors (Wi-Fi clients), which remains open all the time. This increases the speed at which each waiting line enters the event; however, it still doesn’t make use of all four doors.
With MU-MIMO, people (data) from each waiting line (data streams) simultaneously enter through all the doors. Everyone enters faster because each line can enter through a different door.
Remember, right now MU-MIMO only works for the downlink connection: for example, from the access point to your phone, laptop, and other Wi-Fi devices. Thus devices will still have to contend with each other when transmitting to the access point. This would be like allowing people (data) from all waiting lines (data streams) to enter simultaneously into all the doors (Wi-Fi devices) but alternate which doors are used when exiting (sending back to the access point).
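Setting the analogy aside, the airtime gain can be shown with a toy model. The sketch below uses illustrative numbers, not real 802.11ac rates: a four-stream access point owes one frame to each of four single-antenna clients, served one at a time (SU-MIMO) versus all at once on the downlink (MU-MIMO).

```python
# Toy model of downlink airtime: a 4-stream access point owes one frame to
# each of four single-antenna clients. Rates and sizes are illustrative,
# not real 802.11ac numbers.

STREAM_RATE_MBPS = 100   # assumed throughput of one spatial stream
FRAME_MBITS = 10         # data owed to each client
CLIENTS = 4

# SU-MIMO: clients are served one at a time, and a single-antenna client
# can only use one stream, so the AP's other streams sit idle.
su_time = CLIENTS * (FRAME_MBITS / STREAM_RATE_MBPS)

# MU-MIMO: the AP beamforms one stream to each client simultaneously.
mu_time = FRAME_MBITS / STREAM_RATE_MBPS

print(f"SU-MIMO airtime: {su_time:.2f} s")
print(f"MU-MIMO airtime: {mu_time:.2f} s")
print(f"Airtime freed for other traffic: {su_time - mu_time:.2f} s")
```

Under these assumptions the same traffic takes a quarter of the airtime, which is exactly the headroom the article credits for dense networks.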
Helps with user density and capacity
Wi-Fi has always suffered from density and capacity issues, especially in the small and crowded 2.4GHz band. Using 802.11n or 802.11ac in the 5GHz band helps by providing many more channels and faster data rates. However, MU-MIMO helps even more as multiple devices can be served simultaneously. This leads to increased throughput, frees up more airtime, and allows access points to serve larger crowds of devices.
It’s important to note that MU-MIMO can increase throughput as described without requiring channel bonding, although it can be combined with any of the channel widths. Back with 802.11n, two 20MHz channels could be bonded, regardless of whether SU-MIMO was used, enabling more data to be transferred at once. These 40MHz channels could be acceptable in the 5GHz band, where there’s more frequency space; however, bonding is pretty much out of the question in the small and crowded 2.4GHz band. Then with Wave 1 of 802.11ac we gained the ability to use 80MHz channels in 5GHz, again with or without SU-MIMO. Now with Wave 2 that number doubles again, giving us up to 160MHz-wide channels that can be used with SU-MIMO, MU-MIMO, or neither.
You might not want to use 160MHz channels, since doing so greatly reduces the number of channels available in the 5GHz band, but you might consider 40MHz or 80MHz channels to help increase throughput rates even more.
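The scaling from channel width falls out of the OFDM math: wider channels carry more data subcarriers. A hedged sketch of the per-stream 802.11ac PHY rate calculation, assuming a short guard interval and 256-QAM at rate 5/6 (VHT MCS9; note MCS9 isn’t actually valid at 20MHz with one spatial stream, so treat that row as hypothetical):

```python
# Approximate 802.11ac PHY rate per spatial stream:
#   rate = data_subcarriers * bits_per_subcarrier * coding_rate / symbol_time
# Assumes 256-QAM (8 bits per subcarrier), rate-5/6 coding (VHT MCS9) and a
# short guard interval (3.6 us OFDM symbol).
DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}
BITS_PER_SUBCARRIER = 8      # 256-QAM
CODING_RATE = 5 / 6          # MCS9
SYMBOL_TIME_US = 3.6         # short guard interval

def phy_rate_mbps(width_mhz, streams=1):
    per_stream = (DATA_SUBCARRIERS[width_mhz] * BITS_PER_SUBCARRIER
                  * CODING_RATE / SYMBOL_TIME_US)
    return per_stream * streams

for width in (20, 40, 80, 160):
    print(f"{width:>3} MHz, 1 stream: {phy_rate_mbps(width):6.1f} Mbps")
# 40 MHz -> 200.0, 80 MHz -> 433.3, 160 MHz -> 866.7: the familiar spec
# numbers, each roughly doubling as the channel width doubles.
```

Subcarriers slightly more than double at each step (52 to 108 to 234 to 468), which is why each bonding step slightly more than doubles the rate.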
Doesn’t require advanced client device
SU-MIMO required both end-user devices and access points to support the technology and contain multiple antennas. Furthermore, for the client to receive the multiple concurrent streams it had to perform signal processing. The more antennas and streams a device supports, the more power, size and cost it requires, which is why many end-user devices are still single stream. This isn’t a problem with MU-MIMO, as the client isn’t the one performing the signal processing; the burden falls on the access point.
Although MU-MIMO still requires end-user devices to support the technology in addition to the access point, a client can have just one antenna and still be served its single stream simultaneously with other devices.
You actually see the biggest difference with MU-MIMO when clients support fewer data streams than the access point. For instance, when serving a single four-stream client, a four-stream MU-MIMO access point sends data at the same rate a four-stream SU-MIMO access point would; MU-MIMO doesn’t directly help in this situation, and the access point can’t serve other clients at the same time. With single- or dual-stream clients, however, the access point can group several of them together and serve them simultaneously.
Not requiring multi-antenna clients also helps the adoption of MU-MIMO on public Wi-Fi hotspots. SU-MIMO never became as common on access points and hotspot gateways as MU-MIMO is likely to be, because the eased client requirements mean more devices will support the newer technology. Thus we can expect better-performing public Wi-Fi networks as more devices adopt it.
Older clients can see higher data rates
Although MU-MIMO requires support from both the access point and end-user devices, older or simpler clients that lack support still indirectly benefit from the technology, similar to how it helps on dense, high-capacity networks. Again, when supported devices are served simultaneously, there’s more free airtime for other devices to be served. This applies whether those other devices are multi-antenna or single-antenna. Generally, the quicker devices are served, the higher the data rates you’ll see. This is why unsupported devices can still see increased throughput.
Provides an indirect security benefit
The way the data is encoded when sent from an access point to a device prevents other devices, even those connected to the same access point, from reading the packet’s actual contents, including any sensitive data. Any eavesdroppers capturing MU-MIMO transmissions will see only limited identification details, such as the MU Group, the modulation used, and the client MAC address. Remember, MU-MIMO only works on the downlink: eavesdroppers can certainly still see unencrypted packets flowing from MU-MIMO devices to the access point. Still, any security improvement is welcome.
It’s coming soon
We’re already starting to see the first MU-MIMO devices shipping, such as the Linksys EA8500 router and Acer Aspire E-series laptops. Through the rest of the year, we should see more products supporting the technology as well, such as business-class access points and smartphones. Qualcomm, one of the largest wireless chipset manufacturers, says it actually began including the technology in mobile devices in 2013; those devices now need just a software update to activate it.
The telecommunications industry is looking for new frequencies in which to operate a new generation of mobile networks
If operators are to build 5G mobile networks with download speeds at 10Gbps and above, they are going to need a lot more spectrum, but getting it won’t be easy.
The amount of spectrum allocated to 5G will determine how fast networks based on the technology will eventually become. Until recently, only frequencies below 6GHz have been considered for mobile networks, mostly because they are good for covering large areas. But there’s a growing need to unlock new spectrum bands in the 6GHz to 100GHz range, too, attendees at the LTE and 5G World Summit conferences in Amsterdam heard this week.
The use of spectrum in these bands is immensely important for 5G networks to be able to offer multiple gigabits per second, Robert DiFazio, chief engineer at wireless R&D company InterDigital Communications, said. By raising communication speeds, they are also expected to help lower latency in mobile networks.
Even though spectrum from 6GHz to 100GHz won’t be used in cellular access networks for at least another five years, vendors are keen to show they can handle all the technical challenges those frequencies present. The development of WiGig, which uses the 60GHz band, has already shown that using such high frequencies works, and on the show floor in Amsterdam, Huawei Technologies and Samsung Electronics both talked up pilot studies of other technologies they have conducted.
For the potential of spectrum above 6GHz to be realized, a new generation of antennas that are capable of directing multiple beams of data to different users at the same time will be needed. New systems will likely also need new modulation schemes to encode the data on the radio waves more efficiently.
There are ways for mobile networks to increase download speeds using existing spectrum, including using carrier aggregation or sharing spectrum with Wi-Fi networks. But at the end of the day, none of these options come close to the potential that as-yet-unused frequency bands above 6GHz offer. There is nowhere else to go but up, according to Samsung.
Rolling out networks isn’t just about hardware and software. Regulators also have their say.
“We have made clear our intention to make large quantities of spectrum available in these frequencies, which is increasingly also the view of other regulators around the world,” said Andrew Hudson, director of spectrum policy at British regulator Ofcom, who spoke on the subject on Thursday in Amsterdam.
The current focus of Ofcom’s work isn’t whether to make spectrum available, but how to identify the best spectrum in this range. This involves finding bands with a combination of good physical characteristics and good prospects for international harmonization, while taking into account current use, according to Hudson.
A final decision on what, if any, bands will be allocated isn’t expected until 2019.
After technical and regulatory challenges have been overcome, the networks also have to be rolled out. If extreme speeds are the upside of frequencies over 6GHz, poor coverage is the downside. These high frequencies don’t have good reach and aren’t much use at penetrating walls. To get around these weaknesses, mobile operators will have to install lots of smaller base stations — but finding enough places to put even the current generation of small-cell base stations has already proved difficult.
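The reach problem falls directly out of the free-space path-loss formula, which grows with the square of frequency. A quick sketch comparing loss over 100 metres at a traditional band and two millimetre-wave candidates (frequencies chosen for illustration):

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Compare loss over 100 m at a traditional band and two mmWave candidates.
for label, f in [("2.4 GHz", 2.4e9), ("28 GHz", 28e9), ("60 GHz", 60e9)]:
    print(f"{label}: {fspl_db(100, f):.1f} dB over 100 m")

# Every 10x in frequency adds 20 dB of free-space loss (a 100x power hit),
# before walls and rain are even considered: hence small, dense cells.
```

At 100 metres the 60GHz signal is roughly 28dB worse off than a 2.4GHz one in free space alone, which is why high-band deployments lean on beamforming antennas and dense small cells.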
So taking full advantage of spectrum bands above 6GHz won’t be easy, but if equipment and device vendors want 5G to become something more than an incremental upgrade over the LTE networks that exist in 2020, all technical and political challenges have to be overcome.
The first commercial networks using 5G technologies are expected to go live in 2020 but will initially use spectrum below 6GHz because the infrastructure is already out there for those bands, according to DiFazio. Networks using the new frequency bands will only arrive later.
Another Ubuntu phone, another unusual launch. After the BQ Aquaris E4.5, which debuted with a series of online flash sales, Canonical is following up with an invite-only handset built by Meizu. Yep, the same Meizu that once hoped to release an Ubuntu phone in 2014. The new MX4 “Ubuntu Edition” has been available to developers in China since May, but starting tomorrow you’ll be able to order one in Europe too. At least, you will if you’re lucky enough to receive an invite. Canonical and Meizu aren’t revealing how many will be available each day, so you’ll just have to visit their teaser site, complete the “origami wall” and hope for the best. The company is also staying tight-lipped about whether the invite system will eventually be dropped and if the MX4 will later be sold in other markets.
Just like the Aquaris E4.5 and E5, the €299 ($345) MX4 is a modified version of an existing Android handset. It boasts a sharp 5.36-inch display, an octa-core MediaTek 6595 processor, 2GB of RAM and a 3,100mAh battery. For photo-fiends there’s also a 20.7-megapixel rear-facing camera and a 5-megapixel selfie snapper. On paper it’s a competent mid-range handset, but there’s little here to grab the attention of power users.
At MWC we were a little underwhelmed by the device, especially in comparison to the ambitious Ubuntu Edge. Canonical has been slow to develop its software and what was once an intriguing platform is now up against Android Lollipop and iOS 8 — not to mention their fast-approaching successors. Some of the ideas around Scopes — categorised home screens that aggregate content from multiple sources — feel fresh and unique, but it’s hard to see how they’ll appeal to anyone beyond the hardcore Ubuntu crowd. Canonical seems to have accepted this, as it’s calling tomorrow’s launch a “journey” rather than a “day one volume play.” Maybe the company is wise to keep its expectations in check, but after two and a half years we had hoped the platform’s launch would pack a little extra punch.
Wireless charging is handy, but slow. To help change that fact, the Wireless Power Consortium (WPC) has announced the latest Qi specifications, allowing wireless charging pads to deliver more power to your handset.
In the announcement, WPC says “several manufacturers already offer wired fast charging for their devices, providing as much as 60 percent charge in as little as 30 minutes. The latest Qi specification empowers them to extend this speed to wireless charging as well.”
This new standard also approves new test procedures and tools to verify fast wireless charging, and confirms the specification is backward compatible with existing chargers.
When iOS 9 makes its public debut this fall, Apple will allow developers to release apps designed exclusively for 64-bit iOS devices. This means we’ll begin to see titles that don’t support older iPhones, iPads, and iPod touches released before 2013.
Developers are already building apps and games that only support certain iOS devices; many high-end titles just don’t run well on older hardware, and so blocking those devices prevents users from purchasing and installing software they cannot use.
But for the first time with iOS 9, developers can choose to exclude devices with 32-bit processors. That’s anything released prior to the iPhone 5s, which was the first device to feature the A7, Apple’s first 64-bit mobile processor.
That means all iPod touches and any iPad released prior to the first iPad Air could be blocked from installing certain apps. According to 9to5Mac, incompatible titles simply won’t appear when you browse the App Store.
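For developers, the switch is a build-configuration matter: listing `arm64` under `UIRequiredDeviceCapabilities` in an app’s Info.plist tells the App Store the app needs a 64-bit processor, so 32-bit devices never see it. A minimal fragment, offered as a sketch (check Apple’s documentation for the exact requirements):

```xml
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>arm64</string>
</array>
```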
Many developers are now building apps that support both 32-bit and 64-bit processors, but the latter are much more powerful, and therefore capable of running more sophisticated software — such as console-quality games with high-quality graphics.
It’ll likely be some time before developers start blocking 32-bit devices, but it will happen eventually as those devices get older and older, so it may be time to start thinking about an upgrade if you’re still rocking an old iPhone, iPad, or iPod touch.
With 4G LTE connectivity, stable Wi-Fi, and all the benefits of fast and accessible data pipes, mobile developers have the luxury of always-on and always-available data. That’s great for the North American and Western European app markets, but as new territories become more immersed in smartphone culture and new opportunities open up, those opportunities will come with unique challenges. One of those will be bandwidth and connectivity.
Ericsson’s Mobility Report for June 2015 (link to PDF) projects that in 2020, there will be 3.7 billion LTE subscriptions, taking second place to 3.8 billion WCDMA/GSM subscriptions. While LTE subscriptions will continue to grow for the next five years (likely in line with contract renewals in the US and EU) the push into new markets, notably in the BRIC region, will be a mix of LTE and WCDMA/GSM handsets.
Price will be a key consideration in the continued rise of WCDMA/GSM numbers. The initial hardware cost is expensive, and carriers in these markets can’t offer subsidies as generous as the likes of AT&T and Verizon can. That means there is a clear differential in handset price. The monthly line rentals will also be cheaper when 4G LTE is not added to the bundle. 4G LTE also requires infrastructure to be present: if there’s no 4G coverage, there’s no need to buy the 4G data plan.
There’s no quick fix to this, and as new markets come online and are opened up to third-party developers, it will become apparent that throwing buckets of data at a problem is not a suitable solution. That means the transfer of data from an application to a server should be limited, synchronisation should minimise data used, and any graphical resources should be stored locally.
Apps should also be designed to assume that data is not always present, and there should be as graceful a fallback of functionality as possible. And if your revenue is built around rich in-app advertising, you might need to rethink your plans because streaming video down to a handset over a 3G connection (which likely has a low monthly data cap) will not endear you to your customers. Developers should always be ready to optimize for the customer base – that could easily mean optimize for 3G connectivity and no more in the future.
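Those guidelines (limit transfers, sync deltas, cache locally, fall back gracefully) can be sketched as a small client-side sync layer. Everything below is hypothetical: the `fetch_changes_since` callback and the cache layout are invented for illustration.

```python
import json
import os
import time

CACHE_DIR = "cache"  # hypothetical local store for synced records

def load_cache():
    path = os.path.join(CACHE_DIR, "records.json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"last_sync": 0, "records": {}}

def save_cache(cache):
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(os.path.join(CACHE_DIR, "records.json"), "w") as f:
        json.dump(cache, f)

def sync(fetch_changes_since, online=True):
    """Pull only records changed since the last sync; when offline, fall
    back to the local cache instead of failing."""
    cache = load_cache()
    if not online:
        return cache["records"]  # stale but usable
    # Ask the server for a delta, not a full dump (endpoint is invented).
    changes = fetch_changes_since(cache["last_sync"])
    cache["records"].update(changes)
    cache["last_sync"] = int(time.time())
    save_cache(cache)
    return cache["records"]
```

The key properties: the app never re-downloads what it already has, and losing connectivity degrades to stale data rather than a broken screen.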
Any development process is about making the best use of the resources available, and mobile app development has had to deal with resource limitations for many years. The recent rise of 4G has meant that data and bandwidth have fallen down the list of considerations, but the explosive growth in smartphones over the next five years is about to propel them back up the table.
Welcome (back) to the mobile world where bandwidth is the limiting factor.
Last year, Google introduced an elegant, sophisticated operating system with an all-new design that was compatible with everything from phones to watches to cars to TVs. This year, with Android M, it’s refining it.
The year-over-year cycle of innovation followed by refinement isn’t new to anyone familiar with tech, but increasingly it’s tough to say just what Google’s next refinements ought to be. There are the usual checkboxes: improve battery life, clean up some settings, buff out the rough spots. After that, you expect a headline, a brand new service or feature — like, say, an improved mobile payment system. Small or large, we’ve come to expect these annual upgrades.
This year’s M update appears small, but it’s actually fairly large. It comes down to answering this question: how do you make a smartphone do more without making it more confusing?
Google’s answer is to make it smarter.
The biggest and most important development in Android M is the introduction of “Now on Tap.” It’s an evolution of Google Now, which extended Google Search into a service that automatically guessed what information you wanted to know. Swipe up on the home button, and you see a series of (hopefully) relevant information cards. Now, Android M is bringing that experience to every single app on your phone.
In the three years since Now was first introduced, Google has extended the data sources it pulls from. As a result, Now has gotten smarter, using contexts like your location, calendar, inbox, recent searches, and other “ambient” data to better guess what you want to know.
Android M offers Now another data source: hold down the home button in any app and Android will read the screen and use the information to create relevant Now cards. Aparna Chennapragada, director of product engineering at Google, walked me through the process. “Think of this as smart copy and paste,” she said. Instead of copying the info you want, opening another app, and pasting it in, just hit the home button and trust Now to do the rest.
For Sundar Pichai, extending Google Now into apps is simply part of Google’s “core mission statement, which is to organize users’ information.” He points out that mobile requires faster answers with less work, so Google wants to leverage its ability to use machine learning to understand context and apply it everywhere. “When we think about organizing the world’s information in the context of mobile,” he says, “people are trying to get stuff done and … want it to be easier. So we need to go a step further and be assistive where we can.” Now on Tap is precisely that assistance.
The demos Chennapragada showed me were compelling. In a WhatsApp chat that mentioned picking up the dry cleaning and going to a restaurant, holding down the home button popped up cards that let you automatically add a reminder for the laundry and an info card for the restaurant, complete with reviews and a map.
It works with essentially any app that displays text on screen (and, in the future, should also be able to recognize landmarks and images). You can also initiate the program by either holding down the home button or by saying “Ok Google,” and asking contextually aware questions about what you’re currently doing. Say you’re listening to music — you can ask Google to look up who the lead singer of the band is. And it works. It’s really impressive.
Using my search history, Android location history, Gmail account, calendar, and god knows what else, Google already knows an unimaginable amount of information about me. Now it can learn about the stuff I’m doing inside the non-Google apps I’m using on my phone. I don’t mind entering text into a search box about a restaurant, but that is a discrete disclosure with limited info. Google probably doesn’t know who I’m talking to about said restaurant — but with Now on Tap, it does.
There are at least a few constraints in place. First of all, it’s an opt-in service, just like Google Now. Secondly, Chennapragada says it only searches for information when you ask for it; it’s not constantly scanning what you’re doing. Last but certainly not least, she says that “we don’t store the data. We discard the data.”
Google is most at home on the web — that’s where the company got its start, and where it still functions best. But, it’s impossible to ignore that we’re increasingly living our online lives inside apps. Is Now on Tap a blatant effort to help Google fill a large and growing digital data blind spot — namely all the information locked inside apps? “Look, there’s a huge wealth of information in apps and it’s not just the size of information, it’s actually different kinds of information,” Chennapragada says. But she cites a more user-centric motivation for the feature, brushing aside those larger strategic concerns: “The way we’ve been actually thinking about it is: how do we understand apps so that we can actually make them accessible to users?”
Among the things that Now on Tap can surface are direct, deep links into apps instead of web pages or Google services. If Google thinks it sees a restaurant name, for example, it will provide icons for apps like Yelp or OpenTable, so you can jump right to making a reservation. Chennapragada says that enough app developers have made their data searchable to add up to 30 million links in Google’s index.
Now on Tap is based on an Android-platform level service called the “Assist API,” which means that in theory, any app developer can create a service that makes use of the data displayed on the screen. It’s Android that reads the screen, not Google, though for most users the data will obviously go to the search giant. But the fact that Google chose to keep its own search service abstracted one level away from the core OS is a sure sign that it’s thinking about China and making accommodations for manufacturers to develop similar services over there. On the topic of China, Pichai didn’t indicate any major strategy shift in the offing, but hinted that Google is still thinking about it: “We would love to serve Chinese users with Google services as well, obviously. I think it will be a privilege to do that, but we need to be thoughtful in how we do it. We are open to newer approaches. We’ll have to wait and see.”
But for the rest of the world, Now on Tap will be a very “Googley” product, taking full advantage of Google’s cloud computing services like the Knowledge Graph, making increasingly good guesses about what you want to know, and probably encouraging more app makers to make the data inside their ecosystems available for search indexing. It’s an ambitious mission — the sort of thing that only Google would be able to pull off, and maybe only Google would even try in the first place.
Looking for information in the desktop era, Chennapragada says, was defined by the search box. But in the mobile age, that small, white text box appears increasingly archaic. If Now on Tap is a sign of things to come, the conceptual differences between your phone and the information it accesses will gradually erode. Your phone isn’t just a thing that can access the internet, it’s increasingly becoming a thing that is a part of the internet.
Hiroshi Lockheimer, VP of engineering for Android, puts it in simpler terms: “What we’re focused on with M is really the core user experience and improving that.” (“M,” by the way, is how everyone refers to the next version of Android — nobody there will cop to knowing what dessert it will be named after.)
When Lockheimer talks about the “core” user experience, he’s clearly talking about the kinds of refinements we’ve come to expect. “We’re really going to start harvesting all the effort we put in [to Android Lollipop],” he says.
A prime example is Google’s approach to app permissions. Until now, if you wanted to install an app from the Google Play Store, you had to accept a giant stack of often arcane and scary-sounding things before you could even download the app. With M, a developer will be able to ask you if you want to grant it access to specific features like, say, the camera. It’s an approach similar to Apple’s iOS, and it’s the sort of thing Google should have adopted long ago. “We think it’s important that app developers are able to ask for permissions to do things in context,” says Lockheimer.
Google’s developer preview indicates that users can get really granular on those permissions — turning off individual functions in individual apps, and monitoring which apps have access to any given system function.
There are lots of other small tweaks like that throughout Android M: the app drawer has big letters to aid navigation; recently accessed apps float to the top of the drawer; you can properly silence your phone again; and cut-and-paste has once again been tweaked. My favorite small refinement is in the share menu. Now, when you tap the share button you’ll see contacts up top: so if you happen to use WhatsApp to send links to a particular person, you can do that directly instead of choosing WhatsApp first and then hunting for that person.
Both Now on Tap and the share menu tweak share a common theme: Android is continuously guessing what you want, and giving it to you without asking. Increasingly, Google is growing more confident in its ability to guess correctly. That confidence is expressed in another feature in Android M, called “Doze.”
Doze is a new kind of aggressive battery management algorithm — new for Android, anyway. Android will take a look at a variety of signals that indicate whether or not you’re actively using your device. If it detects your tablet sitting unused on the coffee table all day, it will turn off certain power-hungry apps and even deny them networking abilities. If Google gets Doze right, users will never know it’s there. “They shouldn’t even have to think about that,” Lockheimer says, “it should just work.”
Apple has long taken a similar approach inside iOS. Apps can run in the background on the iPhone, but only within strictly defined policies from the OS that limit their capabilities and their access to data. Google is coming to a similar solution but from an entirely different direction. Instead of setting those policies explicitly, it will algorithmically and automatically apply them depending on how “fresh” it thinks you need your data to be.
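A crude version of that freshness heuristic can be sketched as follows. This is a toy model of the idea, not Google’s actual algorithm; the signals and thresholds are invented.

```python
import time

IDLE_THRESHOLD_S = 60 * 60  # invented threshold: an hour of stillness = dozing

class DozePolicy:
    """Toy model of Doze-style deferral: when the device looks idle,
    background work loses network access and gets queued for later."""

    def __init__(self):
        self.last_activity = time.time()
        self.deferred_jobs = []

    def on_user_activity(self):
        """The user picked up the device: run everything held back."""
        self.last_activity = time.time()
        jobs, self.deferred_jobs = self.deferred_jobs, []
        return jobs

    def is_dozing(self, now=None):
        now = time.time() if now is None else now
        return now - self.last_activity > IDLE_THRESHOLD_S

    def schedule(self, job, now=None):
        if self.is_dozing(now):
            self.deferred_jobs.append(job)  # batch until the next wake-up
            return "deferred"
        return "run-now"
```

The real system weighs many more signals (screen state, motion sensors, charging), but the shape is the same: infer idleness, then batch and defer work so the radio and CPU can stay asleep.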
Google refused to estimate how much Doze will improve battery life, but Lockheimer says that the company has “internal targets” that are “pretty audacious.”
The last big Android feature that Google is trotting out for I/O is Android Pay. It’s actually not an Android M feature; instead it’s available via the Google Play store to any Android phone with 4.4 KitKat or higher and an NFC chip. Product Manager Pali Bhat says that “seven of 10 [Android] phones in the United States are now ready for Android Pay.”
The history of mobile payments — especially on Android — is almost unimaginably complicated. It involved different technologies, competing corporate interests, product launches, blocked apps, broken partnerships, and generalized skullduggery. It’s the story of Google Wallet, which was announced four years ago and has utterly failed to gain traction through management changes and product pivots, having been rejected by cell phone carriers who served as gatekeepers for what software could be preinstalled on the phones that used their networks.
By comparison, the story of Android Pay is remarkably simple: Google bought a company called Softcard, made nice with the carriers, and rebranded. People understand “Apple Pay” to mean “paying with your phone,” so using “Android Pay” is an easy analog.
Android Pay isn’t a one-for-one replacement for Google Wallet, which in addition to mobile payments also handles things like peer-to-peer payments and online checkout. But Android Pay is a catch-all for the Android platform pieces necessary to support it, including host card emulation, tokenization, and a bunch of technologies Google calls “Safety Net” that monitor the device to see if it’s been compromised. Google is also adding support for fingerprint readers to Android, so manufacturers like Samsung or HTC don’t have to do it themselves.
“We absolutely don’t sell that data and we have no plans to use [it] for advertising or anything like that,” says Lockheimer. Bhat notes that there are cases in which Google will collect transaction data — but the company will limit that data use to displaying recent transactions. A Google spokesperson told me that it won’t discuss the details of who gets a cut of the transaction fees that are usually paid to banks and credit card providers. (Apple reportedly gets as much as 0.15 percent.)
In terms of its basic operation, Android Pay has few surprises. You’ll need to set up a lock on your phone; when your phone is unlocked, you just tap and pay. Bhat tells me that Android Pay will automatically figure out how often to ask you to re-enter your passcode if the phone has been unlocked for an extended period of time. Softcard technology gives Google the ability to transmit both loyalty cards and payment with a single tap, and tokenization means that your actual credit card number is never shared with the merchant, just as with Apple Pay. It will also work with some in-app purchasing.
But the problem with Google Wallet was never how it worked — it was where it worked. It was never on enough phones or supported in enough stores to make an impact. With roughly 700,000 vendors already signed up to accept Android Pay, adoption this time around should be less of a hurdle.
In the weeks leading up to I/O, Google has framed its Android M innovations in general terms: “improving the core customer experience” and “really focusing on product excellence.”
But the real story here is far more concrete than the company is letting on: Google wants to make your phone smarter in order to give it a better shot at doing just what you want it to do. That extends to battery life when it shuts down apps you’re not using. It extends to Pay when it uses “a thousand-plus signals” to know whether a transaction is legitimate or not. It even extends to the Share menu when it saves you three or four taps.
But more than anything else, Google is extending its computer intelligence to apps through Now on Tap. The ability to automatically get and use information you see inside an app begins to break the silos each one has been quietly building. Once you break the barriers between apps, mobile computing takes on a whole new silhouette.
Think of the amazing (and, yes, creepy) things that happen when you put words into a Google search box. Now put your damn phone in there and imagine what might happen. We’ll find out this fall, when Android M exits its developer preview and starts shipping on phones.
Google officially unveiled Android M today at its I/O 2015 conference.
Android M is the next major Android update, due out in the third quarter of this year, and it’s mostly about refining the experience set by Android 5.0 (Lollipop) last year. Android M will bring Android Pay, an upgraded take on Google Wallet meant to compete with Apple Pay, a brand new App Permissions feature, a Doze deep-power-savings state aimed primarily at Android tablets, and various performance, efficiency, and reliability improvements over Android L.
Android M will also bring an improved Chrome web browser, improvements to the built-in Android Intents app-linking system, and native USB Type-C support.
While Android M won’t be officially released until Q3, Google is making a developer preview of Android M available today for the Nexus 5, 6, and 9 devices, along with the Nexus Player.
Google SVP Sundar Pichai may have tipped the company’s hand on mobile payments back in Barcelona, but he offered little detail on how the system would work. At I/O 2015, though, the folks in Mountain View served up a wealth of details on the matter, including the announcement that Android Pay would be part of the Android M release. Just like Apple Pay, transactions are handled via NFC and your actual card number isn’t shared with merchants. Instead, it’ll use “a virtual account number” to handle payments. When it arrives, the system will be accepted by over 700,000 retailers (sounds familiar) like Macy’s, Whole Foods, Walgreens and many more. It’ll also be used for in-app purchases, so if you’re ordering food from Chipotle or paying for an Uber ride, you’ll be able to use Android Pay there as well. And yes, web sellers can leverage the system, too.
In terms of security, the payment tech will employ your phone’s fingerprint scanner — if it has one — to pay for items from a linked MasterCard, Visa, AmEx or Discover card. What’s more, Google says it’s still working to expand the list of banks that support Android Pay, and with AT&T, T-Mobile, and Verizon to make sure that when you buy a new device, it’ll be ready to work with the system out of the box. There’s no mention of what’ll happen to Google Wallet just yet, but reports surfaced yesterday that it would handle sending money between individuals as the folks in Mountain View completely overhaul Android phone-driven payments. That same report mentioned loyalty programs being lumped into Android Pay, but there hasn’t been any talk of that yet either.
A major theme of Windows 10 is the consistent experience provided across various devices and screen sizes, from phones to tablets to full-blown desktop PCs. Microsoft even wants to make some of the Windows 10 universe available to those who aren’t using Windows Mobile (that is to say, most of us). To make this possible, Win10 will include a Phone Companion app to help Android and iOS users set up the right Microsoft cloud services on their devices. Microsoft will also release a version of its Cortana personal assistant for those operating systems.
Based on the screenshots and video Microsoft has provided, it looks like the Phone Companion app will basically walk owners of non-Microsoft phones through the process of setting up apps like OneDrive, Office, and Xbox Music on their respective operating systems.
Through the magical power of the cloud, pictures taken on iOS or Android will then be automatically uploaded to OneDrive, making them accessible through Windows 10’s Photos app. Music stored in OneDrive will be available for playback on any device with the Xbox Music app installed. Office apps will pick up files stored in OneDrive, and notes made through Cortana and OneNote will automatically sync across devices, as well.
Speaking of Cortana, a standalone version of Microsoft’s digital assistant will be coming to iOS and Android, too. Though it won’t be as deeply integrated into these operating systems as it is in Windows Mobile, Cortana will still be able to provide reminders, take notes, and answer questions about subjects like the weather. Folks without Windows Mobile devices won’t be able to invoke the assistant by saying “Hey Cortana” or toggle settings with voice commands, though.