
Can Your iPad Run Apple iPadOS 16?

Apple’s next-gen operating systems are currently in public beta, meaning early adopters can test iOS 16, iPadOS 16, and macOS Ventura on their personal devices before a final fall release.

Naturally, OS updates mean that some older hardware gets phased out of the support cycle. However, iPadOS 16 is unique in that some features are exclusive to Apple tablets equipped with the Apple M1 chip. So, which iPads will run iPadOS 16, and how can you tell which versions have the M1 chip? We’ll break it down for you.

Stage Manager

Among the features exclusive to devices with the M1 chip is Stage Manager, Apple’s new window-focused multitasking tool. There are two ways to see if your iPad runs an M1. First, check the model number on the back of your device. Currently, the devices utilizing the M1 chip include:

  • The 5th-generation iPad Air, introduced in 2022. It comes in 64GB and 256GB iterations. The model numbers are A2588, A2589, or A2591.
  • The 5th-generation iPad Pro 12.9-inch, introduced in 2021. It comes in 128GB, 256GB, 512GB, 1TB, and 2TB versions. The model numbers are A2378, A2461, A2379, or A2462.
  • The 3rd-generation iPad Pro 11-inch, introduced in 2021. It comes in 128GB, 256GB, 512GB, 1TB, and 2TB flavors. The model numbers are A2377, A2459, A2301, or A2460.

Alternatively, you can go into your iPad’s Settings menu and tap General > About. You should see the iPad’s details at the top of the screen, including the model name. If you have an iPad Air (5th generation), iPad Pro 12.9-inch (5th generation), or iPad Pro 11-inch (3rd generation), then you have an M1 device that can utilize virtually all of the iPadOS 16 update features.
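If you're keeping tabs on several devices, the model-number list above is easy to turn into a quick lookup. Here's a minimal Python sketch; the set simply restates the models listed above, and the helper function is ours, not anything Apple provides:

```python
# M1-equipped iPad model numbers, as listed above.
M1_IPAD_MODELS = {
    # iPad Air (5th generation), 2022
    "A2588", "A2589", "A2591",
    # iPad Pro 12.9-inch (5th generation), 2021
    "A2378", "A2461", "A2379", "A2462",
    # iPad Pro 11-inch (3rd generation), 2021
    "A2377", "A2459", "A2301", "A2460",
}

def supports_stage_manager(model_number: str) -> bool:
    """True if the model number on the iPad's back belongs to an M1 model."""
    return model_number.strip().upper() in M1_IPAD_MODELS

print(supports_stage_manager("A2588"))  # True
```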

To clarify, we say virtually because of Reference Mode, a feature that offers a color-accurate screen mode via the XDR display. It’s ideal for people who do color work, such as 3D modeling, painting, and photo editing, but it’s exclusive to the iPad Pro 12.9-inch (5th generation).

12.9‑inch iPad Pro can now display reference color.

If you’re not too concerned with Stage Manager or the M1 chip, and just want to know which iPad models get iPadOS 16 this fall, here is a breakdown. At the time of this writing, iPadOS 16 is compatible with the following devices:

  • iPad Pro (all models)
  • iPad Air (3rd generation and later)
  • iPad (5th generation and later)
  • iPad mini (5th generation and later)

Check the Settings menu using the steps mentioned above to see which iPad you own.

Apple’s OS public betas are currently live, and enrolling your device is a cinch. These updates feature excellent improvements, but if you’re a newbie, or don’t have an extra device that can load up the betas, wait for the stable versions of these OSes, which arrive this fall.

For more, read our impressions of iOS 16 to get a gist of what’s coming to the iPhone; MacBook users should take a peek at our macOS Ventura preview, as well.


Are You Being Followed? Use a Raspberry Pi to Find Out

In the movies, a hero can always tell he’s being followed because the goons tasked with following him never blend in. In real life, figuring out if someone is tailing you is much trickier, and can be a matter of life and death. At the Black Hat security conference, a speaker demonstrated a low-cost device that looks for the tell-tale wireless signature of bad guys on your tail.


Watch Your Back

Matt Edmondson, who works with the US Department of Homeland Security, was approached by a friend from a government agency he declined to name onstage at Black Hat. This friend worked with confidential sources, and one in particular had links to a terrorist organization. The friend was concerned that being followed after meeting with the source could expose his government connections and put the source in danger.

The traditional spycraft method of surveillance detection, Edmondson explained, is to change your route and see who does the same—such as exiting the highway and then getting back on again. “It’s really obvious the [Washington, D.C.] Beltway was designed as a surveillance-detection route,” quipped Edmondson, perhaps joking, perhaps not.

Edmondson said his friend asked if he could revisit an idea he had discussed years ago: Using network-detection technology to scan for devices that were following you.

Even if you’re being tailed by a nation-state-backed surveillance team, “isn’t there still a really good chance they have a phone in their pocket?” asked Edmondson.


Tattletale Devices

This works because so many of our devices are constantly trying to communicate with other devices and various wireless networks. Many mobile devices, for example, are constantly seeking familiar wireless networks to connect to. Other devices, such as AirPods, Bluetooth speakers, laptops, and so on, can be similarly chatty.

All those wireless conversations can be easily detected. If the same devices are in your vicinity repeatedly, Edmondson reasoned, it’s likely you’re being followed.

At PCMag, we’ve looked at similar devices before. The PwnPro was a multi-thousand-dollar device with sophisticated backend software that could monitor devices within 1,000 feet. It, too, could identify specific devices and usage patterns, but was far from affordable or portable.


Simple Components

To build a device that could scan for wireless communications and alert you when such a device stayed in your vicinity, Edmondson set out to use low-cost materials, and settled on the Raspberry Pi single-board computer. “How many of us have multiple Raspberry Pis sitting in your closet doing absolutely nothing?” Edmondson joked.

Add to that a low-cost touch screen purchased off Amazon, a portable power bank, and a USB wireless adapter (Alfa AWUS036ACM), and Edmondson was off and running.

A view of the ‘minimum viable product’ version of Edmondson’s detection device.

Scanning duties on the device would be handled by Kismet, a free and open-source wireless monitoring tool. Kismet scans the airwaves and records its findings in an SQLite database. “Everything else is shoddy python code,” said Edmondson.
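Because Kismet’s log is just an SQLite file, pulling recently seen devices out of it takes only a few lines of standard-library Python. The sketch below is illustrative, not Edmondson’s code, and it assumes the devices table layout of a default Kismet log (with devmac and last_time columns):

```python
import sqlite3
import time

def devices_seen_since(db_path: str, since_epoch: float) -> set[str]:
    """Return MAC addresses Kismet has logged since the given timestamp.

    Assumes a default Kismet log whose 'devices' table includes
    devmac and last_time columns; adjust the query to your schema.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT devmac FROM devices WHERE last_time >= ?",
            (since_epoch,),
        )
        return {mac for (mac,) in rows}
    finally:
        conn.close()

# Example: everything seen in the last five minutes.
recent = devices_seen_since("scan-log.kismet", time.time() - 300)
```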

Users interact with Edmondson’s device via the touch screen and a custom interface Edmondson described as “literally the worst user interface you’ve ever seen.” It consists of several large, gray buttons, which are intended to be easily pressed while driving. For this task, Edmondson explained, “you don’t want a nice interface designed by Apple, you want something designed by Fisher-Price.”

Once activated, Edmondson’s device compiles data on the surrounding devices into lists broken down by time. If it detects something that already appeared in the list from 5-10 minutes ago, or 15-20 minutes ago, that’s a sign someone might be on your tail.
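The comparison itself boils down to set intersection across those time windows. A minimal sketch of the idea (the window sizes follow the talk; the function is ours):

```python
def possible_tails(now: set[str],
                   five_to_ten_min_ago: set[str],
                   fifteen_to_twenty_min_ago: set[str]) -> set[str]:
    """Devices visible right now that also showed up in an earlier window."""
    return now & (five_to_ten_min_ago | fifteen_to_twenty_min_ago)
```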


A Few Challenges

There were still some challenges with the device, however. First, Edmondson needed to build in a mechanism where detected devices could be added to an ignore list. That way, trusted devices wouldn’t trigger an alert.

Edmondson’s presentation showed a better, more neatly arranged version of his device.

During a field test in the Arizona desert, Edmondson discovered another problem: MAC address randomization. This is a security feature of many modern devices, where wireless requests are sent with a random, spoofed MAC address.

Edmondson’s solution was to also look at what Wi-Fi networks devices were asking for. If the same Wi-Fi network request appears again and again, that probably means a single device is nearby. Edmondson said that this could possibly be expanded upon, since tracing the location of the requested Wi-Fi networks could tell you where the device had been previously. Even the requested Wi-Fi network name could contain clues. Edmondson said he also wanted to add a GPS component, so it was possible to see where a potential follower first appeared.
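In code, that workaround amounts to keying the same window comparison on probed SSIDs rather than on (randomized) MAC addresses. A hedged sketch, assuming each observation window has already been reduced to the set of network names probed for during that slice:

```python
from collections import Counter

def recurring_probed_ssids(windows: list[set[str]],
                           min_windows: int = 2) -> set[str]:
    """Network names that keep showing up across observation windows.

    An SSID probed for in several slices suggests the same device is
    still nearby, even if its MAC address keeps changing.
    """
    counts = Counter(ssid for window in windows for ssid in window)
    return {ssid for ssid, n in counts.items() if n >= min_windows}
```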

In his talk, Edmondson didn’t reveal whether the device was ever practically put to the test, or what became of his friend’s informant. He did, however, bemoan the lack of similar detection technology. “There’s so much technology out there to stalk on people and invade their privacy and very little to protect yourself,” he said.

Keep reading PCMag for the latest from Black Hat.


Subverting Deep Security in Windows

I picture a scene from a heist movie. The bank boasts of its new, ultimate security force inside the locks, walls, and lasers. And the heist crew looks for ways to subvert that system. Can we slip one of our people into the defense force? Use bribes or threats to compromise a guard? Maybe just find a guard who’s sloppy?

While it’s a lot more technical, the technique for subverting the Early Launch Antimalware (ELAM) system in Windows, described by Red Canary principal threat researcher Matt Graeber in his Black Hat briefing, is similar to that scenario.

Graeber explained that an ELAM driver is secured against tampering, and it runs so early in the boot process that it can evaluate other boot-time drivers, with the potential to block any that are malicious. “To create this driver, you don’t have to implement any early launch code,” Graeber explained. “The only thing you need is a binary resource with rules that say which signers are allowed to run as Antimalware Light services. And you have to be a member of the rather exclusive Microsoft Virus Initiative program.”

“I had to investigate how the rules are implemented,” said Graeber. He then described just how he analyzed Microsoft Defender’s WdBoot.sys to determine the expected structure for these rules. In effect, each rule says that any program signed with a specific digital certificate is allowed to run as an Antimalware Light service, which affords it serious protections.
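For readers who want to poke at a driver themselves, those rules live in a custom resource inside the signed binary; Microsoft’s ELAM documentation names the resource MicrosoftElamCertificateInfo with the type MSElamCertInfoID. Below is a rough sketch using the pefile library to dump that raw blob; the loose substring match is our heuristic, and parsing the entries is left to the reader:

```python
import pefile

def dump_elam_rules(driver_path: str) -> bytes | None:
    """Return the raw ELAM rule resource from a signed driver, if present.

    The resource naming convention (type MSElamCertInfoID, name
    MicrosoftElamCertificateInfo) follows Microsoft's ELAM docs; the
    substring match below is just a heuristic.
    """
    pe = pefile.PE(driver_path)
    if not hasattr(pe, "DIRECTORY_ENTRY_RESOURCE"):
        return None
    for res_type in pe.DIRECTORY_ENTRY_RESOURCE.entries:
        if res_type.name and "ELAMCERTINFO" in str(res_type.name).upper():
            for res_name in res_type.directory.entries:
                for res_lang in res_name.directory.entries:
                    rva = res_lang.data.struct.OffsetToData
                    size = res_lang.data.struct.Size
                    return pe.get_data(rva, size)
    return None
```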

It’s not possible to swap in an unapproved driver, since each must be Microsoft-approved. And anti-tampering constraints mean it’s equally impossible to subvert an existing driver. “ELAM is an allowlist for Antimalware Light services,” Graeber mused. “What if it’s overly permissive? Does there exist an ELAM driver that may be overly permissive?”


A Grueling Search

Graeber relied on many resources in his search for a lax driver, among them VirusTotal Intelligence. You may be familiar with VirusTotal’s free malware check, which lets you submit a file or a hash and have it checked by around 70 antivirus engines. VirusTotal Intelligence provides much broader access to detailed information about just about every file and program in existence.

“Hunting for ELAM drivers, I got 886 results from VirusTotal,” said Graeber. “I filtered the list to validate results and got it to 766. I identified many vendors with ELAM drivers, some of them odd.” Here, Graeber showed a list that included one blank vendor name and several that looked incomplete. “If some of the vendors are odd, maybe there’s one rule set that’s odd.”

In the end, he discovered five certificates from four security companies that, as he hoped, provided a way to subvert ELAM. Without going into detail about certificate chains, suffice it to say that any program with one of these certificates in its chain could run in the protected Antimalware Light mode. All he had to do was cross a list of such programs with VirusTotal’s list of malware to get a rogue’s gallery of malicious programs with the potential to run protected.


How to Weaponize This Weakness?

At this point, the talk headed into the technical deep end. Graeber described searching living-off-the-land binaries (LOLBins) for an abusable executable, coming up with a suitable version of MSBuild, and getting past various obstacles to let him run arbitrary code. I’m sure the bright programmers in the audience were nodding along in admiration.

After a live demo, Graeber noted the possibility of various payloads. “Your own malware is protected, and you can kill other protected processes,” he said. “We effectively killed the Microsoft Defender engine in the demo.” The code is public, though Graeber mentioned that “I had to change some filenames to protect innocent vendors.”


How to Detect and Mitigate This Attack?

“This is abusing the features of ELAM, not a vulnerability,” said Graeber. “I can’t begin to speculate why any of those certificates would be allowed. Shame on Microsoft! Let’s hope for a robust fix in the future. Vendors, I’m not shaming any of you here. I don’t even blame vendors for the overly permissive drivers, since Microsoft allowed them. I encourage any vendor to audit the rule sets of your signed ELAM drivers. You wouldn’t want to be the one who ruined the entire ecosystem.”

Graeber does hold out hope for a fix. “I reported this to Microsoft in December of 2021,” he said. “They acknowledged the issue, and the Defender team really owned this. They’ve taken it very seriously and sent notification to Microsoft Virus Initiative members. If you’re a member, you already know.”

He concluded by offering resources for other researchers to duplicate his work. That might sound like he’s putting weapons in the hands of malware coders, but fear not. Graeber supplied the framework for further investigation, but anyone trying to use it will have to duplicate his search for a permissive driver and an abusable payload.

Still, the picture of malicious software taking over the secure bunker that ELAM provides and killing off the defending programs is alarming. Let’s hope the security community, Microsoft in particular, comes up with a defense quickly.


Your Macs Aren’t as Secure as You Think

When the Macintosh computer was new, Apple touted the fact that Macs, unlike PCs, didn’t get viruses. We know better now; Macs do get hit with malware, even ransomware. But the fact remains that macOS is intrinsically more secure than Windows. That’s why security researcher Thijs Alkemade’s claim to break through all macOS security layers with one attack is such a gut punch. An excited audience of Black Hat conference attendees, both in-person and virtual, clamored to hear details about this surprising claim.


What Makes macOS So Secure?

“I’ve been a Mac user all my life,” said Alkemade. “It’s a system I know well. The early Mac platform was based on Unix. In that platform, users are security boundaries but processes are not. For files, every file has an owner, and nine flags define permissions. The root user has full access to modify all files, memory, even the kernel. That was the old model.

“System Integrity Protection (SIP) was introduced in 2015 with El Capitan,” he continued. “It put a security layer between the root user and the kernel, protecting the system from modification even by the root user. Root access is no longer enough to compromise the system. One of the other names for this system is rootless. Some people think it means Apple is going to take root away, like on the iPhone. But actually it just means that root is less powerful. Dangerous operations require entitlements, and each macOS release adds more and more restrictions.

“But…macOS is old, large, and established,” said Alkemade. “A lot of system parts were written before the security model changed. It’s not possible to reconstruct the entire system.”

Alkemade listed several existing techniques that could be used to enable process injection but dismissed them as incidental. “It’s much nicer to have process injection that you can apply everywhere.”


Where’s the Security Hole?

Where’s the weakness? Alkemade didn’t keep listeners in suspense. “It’s in the saved state feature,” he explained. “When you shut down, you check a box if you want an app to reopen when you start again. It even restores unsaved documents. It largely works automatically. Developers don’t have to do anything to use it, but they can extend it.”

The process of saving an app’s state is called serializing, and the serialized data is meant to be encrypted. However, encryption is not required, which allows a clever programmer to abuse this feature. “I create a saved state using a malicious serialized object and write it to the directory of another application’s state. It automatically deserializes and executes within the other app, and can use the entitlements and permissions of that other app, achieving process injection.”
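You can see the footprint of this feature on any Mac: each application gets its own saved-state folder under the user’s Library. A harmless Python sketch that just lists those folders (directory layout as commonly observed; this is not exploit code):

```python
from pathlib import Path

# Each app stores its restorable state in a <bundle-id>.savedState folder.
saved_state_root = Path.home() / "Library" / "Saved Application State"

for app_state in sorted(saved_state_root.glob("*.savedState")):
    contents = [p.name for p in app_state.iterdir()]
    print(f"{app_state.name}: {contents}")
```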

Alkemade walked the audience through the numerous barricades he encountered, and the techniques he evolved to circumvent them. He did admit, “I have to skip a few steps for time reasons and disclosure reasons.” I won’t attempt to explain the details here, as you need to be a programmer to totally grasp them. The key point is, it worked.


What Can You Do With Process Injection?

Alkemade detailed three possible uses for the exploit: escape the sandbox, escalate privilege, and bypass System Integrity Protection.

These are extraordinary claims, given those outcomes are practically the Holy Grail of hacking. Bypassing SIP in particular gives your program supreme power. “We can read email or Safari history of all users, or grant ourselves permission to use the microphone or webcam,” Alkemade explained. “Our process is now protected by SIP, which gives it powerful persistence. We can load a kernel extension without the user’s knowledge or permission.”

Alkemade proceeded to demonstrate these three hacks for the appreciative audience. Only the best Black Hat demos get their own round of applause!


Should We Worry?

This security hole is already fixed in macOS Monterey, but app developers need to do their part. “Developers can and should make apps accept only secure serialized objects,” said Alkemade. “Apple has already done that with all their apps, but existing third-party apps need to do the same.”

As it turns out, this new protection isn’t just for Monterey. “I just learned that they back-ported it to Big Sur and Catalina,” said Alkemade. “The Catalina release notes are updated, but not those for Big Sur. I got a spontaneous email from Apple asking to share the contents of my talk in advance. Two hours ago I got confirmation that it’s fixed in Big Sur, though I haven’t had time to verify it.”

“Apple keeps adding layers to macOS,” concluded Alkemade. “Adding new layers to an established system is hard, so code written 10 or more years ago is today’s attack surface. More layers may not increase the effort for attackers, not if you can use the same bug to bypass all of them.”


Meta Expands Test of End-to-End Encryption Features in Messenger

Meta is testing additional end-to-end encryption (E2EE) features in Facebook Messenger—and not just because it has been roundly criticized for not enabling these protections by default.

“We’re working hard to protect your personal messages and calls with end-to-end encryption by default on Messenger and Instagram,” Meta says. “Today, we’re announcing our plans to test a new secure storage feature for backups of your end-to-end encrypted chats on Messenger, and more updates and tests to deliver the best experience on Messenger and Instagram.”

The marquee change is the introduction of encrypted backups. Messenger currently stores E2EE messages on a single device; there is no way to access them on another device. (At least in theory.) This can be inconvenient for people who lose their primary device, but if the company had backed up the messages without encrypting them, Messenger users would be at risk.

That isn’t a theoretical problem. Apple uses E2EE for iMessage, but many people choose to back up their message histories via iCloud. That backup isn’t end-to-end encrypted, so even though the messages use E2EE in transit, they can still be read from the iCloud backup. Meta avoids that problem with Messenger by restricting E2EE messages to a single device.

Now the company is testing what it calls Secure Storage. This encrypted backup will allow people to recover their messages using the method of their choice—supplying a PIN or entering a generated code—if they lose access to their device. Meta says it will also let Messenger users back up their E2EE messages to “third-party cloud services,” if they prefer.

“For example, for iOS devices you can use iCloud to store a secret key that allows access to your backups,” Meta says. “While this method of protecting your key is secure, it is not protected by Messenger’s end-to-end encryption.” (Which is effectively the company’s way of saying that it’s not responsible if otherwise-secure Messenger chats are accessed via iCloud.)

Meta will start testing Secure Storage on Android and iOS this week. The feature isn’t available on Messenger’s website or desktop apps, though, nor for “chats that aren’t end-to-end encrypted.”

The company says it will also “begin testing the ability to unsend messages, reply to Facebook Stories, and offer other ways to access your end-to-end encrypted messages and calls”; test an extension dubbed Code Verify that “automatically verifies the authenticity of the code” on Messenger’s website; and make E2EE messages available to more Instagram users.

But perhaps the most important test will be making E2EE the default for some Messenger users rather than requiring people to enable these protections on a chat-by-chat basis. Meta says:

“This week, we’ll begin testing default end-to-end encrypted chats between some people. If you’re in the test group, some of your most frequent chats may be automatically end-to-end encrypted, which means you won’t have to opt in to the feature. You’ll still have access to your message history, but any new messages or calls with that person will be end-to-end encrypted. You can still report messages to us if you think they violate our policies, and we’ll review them and take action as necessary.”

Making the most secure option the default is the best way to encourage people to protect themselves. This has become even more important in a post-Roe country, where law enforcement can use message histories to build cases against people who’ve had or sought abortions, and already has. (Meta tells Wired this rollout wasn’t prompted by those concerns.)

Meta says it “will continue to provide updates as we make progress toward the global rollout of default end-to-end encryption for personal messages and calls in 2023.”


Researchers Stalk and Impersonate Tracking Devices (for Safety)

At Black Hat 2022, security researchers showed off a new attack that goes after tracking systems built on ultra-wideband (UWB) radio technology. They were able to stalk these tracking devices without their target’s knowledge, and even make targets appear to move at their attackers’ will.

A key use of UWB is real-time locating systems (RTLS), where a series of transceiver stations called anchors track the location of small, wearable devices called tags in a specific area, in real-time. This has a number of applications, from simple tasks like tracking personal items to high-stakes scenarios like infectious disease contact-tracing and factory safety mechanisms.
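As background on what the server at the center of an RTLS is doing, a tag’s position is typically recovered from the anchors’ range measurements by multilateration. The sketch below is the textbook least-squares version of that computation, not any vendor’s implementation, and the anchor coordinates and distances are made up:

```python
import numpy as np

def locate_tag(anchors: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """Estimate a 2-D tag position from anchor coordinates and ranges.

    Linearizes the range equations against the first anchor and solves
    the resulting system with least squares -- the textbook approach,
    not any particular RTLS vendor's algorithm.
    """
    x0, y0 = anchors[0]
    d0 = distances[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Four anchors at the corners of a 10 m x 8 m room (illustrative values).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [0.0, 8.0]])
distances = np.array([5.0, 8.1, 8.1, 5.0])
print(locate_tag(anchors, distances))  # roughly (3, 4)
```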

“Security flaws in this technology, especially in industrial environments, can be deadly,” says Nozomi Networks Security Research Evangelist Roya Gordon.

You may not be familiar with UWB, but it’s familiar with you. Apple has integrated it into mobile devices starting with the iPhone 11, as well as modern Apple Watches, HomePods, and AirTags. It’s also being used in large-scale infrastructure projects, like the effort to drag the New York City Subway signaling system into the 21st century.

Although Apple AirTags use UWB, the systems the team looked at were markedly different.


Standard Loopholes

What’s the problem with UWB RTLS? Although there is an IEEE standard for RTLS, it doesn’t cover the synchronization or exchange of data, the research team explains. Lacking a required standard, it’s up to individual vendors to figure out those issues, which creates opportunities for exploitation.

In its work, the team procured two off-the-shelf UWB RTLS systems: the Sewio Indoor Tracking RTLS UWB Wi-Fi Kit, and the Avalue Renity Artemis Enterprise Kit. Instead of focusing on tag-to-anchor communication, the Nozomi Networks team looked at communications between the anchors and the server where all the computation happens.

The team’s goal was to intercept and manipulate the location data, but to do that, they first needed to know the precise location of each anchor. That’s easy if you can see the anchors, but much harder if they’re hidden or you don’t have physical access to the space. But Andrea Palanca, Security Researcher at Nozomi Networks, found a way.

The anchors could be detected by measuring the power output of their signals, and the precise center of the space found by watching for when all the anchors detect identical signal data for a single tag. Since RTLS systems require the anchors to be arranged to form a square or rectangle, some simple geometry can pinpoint the anchors.

But an attacker wouldn’t even need pinpoint precision; the estimated anchor positions can be off by 10% and the attack still works, the team says.


Attacking RTLS

With all the pieces in place, the team showed off their location-spoofing attacks in a series of demos. First, they showed how to track targets using existing RTLS systems. We’ve already seen mounting concern over malicious uses of AirTags, where a bad guy tracks a person by hiding an AirTag on them. In this attack, the team didn’t need to hide a device; they simply tracked the tag their target already carried.

They also demonstrated how spoofing a tag’s movements in a COVID-19 contact-tracing scenario could create a false exposure alert, or prevent the system from detecting an exposure.

Another demo used a manufacturing facility mockup, where RTLS data was used to shut down machines so a worker could enter safely. By messing with the data, the team was able to stop production at the faux factory by tricking the system into thinking a worker was nearby. The opposite could be more dire. By making it seem as if the worker had left the area when they were actually still there, the machine could be reactivated and potentially injure the worker.


Practical Complications

The good news for owners of these systems is that these attacks aren’t easy. To pull it off, Luca Cremona, a Security Researcher at Nozomi Networks, first had to compromise a computer inside the target network, or add a rogue device to the network by hacking the Wi-Fi. If a bad guy can get that kind of access, you’ve got a lot of problems already.

Unfortunately, the team didn’t have any easy answers for securing RTLS in general. They kludged data encryption onto an RTLS system, but found that it created so much latency as to make the system unusable for real-time tracking.

The best solution the team presented was for the IEEE standard to be revised to cover the synchronization and exchange of data, requiring manufacturers to meet standards that could prevent RTLS attacks like this.

“We can’t afford to have those loopholes in standards,” Gordon says.

Keep reading PCMag for the latest from Black Hat.


Microsoft, CISA Warn of Actively Exploited ‘DogWalk’ Windows Bug

Microsoft has warned its customers that a vulnerability known as DogWalk, which affects every recent version of Windows and Windows Server, is being actively exploited by attackers.

DogWalk (CVE-2022-34713) is a high severity vulnerability in the Microsoft Windows Support Diagnostic Tool (MSDT) that can be exploited to enable remote code execution on vulnerable devices, the company says in a Microsoft Security Response Center (MSRC) update.

There are many such devices; DogWalk affects Windows 7, 8.1, 10, and 11 as well as several versions of Windows Server, Microsoft says in the MSRC update. More than 1.4 billion devices currently run Windows 10 or 11 alone, the company says on its website.

Microsoft does reassure Windows users that “exploitation of the vulnerability requires that a user open a specially crafted file,” which means attackers can’t just force their way onto a vulnerable system. But it’s not particularly hard to get someone to open a malicious file.

“In an email attack scenario,” Microsoft says, “an attacker could exploit the vulnerability by sending the specially crafted file to the user and convincing the user to open the file.” Or they could upload the malicious file to a website and just wait for someone to download it.

This update has prompted the US Cybersecurity and Infrastructure Security Agency (CISA) to add CVE-2022-34713 to its Known Exploited Vulnerabilities catalogue. That means federal agencies have until Aug. 30 to patch their systems against the vulnerability.

That might not seem like a long time, especially since Microsoft released the Windows and Windows Servers patches related to DogWalk on Aug. 9 as part of Patch Tuesday. But attackers have known about this flaw in MSDT for at least 2.5 years at this point.

BleepingComputer reports that DogWalk was initially disclosed by a security researcher named Imre Rad in January 2020. Microsoft initially dismissed the report, Rad says, but now it’s finally released a fix and confirmed that attackers have exploited the flaw.


Hate Gmail’s New Look? Here’s How to Roll It Back

Google started rolling out a redesigned Gmail at the end of July, and as is the case with just about every interface change, not everyone loves it. I, for one, find the new colors distracting, the layout cramped, and the addition of more icons needless. If the new design hasn’t taken over your mail yet, it will any day now.

If you want to go back to the old Gmail look, you can do so in a few clicks.

  1. Open Gmail and click the Settings icon in the upper right corner.
  2. In the panel that appears, choose “Go back to the original view.”
  3. Before you can reload the interface and get your inbox back to the way it used to look, you also get an opportunity to tell the Gmail team why you’re choosing the old look instead. (Below I have a few suggestions for what you can tell them.)
  4. Once you either submit feedback or decline to give it by leaving the field blank and selecting Reload, your view refreshes and you’re returned to Gmail’s previous design.



What Do You Think of Gmail’s New Look?

So, what might you put into that feedback box?

For starters, the left sidebar now feels more cramped than it did before. The addition of new icons in the far left certainly doesn’t help. And the color palette seems poorly thought out, with multiple shades of blue that aren’t complementary to one another. Mentioning any or all of these would be helpful, in my opinion.

Gmail's feedback field for explaining why you don't want its new look

To make your Gmail even better, see our list of the best tips for Gmail and three ways to improve your Gmail inbox.


Google Posts Yet Another Plea for Apple to Support RCS Messaging in iMessage

Google is making yet another attempt to persuade Apple to support the RCS phone-messaging standard in its own iMessage service, but this time it’s aiming the sales pitch at iPhone users.

At a “Get the Message” site posted Tuesday, Google calls out the least-common-denominator nature of texts between iPhone and Android users: Everybody loses such features as encryption, typing indicators, and read receipts, which are supported separately by Apple’s iMessage and the Google-backed Rich Communication Services (RCS), also called “chat features” in Android.

“Apple creates these problems when we text each other from iPhones and Android phones, but does nothing to fix it,” the page declares. “Apple turns texts between iPhone and Android into SMS and MMS, out-of-date technologies from the 90s and 00s.”

Subsequent paragraphs emphasize how iPhone users don’t only suffer the indignity of seeing Android-using friends’ messages in green bubbles but also miss features they enjoy in conversations with other iPhone users. For example: “Without read receipts and typing indicators, you can’t know if your Android friends got your text and are responding.”

Privacy also loses out in cross-platform conversations, the page notes: “SMS and MMS don’t support end-to-end encryption, which means those messages are not secure.”

(But while RCS supports end-to-end encryption in one-to-one Android chats, group Android chats today only get encryption in transit, with “e2e” security advertised as coming later this year. Bringing this same security to chats between different apps and different platforms would be much harder.)

Apple has never shipped an iMessage client for Android, and court documents unearthed during Epic Games’ Fortnite lawsuit against Apple revealed that the Cupertino, Calif., company rejected an iMessage port because it might weaken iMessage’s customer lock-in effect.

Google has instead tried in vain to get Apple to add RCS support to iMessage–most recently, at its I/O developer conference in May. But while this latest sales pitch may win over some iPhone users, Apple has a history of ignoring requests from users that don’t square with its own product vision.

Google, meanwhile, has struggled to get RCS going in Android. It didn’t get all three major carriers lined up to ship its own Messages app until 2021, leaving an enormous installed base of Android phones running carrier- or manufacturer-specific messaging apps that don’t speak RCS. And Google still hasn’t added RCS support to its own Google Voice calling and messaging service.

Finally, Google has yet to provide third-party developers with the coding framework they’d need to add RCS support to such messaging apps as Signal and WhatsApp–the two services Google’s new page endorses as alternatives for iPhone users anxious to avoid today’s “broken experience” of cross-platform communication.

Developer posts in a thread on Signal’s site blame that on Google not providing the right API, and Google has yet to say when it might ship that framework.


Google’s ‘Read Along’ Learning Tool Now Available on the Web

Google is rolling out its Read Along learning tool for the web.

The app, which is supposed to help children learn how to read, has been exclusive to Android since it was released in India in 2019. (It was called Bolo at the time; Google changed the name for its global launch in 2020.) Now it’ll finally be available to kids without Android devices.

“With the web version,” Google says, “parents can let their children use Read Along on bigger screens by simply logging into a browser from laptops or PCs at readalong.google.com.” The site works in Chrome, Firefox, and Edge; support for additional browsers is “coming soon.”

Read Along has children read stories—which are curated by Google and feature varying subject matter and levels of complexity—to a “reading assistant” called Diya that “listens and gives both correctional and encouraging feedback to help kids develop their reading skills.”

Google says all of the audio processing required to enable this functionality happens on-device; the recordings aren’t supposed to be sent to its servers. More information about the kinds of data the company is collecting via the web version of Read Along is available via its privacy policy.

Google says more than 30 million children have read over 120 million stories via Read Along since the app’s debut in 2019. (That averages out to only about four stories per child, which suggests many kids, or their parents, don’t stick with the app for long.) The company will release more stories later this year.
