picture: Playtonic Friends, MegaWobble / via Steam
Yooka-Laylee developer and publisher Playtonic Games has issued a warning via social media about a new scam that’s currently making the rounds – one linked to a game it is publishing.
Someone is apparently offering beta test access to Lil Gator Game, MegaWobble’s upcoming alligator adventure. The game is slated to release on the Nintendo Switch and several other platforms this year. The offer is, in fact, a scam.
Here is the full statement from Playtonic, which advises players not to click on any links provided in the scam messages:
“It has come to our attention that someone is offering beta tester access for Lil Gator Game. We can confirm this is a scam and not from Playtonic or LilGatorGame. If we do offer this to our communities, we will announce it on Twitter and not via any other channels.
“Please do not click on the links in the scam! If you receive any suspicious messages claiming to be from Playtonic, please let us know. Stay safe everyone”
In Lil Gator Game, players embark on a “wonderful” adventure – making new friends and discovering everything the island has to offer while climbing, swimming, tobogganing, and sliding. You can learn more about the game in our original story.
Keep an eye out for a Lil Gator release date announcement in the near future.
OpenAI’s DALL-E 2 is getting free competition. Behind it are an open-source AI movement and the startup Stability AI.
Artificial intelligence that can generate images from text descriptions has been making rapid progress since early 2021. At that time, OpenAI showed impressive results with DALL-E 1 and CLIP. The open-source community used CLIP for numerous alternative projects throughout the year. Then in 2022, OpenAI released the impressive DALL-E 2, Google showed Imagen and Parti, Midjourney reached millions of users, and Craiyon flooded social media with AI images.
The startup Stability AI has now announced the release of Stable Diffusion, another DALL-E 2-like system, which will initially be made available gradually to researchers and other groups via a Discord server.
After a testing phase, Stable Diffusion will then be released for free – the code and a trained model will be published as open source. There will also be a hosted version with a web interface for users to test the system.
Stability AI funds free DALL-E 2 competitor
Stable Diffusion is the result of a collaboration between researchers at Stability AI, RunwayML, LMU Munich, EleutherAI and LAION. The research collective EleutherAI is known for its open-source language models GPT-J-6B and GPT-NeoX-20B, among others, and is also conducting research on multimodal models.
The non-profit LAION (Large-scale Artificial Intelligence Open Network) provided the training data with the open-source LAION 5B dataset, which the team filtered with human feedback in an initial testing phase to create the final LAION-Aesthetics training dataset.
Patrick Esser of Runway and Robin Rombach of LMU Munich led the project, building on their earlier work in the CompVis group at Heidelberg University. There, they created the widely used VQGAN and Latent Diffusion. The latter, combined with research from OpenAI and Google Brain, served as the basis for Stable Diffusion.
Stability AI, founded in 2020, is backed by mathematician and computer scientist Emad Mostaque. He worked as an analyst for various hedge funds for several years before turning to public-interest work. In 2019, he helped found Symmitree, a project that aims to lower the cost of smartphones and Internet access for disadvantaged populations.
With Stability AI and his private fortune, Mostaque aims to foster the open-source AI research community. His startup previously supported the creation of the LAION 5B dataset, for example. For training Stable Diffusion, Stability AI provided servers with 4,000 Nvidia A100 GPUs.
“Nobody has any voting rights except our 75 employees — no billionaires, big funds, governments, or anyone else with control of the company or the communities we support. We’re completely independent,” Mostaque told TechCrunch. “We plan to use our compute to accelerate open source, foundational AI.”
Stable Diffusion is an open-source milestone
Currently, a test of Stable Diffusion is underway, with access being granted in waves. The results, which can be seen on Twitter, for example, show that a real DALL-E 2 competitor is emerging here.
Stable Diffusion is more versatile than Midjourney, but has a lower resolution than DALL-E 2. | Image: GitHub
Unlike DALL-E 2, Stable Diffusion can generate images of prominent people and other subjects that OpenAI prohibits in DALL-E 2. Other systems like Midjourney or Pixelz.ai can do this as well, but they do not achieve comparable quality at the diversity seen in Stable Diffusion – and none of the other systems are open source.
turns out #stablediffusion can do really awesome interpolations between text prompts if you fix the initialization noise and slerp between the prompt conditioning vectors: pic.twitter.com/lWOoETYVZ3
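The trick described in the tweet above is spherical linear interpolation (slerp). As a rough illustration of the underlying math (a minimal NumPy sketch, not code from the Stable Diffusion project):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-7):
    """Spherical linear interpolation between two conditioning vectors.

    t is the interpolation fraction in [0, 1]; v0 and v1 are the
    embedding vectors for the two prompts.
    """
    # Angle between the two vectors, computed on normalized copies
    dot = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    if theta < eps:
        # Vectors are nearly parallel: plain linear interpolation suffices
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```

Sweeping t from 0 to 1 while keeping the initialization noise fixed then yields a smooth morph between the two prompts’ images.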
Stable Diffusion is expected to run on a single graphics card with 5.1 gigabytes of VRAM – bringing AI technology to consumer hardware that until now has only been available through cloud services. Stable Diffusion thus offers researchers and interested parties without access to GPU servers the opportunity to experiment with a modern generative AI model. The model is also said to run on MacBooks with Apple’s M1 chip, although image generation there takes several minutes instead of seconds.
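Once the weights are public, running the model locally could look roughly like the sketch below, which uses Hugging Face’s diffusers library. The model ID here is a placeholder, and the exact loading details may differ from whatever Stability AI ultimately publishes:

```python
# Minimal local text-to-image sketch with Hugging Face diffusers.
# "CompVis/stable-diffusion" is a hypothetical model ID; substitute the
# identifier of the officially released checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion",   # hypothetical model ID
    torch_dtype=torch.float16,    # half precision roughly halves VRAM use
)
pipe = pipe.to("cuda")            # move the model to the GPU

image = pipe("an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```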
OpenAI’s DALL-E 2 gets open-source competition, backed by the open-source community and the startup Stability AI. | Image: GitHub
Stability AI also wants to enable companies to train their own variants of Stable Diffusion. Multimodal models are thus following the path previously taken by large language models: away from a single provider and toward the broad availability of numerous alternatives through open source.
Runway is already researching text-to-video editing enabled by Stable Diffusion.
#stablediffusion text-to-image checkpoints are now available for research purposes upon request at https://t.co/7SFUVKoUdl
Working on a more permissive release & inpainting checkpoints.
Of course, with open access and the ability to run the model on a widely available GPU, the opportunity for abuse increases dramatically.
“A percentage of people are simply unpleasant and weird, but that’s humanity,” Mostaque said. “Indeed, it is our belief this technology will be prevalent, and the paternalistic and somewhat condescending attitude of many AI fans is misguided in not trusting society.”
Mostaque stresses, however, that free availability allows the community to develop countermeasures.
“We are taking significant safety measures including formulating cutting-edge tools to help mitigate potential harms across release and our own services. With hundreds of thousands developing on this model, we are confident the net benefit will be immensely positive and as billions use this tech harms will be negated.”
More information is available on the Stable Diffusion GitHub. You can find many examples of Stable Diffusion’s image generation capabilities in the Stable Diffusion subreddit. Go here for the Stable Diffusion beta signup.
Herschel ‘DrDisrespect’ Beahm is famous for his rant style and gameplay commentary, which have earned him huge fan support over the years. Judging by a recent tweet, however, he is not very happy with how his YouTube streams are being treated.
When DrDisrespect joined YouTube 12 years ago, he might have dreamed of making it big there, but since the Twitch incident in 2020, his streaming career has been at an all-time low.
DrDisrespect seeks support from YouTube
In a recent tweet from his personal account, the 40-year-old ‘Two Timer’ confessed that he has not been given the love and support he deserves from YouTube. Regarding his YouTube streams, he said, “No follow, zero communication, absolutely no love.”
As a matter of fact, Doc has around 4.05 million subscribers on YouTube, yet lower viewership than many streamers with far smaller subscriber counts. This raises concerns about his average viewership per hour streamed, which is declining steadily. His permanent move to YouTube after his lawsuit against Twitch has also left a scar on his streaming career.
It’s amazing to think the platform Doc streams on doesn’t support him one bit.
No follow, zero communication, absolutely no love.
The impact we’ve had on YouTube streaming growth is insane.
We’ve been taken advantage of…. Jesus YouTube, show some respect.
Notably, DrDisrespect sought support and love from YouTube, as he claims to have had a massive impact on the streaming platform as a content creator. Regarding this, he said, “The impact we’ve had on YouTube streaming growth is insane.” He further complained, “We’ve been taken advantage of…Jesus YouTube, show some respect.”
Why can’t The Two Timer leave YouTube?
There is a reason why DrDisrespect can’t leave YouTube, and that is Twitch. In June 2020, when Twitch banned Doc for still-undisclosed reasons, the streamer had to turn to YouTube as his only streaming outlet.
Although many streamers switch from Twitch to YouTube for better pay and working conditions, Doc wasn’t given that deal because he joined YouTube without a contract. Had he signed an exclusive contract with YouTube Gaming, the story would likely be different today.
It’s insane to see how the industry turned their back on you after twitch deal, no one wants to support you in any way shape or form, the only support system you have is ChampionsClub, Midnight Society, BoomTV Staff, and the growth on other social media platforms. But not one else
But since YouTube has not offered him a contract to date, the question arises: will DrDisrespect be able to survive such harsh competition? Some may say it is unlikely he will stay relevant in the streaming community, but fans know that DrDisrespect is a fighter and will continue his journey no matter what.
Popular livestreamer DrDisrespect criticized his platform yet again, citing his channel’s growth despite YouTube allegedly showing him no support.
DrDisrespect was famously banned permanently from Twitch in June 2020, with the reason for his indefinite suspension still unknown. Only weeks later, DrDisrespect found a new home for livestreaming on YouTube. Though not by choice, DrDisrespect was among the first major streamers to leave Twitch for another platform.
Since joining YouTube, the former streamer of the year has continued to build his audience and stands as one of the platform’s most popular livestreaming creators. In a recent tweet, however, DrDisrespect claimed that the success he’s seen has come without the support of YouTube.
Tweeting from his personal Guy Beahm account, DrDisrespect wrote about his alter ego’s time on YouTube. “It’s amazing to think the platform Doc streams on doesn’t support him one bit,” he wrote. “No follow, zero communication, absolutely no love. The impact we’ve had on YouTube streaming growth is insane. We’ve been taken advantage of…Jesus YouTube, show some respect.”
It’s amazing to think the platform Doc streams on doesn’t support him one bit.
No follow, zero communication, absolutely no love.
The impact we’ve had on YouTube streaming growth is insane.
We’ve been taken advantage of…. Jesus YouTube, show some respect.
This is not the first time DrDisrespect has been openly critical of YouTube during his relatively short tenure on the platform. In May 2022, the streamer notably unfollowed YouTube after stating that the platform does not support its livestreaming division whatsoever.
Since DrDisrespect’s switch, countless other streamers such as Ludwig, Valkyrae, Sykkuno, CourageJD, TimTheTatman, and more have signed exclusive deals with YouTube. Though the platform’s roster of streamers has grown significantly, many creators and viewers alike have protested against the website’s support structure for streamers.
Though content with his success, DrDisrespect clearly thinks YouTube could be doing more to support its creators.
If you’re shopping for a VPN, you will have noticed VPN providers’ frequent mention of the Five, Nine, and Fourteen Eyes agreements in their marketing materials. What “eyes” are these, though, and how do they impact your privacy? Let’s take a look into a world filled with spooks, shady deals, and acronyms written in ALL CAPS.
The Five, Nine, and Fourteen Eyes
The Five, Nine, and Fourteen Eyes are agreements between the surveillance agencies (the “eyes”) of several countries. The original group is the Five Eyes (abbreviated as FVEY), consisting of the US, the UK, Canada, Australia, and New Zealand, which shortly after the Second World War signed a deal (the UKUSA pact) to share intelligence among themselves.
Over the years, four other countries informally joined the original five (the Netherlands, France, Denmark, and Norway), making nine.
A few years later, five more joined (Belgium, Italy, Germany, Spain, and Sweden) to come to the grand total of 14.
However, the three groups differ in what they share with each other.
Differences Between the Five, Nine, and Fourteen Eyes
Naturally, deals struck between spies aren’t accessible to regular people, but we do know a fair bit about these three groups, especially the original five. This is because their founding document, the UKUSA agreement, was made public in 2010. The British National Archives has the full text.
Probably the most important thing to highlight is that this deal isn’t explicitly between the governments of the countries involved, but between their spy agencies, specifically those tasked with what’s called signals intelligence, or SIGINT in spy-speak, which boils down to communications surveillance like wiretapping. In the case of the US, it’s the agency now called the NSA, while in Britain, this role is filled by GCHQ.
Of course, most of the governments involved were aware of the deal, though not all. The Australian government was kept in the dark until 1973, for example, which gives you an idea of the impunity with which these surveillance agencies were operating.
The purpose of the Five Eyes was and is to automatically share information through the STONEGHOST network, as well as to share technology and methods. The other two associations, the Nine and Fourteen Eyes, are one and two steps removed from this inner circle, respectively.
Again, details are sketchy, but it appears the four extra members that make up the Nine Eyes have to request permission to get information and don’t receive everything, while the five that make up the Fourteen Eyes get even less.
On top of these “official” members, there also seem to be deals in place with countries like Israel and South Korea, though we don’t know much beyond that.
The Purpose of the Five, Nine, and Fourteen Eyes
The reason these surveillance agencies set up these agreements was, initially at least, simply to share information and methods. All these countries are close allies, so it makes sense that they work from the same set of facts and knowledge. The worry, however, is not that they work together against common adversaries, but that they work against their own populations.
In 2013, Edward Snowden, a former NSA contractor, revealed to the world that Western governments were spying on their own people en masse. Agreements like the Five Eyes greatly aided that information gathering, not only through sharing data on countries’ citizens but also through more direct means.
For example, the NSA and GCHQ aren’t allowed to listen in on their own citizens’ communications without a warrant. So, if GCHQ wants to listen in on a British citizen’s phone calls, it can ask the NSA to do it, since the NSA isn’t bound by the same rules when it comes to British citizens. In return, GCHQ can listen in on US citizens’ calls for the NSA.
Protection Against Surveillance: VPNs
As you can imagine, many people the world over were shocked to find out that not only were their governments spying on them, but that they did so quite blatantly and never really stopped, even after the Snowden leaks. In response, many people turned to tools to protect their online communications.
First and foremost among these tools are virtual private networks (VPNs), which encrypt your internet connection and thus make it very difficult for any third party, be they spies or marketers, to see what you’re doing online. VPN providers, unsurprisingly, jumped at the free marketing the spy agencies were giving them and advertised their products as a great way to prevent this kind of snooping.
It should be said that this is true: if you’re worried about surveillance, whether from the government or from elsewhere, a VPN is a great tool to use. It’s not the only one, nor is it bulletproof, but it’s a good option, especially when combined with other privacy measures.
Does It Matter if My VPN Is Based in the Five Eyes?
However, many VPNs go a step further than these claims and will tell you that any VPN based within the jurisdiction of the Five, Nine, or Fourteen Eyes is dangerous for users. We disagree: if the VPN you’re using is a trustworthy no-log VPN, one that doesn’t keep a record of what you’ve been up to online, then it doesn’t really matter where it’s based.
The whole point of a VPN is to avoid being tracked, so as long as the VPN itself is trustworthy, it doesn’t matter where it is based. The only exception is countries where VPN use itself is illegal (a pretty short list, thankfully). Other than that, you should be okay. That said, if you’re particularly worried, you could always use a VPN that lets you sign up anonymously. That way, you can be sure nobody can track you, no matter how many eyes they have.
Two new Samsung Galaxy Watches launched at the company’s August Unpacked event. The Galaxy Watch 5 will replace last year’s Galaxy Watch 4, and a new Watch 5 Pro model brings beefier battery life and a more premium titanium body. CNET Editor at Large Scott Stein’s Galaxy Watch 5 impressions article focuses on some of these refinements, but here I’ll compare the specs of the two Galaxy Watch 5 models against last year’s Galaxy Watch 4 – with a handy chart at the end.
As with the previous model, the Watch 5 comes in two sizes: 44mm and 40mm. The smaller 40mm model means a lighter design that you might find more comfortable to wear, but it also means a smaller display and a smaller battery inside. You won’t have to compromise on any health-tracking features, with blood oxygen tracking, sleep tracking, and a new skin temperature sensor featured in both models.
The Watch 5 Pro comes in just the larger 44mm size, packing the same 1.36-inch display as the 44mm Watch 5 but adding a much larger 590-mAh battery, which could keep the Pro model going for a few days between charges. The titanium rather than aluminum design will also appeal to outdoor enthusiasts who want a more rugged smartwatch. That extra battery does come with a weight tradeoff, however, with the Pro model tipping the scales at 46 grams against the Watch 5’s 32.8 grams.
Other specs like the various health-tracking tools, the waterproofing, memory, storage and Wear OS operating system are the same across all new models.
Galaxy Watch 5 and 5 Pro specs comparison chart
| Spec | Galaxy Watch 5 Pro | Galaxy Watch 5 | Galaxy Watch 4 |
| --- | --- | --- | --- |
| Screen size | 1.36-inch | 1.36-inch (44mm); 1.19-inch (40mm) | 1.4-inch (44mm); 1.2-inch (40mm) |
| Screen resolution | 450×450 pixels | 450×450 pixels (44mm); 396×396 pixels (40mm) | 450×450 pixels (44mm); 396×396 pixels (40mm) |
| Dimensions | 45.4 x 45.4 x 10.5mm | 44.4 x 43.3 x 9.8mm (44mm); 40.4 x 39.3 x 9.8mm (40mm) | 44.4 x 43.3 x 9.8mm (44mm); 40.4 x 39.3 x 9.8mm (40mm) |
When it comes to iconic notebook designs, the name Lenovo may come up often, but it’s usually for the company’s corporate ThinkPads. In the world of 2-in-1 convertible laptops, however, Lenovo’s Yoga consumer line has been setting the agenda for a decade. The upscale Yoga 9i Gen 7 currently holds our Editors’ Choice award among premium convertibles, and the 14-inch Yoga 7i Gen 7 (starts at $879.99; $949.99 as tested) matches that machine’s excellence at a more affordable price. The 14-inch size is arguably perfect for a system that’s usable in laptop mode but small enough to tote around as a tablet, and the latest Yoga 7i 14 is a beautifully crafted 3.2-pound portable that earns an Editors’ Choice nod of its own. It may be the best Yoga yet.
Lenovo’s 7th Gen, Intel’s 12th
The $879.99 base model of the Yoga 7i 14 Gen 7 combines one of Intel’s latest Core i5-1235U processors, 8GB of memory, a 512GB PCIe 4.0 solid-state drive, and what Lenovo calls a 2.2K (2,240-by-1,400-pixel) IPS touch screen. Our $949.99 test unit bumps the processor up to an Intel Core i7-1255U and doubles the RAM allotment to 16GB. Other options include a more powerful Core i7-1260P CPU and a 1TB SSD. The flagship model swaps out the IPS panel for an OLED display with sharper 2,880-by-1,800-pixel resolution and 400 rather than 300 nits of brightness, selling for $1,799.99.
Available in Storm Blue or Arctic Gray, the Yoga 7i 14 is made of light but strong anodized aluminum, a sleek slab with rounded edges that are extremely comfortable to hold (and let you type without feeling as if the edge of the keyboard deck is going to slash your wrists). It measures 0.68 by 12.5 by 8.7 inches, nearly matching its rival the Dell Inspiron 14 7415 2-in-1 (0.71 by 12.7 by 8.4 inches), but is a fraction lighter at 3.2 versus 3.4 pounds.
There are plenty of ports for such a compact convertible. On the left side, you’ll find an HDMI video output, two USB-C Thunderbolt 4 ports, and a microSD card slot.
A USB 3.2 Type-A port is on the right, along with an audio jack for headphones or headsets and the power button. The assortment is a welcome contrast to ultraportables like the Apple MacBook Air and Dell XPS 13 Plus that offer only a couple of Thunderbolt 4 ports, forcing you to plug in an adapter or hub to use an external monitor or USB-A flash drive. Wireless support is also state of the art, with Wi-Fi 6E and Bluetooth 5.2.
Looking (and Sounding) Good
The Yoga 7i is all about the screen, which does double duty as laptop display and tablet touch screen, and our test unit’s high-quality 14-inch IPS panel does the job. The screen’s 16:10 aspect ratio is a bit taller than the familiar 16:9 ratio, requiring a bit less scrolling, and works well in tablet mode. The glossy display provides 10-point touch as well as active stylus support, though we were disappointed that the pen isn’t included.
Whether I was working on documents or watching videos, the display was colorful and sharp, looking especially vivid and fine when viewing HDR content on Netflix and other sources. Our objective tests backed up Lenovo’s claims, with the panel registering a full 100% of the sRGB color gamut and 324 nits of peak brightness. It should also be comfortable for long-term use, thanks to low-blue-light technology that minimizes the part of the spectrum most likely to fatigue or damage eyes.
Audio quality is just as good, thanks to a combination of stereo speakers, dual woofers, and a pair of tweeters. The array supports Dolby Atmos and a provided Smart Amplifier boosts volume when needed.
Keyboards have long been a Lenovo strength, and the Yoga 7i 14 Gen 7 is no exception. The keys offer a supremely comfortable typing feel, with a good depth of travel, substantial springiness with every keystroke, and Lenovo’s signature scalloped key design that’s both visually appealing and pleasantly tactile. Below the keyboard is a generous extra-wide touchpad, with a smooth glass surface and support for multitouch gestures. On a notebook without touch-screen and tablet capability, the pad alone would be great for comfortable navigation. On the touch-centric Yoga, it’s a welcome flourish that enhances the laptop experience.
Just above the display is a subtle raised section that Lenovo calls the Communication Bar. Besides providing a small lip that makes it easier to open the lid and get purchase on the smooth rounded edges, the bar houses the Windows Hello-compatible webcam without an Apple-like notch dipping into the screen area. The webcam offers better-than-average picture quality with 1080p resolution and has a built-in privacy shutter. Combining so many features in such a small, unobtrusive space is impressive.
Testing the Yoga 7i 14 Gen 7: Lightweights Handling Heavy Benchmarks
For our performance measurements, we pitted the Yoga 7i 14 against its convertible competitors the Dell Inspiron 14 7415 2-in-1 and Lenovo’s own step-up Yoga 9i, another Gen 7 model from earlier this year. We also compared it to the non-convertible HP Pavilion Plus 14 and 13.6-inch Apple MacBook Air M2, which may not have touch capability but are among the best compact travelers we’ve tested recently.
We test Windows laptops’ overall productivity with UL’s PCMark 10, which simulates everyday tasks like word processing, web browsing, and videoconferencing. Geekbench 5 is a more CPU-focused test that performs similar simulations including PDF rendering and speech recognition, while Maxon’s Cinebench uses that company’s Cinema 4D engine to render a complex image stressing all of a processor’s cores and threads.
Two other benchmarks combine CPU measurement with suitability for creative apps: HandBrake encodes a 12-minute clip of 4K video to a more compact 1080p file, while workstation vendor Puget Systems’ PugetBench for Adobe Photoshop uses the Creative Cloud 22 version of Adobe’s famous image editor to execute a variety of general and GPU-accelerated imaging tasks ranging from opening, rotating, and resizing a photo to applying masks, gradient fills, and filters. Low times in HandBrake and high scores in PugetBench indicate better suitability for digital content creation.
In our productivity-focused benchmarking, the Yoga 7i fell in the middle of what’s admittedly a high-performing pack. The affordable Yoga trailed the MacBook Air and Pavilion Plus but edged ahead of its rival Dell in most tests.
To test systems’ graphics capabilities, we use two game-like benchmarks from each of two test suites: the DirectX 12 subtests Night Raid and Time Spy from UL’s 3DMark for Windows, and the 1440p Aztec Ruins and 1080p Car Chase subtests from the cross-platform GFXBench. The latter two are rendered offscreen to accommodate different display resolutions.
Both the Yoga 7i 14 and HP Pavilion Plus 14 rely on Intel’s Iris Xe integrated graphics, which makes them suitable for casual gaming and streaming video but not a match for the discrete GPU of a true gaming laptop. The Apple M2 chip in the MacBook Air boasts more capable graphics performance.
Finally, we test laptops’ battery life by looping the open-source Blender short video Tears of Steel, with Wi-Fi and keyboard backlighting off, display brightness at 50%, and audio volume at 100% until the system quits. We also use a Datacolor SpyderX Elite colorimeter and software to measure notebook screens’ color coverage and brightness in nits (candelas per square meter).
The Yoga 7i 14 is no slouch when it comes to battery life, showing 14 hours of unplugged stamina in our video rundown. Its screen also delivers great visual quality for a reasonably priced laptop, though it doesn’t match the showpiece OLED panels of the Yoga 9i and HP Pavilion Plus or the Retina display of the MacBook Air.
One Class Convertible
The Lenovo Yoga 7i 14 Gen 7 brings Intel’s latest silicon to a great 2-in-1 laptop, but it’s more than just a processor upgrade. The new model features some of the best industrial design we’ve seen, from its comfortably sculpted chassis to the not-a-notch webcam bar. Its performance and battery life rank with the best mainstream convertibles, and the whole package comes together so well that it’s a standout. This is a first-class 2-in-1 laptop that earns a PCMag Editors’ Choice award.
It’s been a few weeks since Google unveiled the mid-ranger Pixel 6a, bringing several notable upgrades over previous Pixel-A series phones. Besides the availability of the Android 13 beta, the factory images and the kernel sources for the Pixel 6a have been published as well, which are just the right ingredients for the modding enthusiasts to start tinkering with the device.
We’re starting to see more and more people getting their hands on the Google Pixel 6a, so for those of you looking for some help in rooting your device, here is a simple step-by-step guide. It will walk you through how to unlock the bootloader of the Pixel 6a and gain root access on the phone using Magisk. TWRP, the most popular custom recovery, will take some time to be ported to the latest Pixels, so the current rooting method is a bit more involved than what you might be used to.
Google Pixel 6a XDA Forums
You can root the Google Pixel 6a by patching its boot image with Magisk.
To flash the patched boot image, you have to unlock the bootloader of the Pixel 6a.
Unlocking the bootloader will wipe your Pixel 6a.
How to root the Google Pixel 6a
Before we get into how to root your Pixel 6a, you are going to want to do a few things. First, you will want to back up all the data on your phone. That’s because rooting requires unlocking the bootloader, which wipes all the data on your phone, including not only installed apps but also all files saved to the internal storage.
You also want to make sure you have about 5GB of available storage on your PC, as you will need to download the factory image for your phone. After you’re done, though, you can delete these files to free up space. Nonetheless, it’s a good idea to keep the latest factory image saved in case you have any problems in your post-root adventure and need to restore to stock.
It is important to note these steps may not work on US carrier models of the Google Pixel 6a. Verizon, for example, likes to prevent bootloader unlocking altogether, making it impossible to root your phone. Sometimes, though, people find unofficial workarounds, and we’ll let you know if any are found.
Step 1 – Get the stock boot image for the Pixel 6a
Before we can root, we need to get our hands on the stock boot image that matches the current software build the phone is running. We will patch this boot image with Magisk.
To get the boot image, you need to extract it from the Pixel 6a factory image, a file that contains all the images of your phone needed to make a full restore. To make sure you download the right factory image, you need to check which software version your phone is currently running. To check this, go to Settings > About phone. At the bottom, look for the build number section. Find the matching build number on the factory image download page and download that file.
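If you already have ADB set up, you can also read the build ID from a terminal instead. This is an optional shortcut, assuming a connected device with USB debugging enabled:

adb shell getprop ro.build.id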
Download Android 12 for Google Pixel phones || Download Android 13 for Google Pixel phones
Next, extract the factory image ZIP file. Locate the image-bluejay-[version].zip file (yes, there’s a ZIP within a ZIP) and extract the boot.img file from it. This is the stock boot image, which you need to transfer to your phone’s storage.
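If you prefer to script the extraction, here is a minimal Python sketch using the standard-library zipfile module. The factory image filename below is a placeholder; use the one you actually downloaded:

```python
import zipfile

# Placeholder filename: substitute the factory image for your exact build
factory_zip = "bluejay-xxxxxx.xxxxxx.xxx-factory-xxxxxxxx.zip"

with zipfile.ZipFile(factory_zip) as outer:
    # Find the nested image-bluejay-[version].zip inside the factory archive
    inner_name = next(n for n in outer.namelist() if "image-bluejay" in n)
    with outer.open(inner_name) as nested, zipfile.ZipFile(nested) as inner:
        # Pull just boot.img out of the nested archive
        with inner.open("boot.img") as src, open("boot.img", "wb") as dst:
            dst.write(src.read())

print("Extracted boot.img")
```

From there, adb push boot.img /sdcard/Download/ moves it to the phone’s storage.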
Step 2 – Patch the stock boot image using Magisk
With the boot image file on your phone, you next need to download and install the latest Magisk app. You can even patch the image on a different Android device than the Pixel 6a, but you need to install the Magisk app on that secondary device as well.
Download Magisk
In the Magisk app, tap the Install button on the topmost card. Choose Select and Patch a File under Method; this will open the Android file picker. Find the boot.img you transferred from your PC and select it. The Magisk app will write the patched image to the Download folder on the phone. You must transfer this patched file (it should be named “magisk_patched_[random_strings].img”) back to your PC, because next we’re going to unlock the bootloader, which will wipe all data, as warned previously.
Notably, if you browse the XDA Forums for the Pixel 6a, you may be lucky enough to find a pre-patched boot image. It might save you the hassle of performing steps 1 and 2, but make sure that any Magisk-patched boot image you download matches your software build version; otherwise, you will face anomalies after flashing. That’s why we always recommend grabbing the official firmware and patching the stock boot image yourself.
Step 3 – Enable OEM unlocking and unlock the bootloader
In order to flash third-party software on the Pixel 6a, we have to unlock the bootloader. To do so, go to Settings > About phone > Build number and tap the entry 7 times to enable Developer options. After enabling it, go back to the main settings page, tap System, and then go to Developer options. From there, toggle the OEM unlocking option. Keep in mind that you need to enter your password/pattern/PIN to confirm some of these actions.
After enabling OEM unlock, turn off your phone. Hold both the Volume down and Power buttons to turn your phone back on and boot into the bootloader menu. Assuming you have the latest ADB and Fastboot binaries installed already, you can also use the following command to reboot to the bootloader mode directly from Android.
adb reboot bootloader
Make sure to keep your phone plugged into your PC/Mac/Chromebook. Next, in a terminal window, type:
fastboot flashing unlock
You will see a screen telling you that you are about to initiate the bootloader unlocking process. Use the volume button to navigate and the power button to accept. Again, this will wipe all the data on your phone, so make sure you have your data backed up before proceeding.
You’ll see the warning every time you boot your phone after unlocking the bootloader
Step 4 – Flash Magisk-patched boot image
After the bootloader of your Pixel 6a is unlocked and your boot image is patched, you are just one step away from root.
As soon as the bootloader unlocking process completes, the phone will boot back up after a few minutes. Skip the setup wizard at this stage and turn off the phone. You now want to boot back into the bootloader by holding the volume down and power buttons again. Once you are there, connect the phone to your PC/Mac/Chromebook and execute the following command:
fastboot flash boot path/to/magisk_patched.img
As soon as you hit Enter, the patched boot image will be flashed to your phone. Next, reboot using fastboot reboot, and the Magisk app should appear on your home screen and/or app drawer. If it doesn’t (e.g. you only see a stub icon), just install the Magisk APK manually. This is all it takes to root your Pixel 6a.
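To verify that root actually works, you can run a quick sanity check from your PC; approve Magisk’s superuser prompt on the phone when it appears:

adb shell su -c id

If the output begins with uid=0(root), Magisk root is active.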
Keep in mind that you’ll have to repeat steps 1, 2, and 4 every time you update your phone because the boot image changes with each update.
What’s next?
If you’re looking for things to do with your newly rooted Pixel 6a, take a look at our curated list of the best root apps. Once your device is up and running with Magisk, you can also try out some of the best Magisk modules to seamlessly apply complex mods without touching the underlying system.
The Google Pixel 6a is a mid-range smartphone with Google Tensor and a high-end camera.
The Californian organization Earth Species Project wants to decipher the language of animals. Artificial intelligence is supposed to make this possible.
The Earth Species Project (ESP) relies on open source. It is a non-profit organization founded in 2017, funded in part by donations from LinkedIn co-founder Reid Hoffman. The organization’s central concern is decoding non-human language.
The ten-person team believes that understanding non-human languages will deepen our connection to other species and strengthen our ability to protect them, and thus positively change our ecological footprint. ESP aims to achieve its goal in our lifetime. Along the way, it also wants to develop other technologies that are already helping biology and conservation.
Earth Species Project focuses on large-scale language models
If ESP has its way, artificial intelligence will enable the understanding of non-human language. Using machine learning to analyze communication and other behaviors in the animal kingdom is not new. A research group led by the University of Copenhagen demonstrated an AI system that analyzes pig grunts. Project CETI aims to translate sperm whale calls. DeepSqueak helps understand the calls of mice and rats.
The ESP team has set its sights much higher. It’s not about decoding the communication of a single species, but of all species. “We are species agnostic,” says Aza Raskin, co-founder of ESP. “The tools we develop (…) can work across all biology, from worms to whales.”
Raskin, his co-founders, and the team draw their inspiration from recent advances in natural language processing. Raskin cites work showing that machine learning can translate between numerous languages even without prior knowledge as the motivating intuition for ESP.
Communication is a multitude of vectors in multidimensional space
Algorithms that geometrically represent words or word components in multiple dimensions form the basis of these successes. Distance and direction to other words in space thereby represent rudimentary semantic relations of individual words to each other. In 2017, several publications showed that translations can be generated by superimposing the geometric representations of two languages.
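To illustrate the idea (this is a generic textbook method, not ESP’s code): given a small seed dictionary of word pairs whose embeddings are stacked into matrices X and Y, the best rotation mapping one space onto the other has a closed-form solution known as orthogonal Procrustes:

```python
import numpy as np

def align_embeddings(X, Y):
    """Orthogonal Procrustes: find the rotation W minimizing ||X @ W - Y||.

    X and Y are (n_pairs, dim) matrices holding the embeddings of word
    pairs known to translate to each other (the seed dictionary).
    """
    # The SVD of the cross-covariance matrix yields the optimal rotation
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# With W = align_embeddings(X, Y), any source-language vector v maps into
# the target space as v @ W; its nearest neighbors there are candidate
# translations.
```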
Another advance was made, for example, in a 2018 paper from Facebook’s AI Lab. The team combined self-supervised training with back-translations. They achieved high translation quality without prior knowledge, by the standards of the time.
Today, giant language models translate up to 200 languages simultaneously, such as Meta’s NLLB-200. The ESP team wants to enable such representations for animal communication, both for single species and for numerous species simultaneously.
According to Raskin, this should also include non-verbal forms of communication such as bee dances. Such large-scale models could then be used to investigate, for example, whether there is overlap in geometric representations between humans and other creatures.
“I don’t know which will be more incredible – the parts where shapes overlap and we can directly communicate or translate, or the parts where we can’t,” Raskin said.
AI can help take off the human glasses
Raskin compares the journey to such a model to the journey to the moon: the road will be long and hard. Along the way, meanwhile, there are many other problems to solve, and ESP has some ideas about how it plans to tackle them.
In a recently published paper, for example, the team looks at the “cocktail party problem.” This is basically about identifying individual voices in a social environment. Google, Meta and Amazon, for example, use AI solutions to this problem to better recognize voice input for their digital assistants.
The cocktail party problem also exists in the study of non-human communication, the team says. In their work, they provide an AI algorithm that can isolate individual animal voices in a natural soundscape.
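ESP’s algorithm targets natural bioacoustic recordings, but the classic version of the problem can be illustrated with independent component analysis (ICA), which unmixes signals recorded by multiple microphones. A toy sketch with scikit-learn, not the team’s method:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 4000)

# Two toy "voices": a sine wave and a sawtooth wave
s1 = np.sin(2 * np.pi * t)
s2 = 2 * (t % 1.0) - 1.0
S = np.c_[s1, s2]

# Each microphone records a different mixture of both sources
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
X = S @ A.T + 0.02 * rng.standard_normal(S.shape)

# FastICA recovers the sources up to scale and permutation
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)
print(S_est.shape)  # (4000, 2): two separated signals
```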
In another project, an AI system generates random variations of humpback whale calls and analyzes how the whales respond. The goal is to develop a system that learns to distinguish random changes from semantically meaningful ones. This brings us a step closer to understanding humpback whale calls, Raskin believes.
Yet another project will involve a self-supervised AI system learning the song repertoire of the Hawaiian crow, while another will use so-called ethograms to record all possible behavior patterns, their frequency, and the general conditions for a species.
Will AI alone ultimately be enough to enable communication with other species? Raskin believes that AI will at least bring us a big step closer. Many species communicate in much more complex ways than previously thought. AI could help gather enough data and analyze it on a large scale, he said. Eventually, we might be able to take off our human glasses and understand entire communication systems, Raskin said.
LinkedIn has introduced new tools for people looking to share photos and videos on its platform.
That might seem like a strange announcement from a company dedicated to helping people share a semi-public version of their resume, form professional relationships, and look for jobs. But it turns out LinkedIn users have started to share more photos and videos on the platform.
The company says it’s seen a 20% year-over-year increase in “people adding visual content in their posts on LinkedIn.” So now it’s rolling out new features “to make it even easier to create visual content that helps you stand out and inspire your professional community.” (Emphasis theirs.)
The first of those features: clickable links in photos and videos. These links are displayed as buttons that LinkedIn users can resize and reposition to fit the composition of their “visual content,” thereby giving viewers one-click access to a “website, an upcoming event, recent newsletter, or other resources.”
LinkedIn has also created a variety of templates people can use to “easily create visually engaging content” by adorning their posts with their choice from “dozens of customizable backgrounds and fonts.” (Which is similar to the custom backgrounds Facebook allows people to use with posts on that platform.)
Both of those features are supposed to roll out “over the coming weeks.”
LinkedIn is also testing another feature, “carousels,” that it describes as “a new content format that allows you to mix images and videos to help your community learn in a digestible way.” The company says it’ll be “experimenting with carousels to see how members engage with it over the coming months.”