Purplefeed

AI News 🤖

Publisher pulls horror novel ‘Shy Girl’ over AI concerns

Hachette Book Group said it will not be publishing “Shy Girl” over concerns that artificial intelligence was used to generate the text.

2 sources

1 hour ago - 19:30

Why Wall Street wasn’t won over by Nvidia’s big conference

Despite investor fears of an AI bubble, Nvidia's latest conference shows that most in the industry aren't concerned by that possibility.

4 hours ago - 16:28

OpenAI reportedly plans to double its workforce to 8,000 employees

While other tech companies have been laying off employees year after year, OpenAI is doing the opposite. According to a report from the Financial Times, the AI giant is looking to expand its workforce to 8,000 employees by the end of 2026, nearly doubling staff from its current headcount of 4,500. The FT reported that the new hires will be across several departments, including product development, engineering, research and sales. OpenAI's hiring spree will also include "specialists" for "technical ambassadorship," or employees tasked with helping businesses better utilize its AI tools, according to the report. As the FT noted, OpenAI is likely trying to amp up the competition against Anthropic and its Claude AI chatbot. According to the AI Index from Ramp, a fintech startup that manages corporate expenses, businesses are now 70 percent more likely to go with Anthropic than with OpenAI when buying AI services for the first time. OpenAI made waves in February when it announced a contract with the Department of Defense to use its AI models, following a public fallout between Anthropic and the federal agency. On top of the government contract, OpenAI is also in "advanced talks" with private equity firms like Brookfield Asset Management to deploy its AI tools across a firm's portfolio of companies, according to Reuters. This article originally appeared on Engadget at https://www.engadget.com/ai/openai-reportedly-plans-to-double-its-workforce-to-8000-employees-161028377.html?src=rss

4 hours ago - 16:10

DNA building blocks on asteroid Ryugu, bacteria that eat plastic waste, and more science news

Remember when Japan sent a spacecraft to an asteroid 180 million miles away to scoop some dirt off the surface? Six years on from its return to Earth, that sample has yielded some insights about what may have seeded life on our planet. Read on to learn more about the latest findings, and other science news we found interesting this week.
DNA ingredients on Ryugu
In 2020, a capsule from the Japanese space probe Hayabusa2 returned to Earth with samples collected from the surface of asteroid Ryugu, and scientists have spent the subsequent years analyzing those materials for clues about the conditions that existed in the early solar system. This week, researchers from Japan reported an exciting discovery: the Ryugu samples contain the five building blocks of DNA and RNA. The findings, coupled with those from other recent studies, could put us closer to understanding how the ingredients for life first made it to Earth billions of years ago. The study, published in the journal Nature Astronomy, found the nucleobases adenine, guanine, cytosine, thymine and uracil — all of which were also found in samples gathered from a different asteroid, Bennu, last year, and before that in meteorites dubbed Murchison and Orgueil. This suggests these nucleobases were widespread in the early solar system, and supports the hypothesis that carbonaceous asteroids like Ryugu and Bennu transported them to Earth, the authors explain in the paper. Ammonia was discovered in the samples as well, which may play a role in how these nucleobases formed. The discovery of these building blocks "does not mean that life existed on Ryugu," Toshiki Koga, the study's lead author from the Japan Agency for Marine-Earth Science and Technology, told AFP. "Instead, their presence indicates that primitive asteroids could produce and preserve molecules that are important for the chemistry related to the origin of life." 
Bacteria collaborate to eat plastic waste
Researchers in Germany have identified a trio of bacteria that can digest a common plastic additive, but only when working together. The study, published in the journal Frontiers in Microbiology, found that a "consortium" of bacterial strains (two from species in the genus Pseudomonas and one from Microbacterium) was able to break down several phthalate esters (PAEs), which are often used to make plastic materials more flexible. These chemicals are increasingly finding their way into the environment as plastic pollution grows, and research suggests they can have harmful effects on human health and that of wildlife. The team focused on microbes that could be found right at home in their own lab, taking a sample of biofilm that had formed on the polyurethane tubing of a bioreactor. This sample was then incubated in a growth medium containing the PAE diethyl phthalate (DEP) as the main source of carbon and energy. They eventually ended up with a stable culture of bacteria that could break down DEP, as long as the DEP concentration didn't exceed 888 milligrams per liter, according to a press release. The consortium could gobble up all the DEP in 24 hours at 30 degrees C. It was also able to grow on the PAEs dimethyl phthalate, dipropyl phthalate and dibutyl phthalate. The researchers identified the bacteria in the consortium through DNA sequencing, but found that they were not individually able to tackle the PAEs, suggesting they break down the chemicals through a "cooperative process" known as cross-feeding. The consortium could make for another tool in the pollution-fighting toolbox, with potential to help break down PAEs in contaminated areas or speed up the degradation of plastics that contain PAEs by making them more brittle. "This approach may also be effective in treating industrial plastic waste streams," they note. 
Hubble witnesses a breakup
Newly released images from the Hubble Space Telescope show the unexpected breakup of Comet C/2025 K1 (ATLAS) — Comet K1, for short — as it made its way out of the solar system back in November. A team of researchers that initially set out to observe a different comet ended up switching targets due to technical issues, only to catch Comet K1 right after it started crumbling. Hubble captured three 20-second images between November 8 and 10, 2025, the first of which the team estimates was about eight days after the fragmenting started. During the observation period, one of the comet's smaller pieces began to break up too. Talk about being in the right place at the right time. "Never before has Hubble caught a fragmenting comet this close to when it actually fell apart," said John Noonan, a research professor in the Department of Physics at Auburn University, in a statement. "Most of the time, it’s a few weeks to a month later. And in this case, we were able to see it just days after." You can read more about the rare sighting here. Before you go, be sure to check these stories out too:
• States are suing the EPA for relinquishing its role as a greenhouse gas emissions regulator
• Blue Origin also wants to put AI data centers in space
This article originally appeared on Engadget at https://www.engadget.com/science/dna-building-blocks-on-asteroid-ryugu-bacteria-that-eat-plastic-waste-and-more-science-news-150000975.html?src=rss

5 hours ago - 15:00

The AirPods Pro 3 are $50 off right now, nearly matching their best-ever price

Less than a week ago, Apple announced the forthcoming AirPods Max 2, a pair of over-ear headphones that leverage the company’s H2 chip for AI-powered live translation, conversation awareness, and a host of newer features. However, if you’re okay with a pair of earbuds, the AirPods Pro 3 offer access to all the same features […]

5 hours ago - 14:54

Twitter turned 20 and I feel nothing

Twitter is officially 20 years old. In another reality, that might make me kind of nostalgic. I've been lurking and scrolling and tweeting for 16 years, most of my adult life. There was a time when Twitter was a place where some internet strangers became my IRL friends, when I was excited to "live-tweet". When my infinitely more well-adjusted friends would send me memes, I would smugly say "I saw that on Twitter days ago." Twitter stopped being that place a long time ago, but I don't have any nostalgia for it. I don't really feel anything at all, actually. Because I can already hear the comments: Yes, I'm still on X. I don't spend as much time there as I did a decade ago, but it's still quite a lot of time, an unhealthy amount, if I'm being honest. My job is to report on social media companies, so I keep (doom)scrolling. That's what I tell myself anyway. A few of my favorite posters are still around. Dril's still got it. The memes are still, occasionally, good, even though X's recommendation algorithm seems to prefer pointing me toward endless AI slop, boring hot takes from thirsty mid-tier tech execs and blatant engagement bait. X's algorithm — what little we can learn about it, anyway — now relies on Grok's predictions about what you'll like. The same Holocaust-loving Grok that has spewed racism and referred to itself as MechaHitler and declared Elon Musk "the single greatest person in modern history." The same Grok that allegedly generated thousands of images of child abuse material. Hey @grok is that true? X is not Twitter but it's also not not-Twitter. Last year, an online marketplace startup bought the 560-pound Twitter bird that once adorned the company's San Francisco office and blew it up in a Nevada desert surrounded by Tesla Cybertrucks as part of an elaborate publicity stunt. Dumb? Yes. But also a somehow fitting adieu for "Larry." 
just setting up my twttr — jack (@jack) March 21, 2006 It's been 20 years since Jack Dorsey sent the first-ever tweet, which was never even a good tweet anyway. It's been five years, by the way, since he turned that tweet into an NFT (remember NFTs??) and auctioned it for nearly $3 million. It's now functionally worthless. Another chapter in Dorsey's confusing, complicated legacy. This article originally appeared on Engadget at https://www.engadget.com/social-media/twitter-turned-20-and-i-feel-nothing-140000602.html?src=rss

6 hours ago - 14:00

Aiper Scuba V3 Pool Robot Review: Eye on the Prize

Now outfitted with AI computer vision, this new pool cleaner can actively search for debris.

9 hours ago - 11:02

I Tried DoorDash’s Tasks App and Saw the Bleak Future of AI Gig Work

I recorded videos of myself doing laundry, scrambling eggs, and walking around the park in DoorDash’s new Tasks app, where gig workers are paid to train AI.

3 sources

9 hours ago - 11:00

DoorDash will start paying gig workers for creating content to train AI models

DoorDash has launched a new option for its gig economy workers to earn some extra cash. The delivery service introduced Tasks, which it describes as "short activities Dashers can complete between deliveries or in their own time." Examples include taking pictures of restaurant dishes or recording videos of unscripted conversations in languages other than English. These materials will be used to train artificial intelligence and robotics models. A representative from DoorDash told Bloomberg News that it will use Tasks content for evaluating its in-house AI models as well as those made by its partner companies in retail, insurance, hospitality and tech. DoorDash is piloting a standalone app for Tasks where Dashers will submit their content. The blog post notes that pay will be displayed upfront, and compensation will vary based on the complexity of the activity. This idea isn't new. We've seen other startups in AI and robotics offering payment for content filmed by regular people. Considering how many lawsuits are underway against AI companies that have already benefited from unauthorized use of copyrighted materials, at least this approach lets people be directly compensated for training content. This article originally appeared on Engadget at https://www.engadget.com/ai/doordash-will-start-paying-gig-workers-for-creating-content-to-train-ai-models-204048743.html?src=rss

yesterday, 20:40

DoorDash launches a new ‘Tasks’ app that pays couriers to submit videos to train AI

Delivery couriers will be able to earn money by completing activities like filming everyday tasks or recording themselves speaking in another language.

March 19 at 16:14

Meet the executive with Silicon Valley's trickiest job

Nearly 8 months into her job as OpenAI's CEO of Applications, Fidji Simo must start minting money while convincing staffers the mission hasn't changed.

10 hours ago - 10:01

Even if AI doesn't take your job, it might dent your paycheck

Companies are investing heavily in AI, and layoffs aren't the only option for offsetting those costs.

10 hours ago - 09:53

Miro's CEO says companies should treat spending on AI as part of their employee learning budget

Miro CEO Andrey Khusid says the company gives employees "unlimited" access to the latest AI tools as a way to speed how quickly they learn and work.

11 hours ago - 09:00

Gemini task automation is slow, clunky, and super impressive

I've been testing out Gemini's new task automation on the Pixel 10 Pro and the Galaxy S26 Ultra, which for the first time lets Gemini take the wheel and use apps for you. It's limited to a small subset right now - a handful of food delivery and rideshare services - and it's still in […]

18 hours ago - 02:35

New court filing reveals Pentagon told Anthropic the two sides were nearly aligned — a week after Trump declared the relationship kaput

Anthropic submitted two sworn declarations to a California federal court late Friday afternoon, pushing back on the Pentagon's assertion that the AI company poses an "unacceptable risk to national security" and arguing that the government's case relies on technical misunderstandings and claims that were never actually raised during the months of negotiations.

18 hours ago - 01:40

Anthropic Denies It Could Sabotage AI Tools During War

The Department of Defense alleges the AI developer could manipulate models in the middle of war. Company executives argue that’s impossible.

20 hours ago - 00:03

You don't need an AI degree to land an AI-powered internship

Online job site Indeed curated a list of internship roles that provide AI experience, without requiring a fancy AI degree.

21 hours ago - 23:16

The gen AI Kool-Aid tastes like eugenics

Like many people, director Valerie Veatch was intrigued when OpenAI first released its Sora text-to-video generative AI model to the public in 2024. Though she didn't fully understand the technology, she was curious about what it could do, and she saw that other artists were building online communities to share their new AI creations. The […]

22 hours ago - 22:22

There Aren't a Lot of Reasons to Get Excited About a New Amazon Smartphone

The company is reportedly building a new AI-powered mobile device. If Amazon follows through on the plan, experts warn it will be next to impossible for the company to break into a crowded market.

22 hours ago - 22:03

Tech Memo interview: Talking 'atoms' and 'bits' with Eclipse's Joe Fath

Eclipse's Joe Fath explains AI's role in transforming physical industries, with Travis Kalanick's Atoms and Jeff Bezos making similar moves.

22 hours ago - 21:42

You're laughing. The metaverse is dying, and you're laughing.

Instead of hanging out with no legs, we're stuck in AI slop purgatory.

23 hours ago - 21:31

Microsoft rolls back some of its Copilot AI bloat on Windows

The company is reducing Copilot entry points on Windows, starting with Photos, Widgets, Notepad, and other apps.

23 hours ago - 20:53

Microsoft will yank Copilot from some Windows apps and let you move the taskbar again

After one too many of you threatened to switch to Linux, Microsoft has published a long list of changes it plans to make to Windows 11. In a lengthy blog post titled "Our commitment to Windows quality," Pavan Davuluri, the executive vice president of Windows and Devices, said the company has spent a "great deal" of time in recent months reading feedback from users. "What came through was the voice of people who care deeply about Windows and want it to be better," he said. To that end, Windows Insiders can expect to see some of the changes Microsoft has planned in response to all that criticism begin rolling out this month. Most notably, Microsoft will ease up on the AI pedal. "You will see us be more intentional about how and where Copilot integrates across Windows, focusing on experiences that are genuinely useful and well-crafted," writes Davuluri. As a first step, Microsoft says it will remove "unnecessary Copilot entry points," starting with apps like the Snipping Tool, Photos, Widgets and Notepad. Elsewhere, users can look forward to additional taskbar customization, allowing them to position the interface element at the top or sides of the screen; less disruptive updates, with the option to shut down or restart your device without being forced to install a new patch; and a faster, less janky File Explorer. "Our first round of improvements will focus on a quicker launch experience, reduced flicker, smoother navigation and more reliable performance for everyday file tasks," said Davuluri. Microsoft's promise to fix Windows 11 is long overdue. In January, the company released a couple of emergency updates after what should have been a routine security patch caused bugs that left some PCs unable to shut down and broke Outlook. The general state of the operating system has led many to explore Linux alternatives like Bazzite. 
With Apple also recently releasing the $600 MacBook Neo, a laptop that few PC manufacturers can match right now, Microsoft’s dominance in the PC market is looking vulnerable for the first time in more than a decade. This article originally appeared on Engadget at https://www.engadget.com/computing/microsoft-will-yank-copilot-from-some-windows-apps-and-let-you-move-the-taskbar-again-202857203.html?src=rss

yesterday, 20:28

What happened at Nvidia GTC: NemoClaw, Robot Olaf, and a $1 trillion bet

CEO Jensen Huang took the stage at Nvidia’s GTC conference this week in his signature leather jacket to deliver a two-and-a-half-hour keynote, projecting $1 trillion in AI chip sales through 2027, declaring that every company needs an “OpenClaw strategy,” and closing with a rambling Olaf robot that had to get its mic cut. The message was hard to miss: Nvidia […]

yesterday, 20:02

Future Sony PlayStation games will use AI to imagine new frames

Mark Cerny, the lead architect of the PlayStation 5 and PS5 Pro, told Digital Foundry that ML-based frame generation tech is coming to "PlayStation platforms" in the future, letting the game console use AI to imagine new frames between the ones it's actually rendering, which can create smoother perceived image quality while (typically) introducing some […]

yesterday, 19:49

The White House proposes new AI policy framework that supersedes state laws

The White House has announced a new AI policy framework that calls for Congress to craft federal regulation that overrules state AI laws. The Trump administration has made multiple attempts to overrule more restrictive state-level AI regulation, but has failed so far, most notably during the passage of the “One Big Beautiful Bill.” The framework focuses on a variety of topics, covering everything from child privacy to the use of AI in the workforce. “Importantly, this framework can succeed only if it is applied uniformly across the United States,” the White House writes. “A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.” Developing… This article originally appeared on Engadget at https://www.engadget.com/ai/the-white-house-proposes-new-ai-policy-framework-that-supersedes-state-laws-192251995.html?src=rss

3 sources

yesterday, 19:22

Gamers Hate Nvidia's DLSS 5. Developers Aren’t Crazy About It Either

Nvidia’s new AI upscaling gaming technology struck gamers as uncanny and off-putting. Developers don't seem to like it either, but it could be “the default” in a few years.

yesterday, 19:13

This is Microsoft’s plan to fix Windows 11

Microsoft has faced a breakdown of trust in Windows 11 and a backlash over AI additions to its operating system in recent months. After promising to rebuild trust in Windows earlier this year, Microsoft's Windows chief, Pavan Davuluri, is now revealing the company's plan to fix Windows 11 - and there are a lot of […]

yesterday, 19:01

Three people have been charged with illegally exporting NVIDIA GPUs to China

The US Attorney's Office for the Southern District of New York has charged three people with illegally exporting NVIDIA GPUs to China in violation of the Export Control Reform Act. NVIDIA's chips have become a critical component in the rush to train and run increasingly complex artificial intelligence models, a market the US has sought to shape with export controls and profit-sharing schemes with NVIDIA. The three people, Yih-Shyan "Wally" Liaw, Ruei-Tsang "Steven" Chang and Ting-Wei "Willy" Sun, two employees and one contractor working for US IT company Super Micro Computer, allegedly circumvented export control laws via a multi-step scheme that involved creating fake orders for servers with NVIDIA chips from Southeast Asian companies that were then secretly sent to China. The plan involved paying a logistics company to repackage the servers in Taiwan, staging dummy servers to be inspected by Super Micro Computer's compliance team and falsifying records so that Liaw, Chang and Sun's employer was unaware of where the servers were actually being sent. The DOJ claims Liaw, Chang and Sun facilitated the illegal purchase of $2.5 billion worth of servers between 2024 and 2025 in direct violation of US export laws. Super Micro Computer is not named as a defendant in the US Attorney's indictment, but the company's stock price has been impacted by the scheme, CNBC writes. In a statement released on Thursday, Super Micro Computer announced that it's distancing itself from Liaw, Chang and Sun. "The individuals charged are Yih-Shyan "Wally" Liaw, Senior Vice President of Business Development and a member of the Company's Board of Directors; Ruei-Tsang "Steven" Chang, a sales manager in Taiwan; and Ting-Wei "Willy" Sun, a contractor," the company writes. "Supermicro has placed the two employees on administrative leave and terminated its relationship with the contractor, effective immediately." 
This isn't the first time people have attempted to illegally smuggle NVIDIA's products out of the US, and it likely won't be the last time. Reportedly $1 billion worth of NVIDIA's AI chips were illegally sold in the three months after the Trump administration tightened export controls, and back in December 2025, Texas authorities seized more than $50 million worth of NVIDIA GPUs bound for China. As long as there's demand for AI, there'll be demand for the hardware that makes it possible. This article originally appeared on Engadget at https://www.engadget.com/ai/three-people-have-been-charged-with-illegally-exporting-nvidia-gpus-to-china-184928430.html?src=rss

yesterday, 18:49

Kodiak CEO says making trucks drive themselves is only half the battle

This year is shaping up to be a big one for self-driving trucks. In addition to Aurora's plan to deploy hundreds of autonomous big rigs and Waabi expanding into robotaxis, you've also got Kodiak AI aiming to launch its own fully driverless long-haul freight operation by the end of 2026. While robotaxis may still win […]

yesterday, 18:25

Judge throws out Sam Altman's sister's lawsuit accusing him of sexual abuse — but leaves door open to refile

A federal judge greenlit OpenAI CEO Sam Altman's defamation countersuit against his sister, who accused him of sexual abuse.

yesterday, 17:48

NVIDIA's CEO Projects $1 Trillion in AI Chip Sales as New Computing Era Begins

Wall Street Journal

yesterday, 17:30

Box CEO says companies will need to figure out how to budget for workers running up AI token bills

"Their compute budgets are just going to monotonically go up over time," Box CEO Aaron Levie wrote about workers who properly leverage AI.

yesterday, 16:56

WordPress.com now lets AI agents write and publish posts, and more

New AI agents on WordPress.com could lower barriers to publishing while increasing machine-generated content across the web.

yesterday, 16:43

Amazon is reportedly developing an AI-centric smartphone

Amazon's second smartphone could forgo an app store.

yesterday, 16:38

Goldman Sachs maps out where it's pushing AI — and the risks that could upend its strategy

In a new shareholder letter, Goldman Sachs' leaders offered insight into how the bank is navigating the competitive AI revolution.

yesterday, 16:10

AI startups are eating the venture industry and the returns, so far, are good

AI startups accounted for 41% of the $128 billion in venture dollars raised by companies on Carta last year — a record-high annual share.

yesterday, 15:41

11 of the most interesting quotes from Claude users about their hopes and fears for AI

One engineer said they lied to their employer about how long it took to build a new feature. Others were uneasy about creating company AI plans.

yesterday, 15:31

At Palantir’s Developer Conference, AI Is Built to Win Wars

As business soars, Palantir is doubling down on a vision of AI built for battlefield advantage—and attracting customers who agree.

yesterday, 15:00

Google Search is now using AI to replace headlines

Since roughly the turn of the millennium, Google Search has been the bedrock of the web. People loved Google's trustworthy "10 blue links" search experience and its unspoken promise: The website you click is the website you get. Now, Google is beginning to replace news headlines in its search results with ones that are AI-generated. […]

yesterday, 14:23

Amazon is reportedly working on a new phone built around Alexa

Amazon is reportedly planning to re-enter the smartphone market more than 10 years after its last attempt. According to a Reuters report, the mysterious phone is internally codenamed "Transformer" and is being developed by the company’s devices and services unit. There isn’t a whole lot to go on right now, but it probably won’t surprise many to learn that the phone will likely lean heavily on AI. According to Reuters’ sources, Alexa functionality would be a core part of the experience, but Amazon wouldn’t necessarily build a custom OS around its voice assistant. The phone would make buying products on Amazon and using services like Prime Music and Prime Video "easier than ever," and may bypass traditional app stores. Reuters reports that the Transformer project is being led by the recently established ZeroOne, an Amazon devices unit headed up by ex-Microsoft executive and Xbox co-founder J Allard, who was also one of the creators of Zune. Allard joined Amazon last year to lead "a special projects team dedicated to inventing breakthrough consumer product categories." The development team has reportedly considered launching both a traditional smartphone and a so-called "dumbphone," which would presumably strip away anything that needlessly distracted you from the Amazon empire. Reuters’ anonymous sources suggest the latter could help combat screen addiction by offering fewer features. ZeroOne is apparently inspired by the ultra-minimalist Light Phone, suggesting that Amazon might be reluctant to take on the flagship devices of Apple and Samsung. The report adds that the Transformer phone could even be positioned as a secondary handset. This, of course, would not be Amazon’s first crack at the smartphone business. The company launched the Fire Phone in 2014, an ambitious and interesting device that ultimately failed to tempt people away from the more established smartphone ecosystems. It’s widely remembered as perhaps the company’s biggest hardware misstep. 
With analysts forecasting an unprecedented decline in the smartphone market in 2026, now seems like a risky time for Amazon to try again, and Reuters was unable to determine exactly how much the company has committed to the Transformer project. Sources also wouldn’t rule out it being scrapped altogether if the company’s priorities suddenly shifted. This article originally appeared on Engadget at https://www.engadget.com/mobile/amazon-is-reportedly-working-on-a-new-phone-built-around-alexa-142244500.html?src=rss

2 sources

yesterday, 14:22

Why people really hate AI

There's a big, and increasing, disconnect in culture right now when it comes to artificial intelligence. Companies of all shapes and sizes are hunting for places to deploy AI and can't stop talking about how this new technology will change everything. But when you ask people about AI, the consistent response is: no thanks. Study […]

yesterday, 13:27

ByteDance is selling its Moonton game unit to Savvy Games for a cool $6 billion

Following discussions first reported earlier this year, ByteDance has agreed to sell its games unit Moonton to Savvy Games Group for $6 billion. Moonton is known for mobile titles popular in Asia like Mobile Legends: Bang Bang, which has been downloaded 1.5 billion times. The transaction is set to be finalized in the "near future," according to an internal memo from Moonton's CEO seen by Bloomberg. ByteDance has been winding down its gaming arm and shopping Moonton since 2023, just two years after it first acquired the developer. Around that same period, the TikTok parent was shuttering its Nuverse gaming arm, which published notable titles like Marvel Snap and Ragnarok X: Next Generation. The company has since shifted its focus to AI, competing with Chinese rivals to develop chatbots and foundational models. Savvy Games, which is owned by Saudi Arabia's Public Investment Fund (PIF), has been going in the opposite direction. Last year the company (via its subsidiary Scopely) acquired Pokémon Go developer Niantic for $3.5 billion. PIF was also among the key investors that purchased Electronic Arts in a blockbuster $55 billion deal last year. The Saudi fund holds a 7.5 percent stake in Nintendo as well. The sale is the latest chapter in the recent gaming industry consolidation that saw around 45,000 jobs lost in a brutal three-year period between 2022 and 2025. According to a recent GDC study, one-third of US video game industry workers were laid off over the last two years. This article originally appeared on Engadget at https://www.engadget.com/gaming/bytedance-is-selling-its-moonton-game-unit-to-savvy-games-for-a-cool-6-billion-124131595.html?src=rss

yesterday, 12:41

Engadget Podcast: Why does everyone hate NVIDIA's DLSS 5 AI upscaling?

NVIDIA started an online firestorm this week when it announced DLSS 5 at its GTC conference. The company claims it's meant to deliver "photorealistic" lighting and materials in games by using neural processing. But it differs considerably from previous versions of DLSS, which were focused on using machine learning to upscale lower resolutions and generate additional frames, and gamers online aren’t too happy. To help us break this down, Anshel Sag, VP and principal analyst at Moor Insights and Strategy, joins us to discuss his experience with NVIDIA's DLSS 5 demos. Also, we dive into what's next for Xbox with Project Helix.
Subscribe!
• iTunes
• Spotify
• Pocket Casts
• Stitcher
• Google Podcasts
Topics
• NVIDIA announced DLSS 5, the disgust was immediate (with Anshel Sag from Moor Insights & Strategy) – 0:51
• Arizona attorney general sues Kalshi for operating an illegal gambling business – 36:22
• Polymarket users threaten the life of a reporter at The Times of Israel over accurate reporting – 36:59
• Apple announces AirPods Max 2 with improved noise cancellation – 44:33
• Elon Musk’s xAI faces class action suit over facilitating CSAM distribution – 47:38
• Samsung stops selling Galaxy Z TriFold after 3 months because components got too expensive – 51:22
• Around Engadget: Apple Studio XDR review, Dell XPS 16 review – 53:49
• Listener Mail: Stick with iPhone on Linux? And are there any good Android tablets? – 55:41
• Pop culture picks – 58:46
Credits
Hosts: Devindra Hardawar
Guest: Anshel Sag
Producer: Ben Ellman
Music: Dale North and Terrence O’Brien
This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/engadget-podcast-why-does-everyone-hate-nvidias-dlss-5-ai-upscaling-121335918.html?src=rss

2 sources

yesterday, 12:13

The best AI investment might be in energy tech

Power has become one of the biggest bottlenecks in rolling out new AI data centers. That's creating an opening for investors.

yesterday, 12:00

Blue Origin also wants to put AI data centers in space

Blue Origin has revealed its plans for an orbital AI data center system in a new filing with the Federal Communications Commission. The company has asked the agency for permission to deploy 51,600 satellites, as reported by the Wall Street Journal and SpaceNews. Called Project Sunrise, the initiative aims to launch and operate a constellation of satellites that can deliver computing capacity for artificial intelligence uses. Project Sunrise’s satellites will be placed in sun-synchronous orbits at altitudes between 311 and 1,118 miles. Each layer in the constellation will have between 300 and 1,000 satellites, and the layers will be approximately 3 to 6 miles apart. In its filing, Blue Origin said the constellation would complement terrestrial data centers. The satellites will, of course, be fitted with solar panels to gather energy from the sun. Blue Origin explained that the orbital AI data center will lower the “marginal cost of compute capacity compared to terrestrial alternatives,” because the satellites will be powered by the sun, won’t need land and won’t need grid infrastructure. Project Sunrise will “enable US companies developing and using AI to flourish, accelerating breakthroughs in machine learning, autonomous systems and predictive analytics,” Blue Origin added. By filing its request with the FCC, Blue Origin has officially joined SpaceX on the list of companies looking to build an AI data center in space. In January, SpaceX asked the FCC for permission to deploy 1 million satellites for its constellation. The company argued at the time that “orbital data centers are the most efficient way to meet the accelerating demand for AI computing power.” This article originally appeared on Engadget at https://www.engadget.com/science/space/blue-origin-also-wants-to-put-ai-data-centers-in-space-115614142.html?src=rss

yesterday, 11:56

These AI notetaking devices can help you record and transcribe your meetings

These physical notetakers transcribe audio and give users summaries and action items of meetings using AI. Some even offer live translation.

yesterday, 11:31

Mark Cuban says he's joined the Mac Mini craze, using one to counter a flood of AI-generated emails

"It's not even like the cold emails because that's pretty obvious," Mark Cuban said. "It's people subscribing me to shit."

yesterday, 11:25

I Learned More Than I Thought I Would From Using Food-Tracking Apps

These apps, some of which use AI and computer vision, were helpful for meeting my caloric and nutrition intake goals. But they also gave me some anxiety.

yesterday, 10:30

The Importance of Behavioral Analytics in AI-Enabled Cyber Attacks

Artificial Intelligence (AI) is changing how individuals and organizations conduct many activities, including how cybercriminals carry out phishing attacks and iterate on malware. Now, cybercriminals are using AI to generate personalized phishing emails, deepfakes and malware that evade traditional detection by impersonating normal user activity and bypassing legacy security models. As a result,

yesterday, 10:00

LinkedIn Invited My AI 'Cofounder' to Give a Corporate Talk—Then Banned It

When social media is constantly exhorting people to use AI, what is the point of not letting AI agents participate?

yesterday, 10:00

I thought using AI and vibe coding could protect me from job cuts, but Amazon still laid me off. Here's what I learned.

An Amazon employee taught herself to vibe code and prompt engineer in the hope of protecting her job. It didn't, but she now runs her own business.

yesterday, 09:41

What parts of your job would you give to AI?

A recent study suggests that 93% of jobs will be impacted by AI, at least to some degree. We want to know what tasks you're happy to pass off.

yesterday, 09:19

China is putting OpenClaw to work in robots

Chinese companies are going full steam ahead with giving their robots the OpenClaw lobster fix, but the US is still worried about AI going rogue.

yesterday, 07:33

OpenAI is putting ChatGPT, its browser and code generator into one desktop app

OpenAI is developing a “super app” for desktop that unifies ChatGPT, its browser and its Codex app, according to the Wall Street Journal and CNBC. A company spokesperson told the publications that OpenAI Chief of Applications Fidji Simo will lead the application revamp with assistance from OpenAI President Greg Brockman. Simo will also help the marketing team advertise the app when it comes out. OpenAI’s leadership is apparently hoping that combining several products can help it streamline user experience and dedicate its resources to one project. The company has yet to make an official announcement about the new app, but Simo replied to the Journal piece’s author on X. “Companies go through phases of exploration and phases of refocus; both are critical,” Simo said. “But when new bets start to work, like we're seeing now with Codex, it's very important to double down on them and avoid distractions. Really glad we're seizing this moment.” The Journal saw the internal note Simo sent to employees, wherein she said that the company realized it was spreading its efforts across too many apps and that it needed to simplify its efforts. “That fragmentation has been slowing us down and making it harder to hit the quality bar we want,” she reportedly wrote. In an all-hands meeting, CNBC said she also told employees that the company was “orienting aggressively” towards high-productivity use cases. It’s not clear yet when the unified app will be available, but OpenAI is reportedly focusing on developing agentic AI capabilities for it. The agents will be able to make decisions and use tools to do tasks on computers, such as writing software or analyzing data, with little human oversight. This article originally appeared on Engadget at https://www.engadget.com/ai/openai-is-putting-chatgpt-its-browser-and-code-generator-into-one-desktop-app-025709839.html?src=rss

2 sources

yesterday, 02:57

HR execs share a growing concern about doubling down on AI

HR executives say AI adoption is driving up costs, placing pressure on productivity gains and long-term profitability.

yesterday, 22:18

Alphabet no longer has a controlling stake in its life sciences business Verily

Alphabet's life sciences business Verily is restructuring and raising money as a new corporate entity. Verily announced that, with its $300 million investment round, it will change from an LLC to a corporation and rename itself Verily Health Inc. As a result, Alphabet now has a minority stake rather than a controlling one in the business. Like seemingly every other tech business, Verily says this next chapter will be focused on AI. “From research to care, our customers need solutions that bring the best of clinical and scientific rigor together with AI to deliver the next generation of healthcare - one that is as precise as it is personal," Chairman and CEO Stephen Gillett said. Google Life Sciences was renamed Verily in 2015, around the same time Google rebranded to Alphabet. It has worked on a wide range of projects over the years, such as using eye scans to predict heart disease and an opioid addiction center. In 2025, it closed its medical device division, a move that may have signaled its shift toward AI. This article originally appeared on Engadget at https://www.engadget.com/big-tech/alphabet-no-longer-has-a-controlling-stake-in-its-life-sciences-business-verily-221718631.html?src=rss

yesterday, 22:17

Jeff Bezos reportedly wants $100 billion to buy and transform old manufacturing firms with AI

The Amazon magnate has a new project centered around acquiring industrial firms and revamping them with AI technology.

yesterday, 22:09

‘Uncanny Valley’: Nvidia’s ‘Super Bowl of AI,’ Tesla Disappoints, and Meta’s VR Metaverse ‘Shutdown’

In this episode, we dive into Nvidia’s annual developer conference and what CEO Jensen Huang is saying about the future of the company.

yesterday, 21:43

Google is reportedly testing a Gemini app for Mac

Google is testing a version of its Gemini app for macOS, Bloomberg reports. The app would bring the AI assistant into uncharted territory, and into more direct competition with OpenAI's ChatGPT and Anthropic's Claude, both of which offer standalone Mac apps. Gemini remains accessible through the web, and it sounds like the macOS app offers the same set of features, with the ability to respond to prompts, search the web and generate text, images and code. The major differentiator of the Mac app could be a feature called "Desktop Intelligence," which gives Gemini a new source of information and context for its responses. According to a message in the app's code viewed by Bloomberg, "when you enable apps for Desktop Intelligence you are enabling Gemini to see what you see (such as screen context) and pull content directly from these apps to improve and personalize your experience only when Gemini is in use." The ability to refer to information in apps and what's currently on your screen is offered by both the Claude and ChatGPT macOS apps, and is something Gemini is already capable of on mobile devices. It's not clear if Gemini for macOS will be able to actually take action in the apps it can view — like, for example, Anthropic's popular Claude Cowork feature — but Google has already started offering that experience in a limited form on smartphones, so who's to say it couldn't come to desktop operating systems, too. Bloomberg reports that the Gemini app is being tested with non-Google employees, which could be a sign it's making its way to a public release. Whether or not the app sees the light of day, some of the technology that makes Gemini possible will run on macOS in the future thanks to Apple and Google's AI partnership. Google and Apple announced in January that Google's Gemini models would power future versions of Apple Intelligence. Apple is also reportedly overhauling Siri into more of a chatbot, an experience likely made possible by Gemini.
This article originally appeared on Engadget at https://www.engadget.com/ai/google-is-reportedly-testing-a-gemini-app-for-mac-203703372.html?src=rss

March 19 at 20:37

Online bot traffic will exceed human traffic by 2027, Cloudflare CEO says

AI bots may outnumber humans online by 2027, says Cloudflare CEO Matthew Prince, as generative AI agents dramatically increase web traffic and infrastructure demands.

March 19 at 19:09

OpenAI is acquiring open source Python tool-maker Astral

Codex maker says it will "continue to support these open source projects" after deal closes.

2 sources

March 19 at 18:44

Meta will move away from human content moderators in favor of more AI

A little more than a year after ditching third-party fact checkers and rolling back much of its proactive content moderation, Meta says it will further "transform" its approach by drastically reducing the number of human moderators in favor of AI-based systems. The company says the change will happen "over the next few years," and will allow it to catch more issues faster than its current approach. Meta didn't say how much of its contract workforce might be cut as it makes this transition. The company employs thousands of contractors around the world to review content flagged by its AI systems and user reports, among other tasks. The company said that as it shifts its approach, humans will "play a key role" in "critical decisions" and aid in training and other tasks. "Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high‑impact decisions," Meta said in an update. "For example, people will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement." The company has been testing LLM-based systems for content moderation for a while and says that early tests have had "promising" results. Another advantage is that its AI can handle languages used by "98% of people online," compared with the 80 languages currently supported by its moderation capabilities. While Meta says its underlying rules aren't changing, the new approach could dramatically change users' perception of how Meta enforces its policies. The company already relies heavily on AI for certain rules, and many users believe that these systems make too many mistakes and make it difficult for their appeals to reach a set of human eyes.
On the other hand, Meta, which stands to save a lot of money if it significantly downsizes its contract workforce, says its new systems make "fewer over-enforcement mistakes" and catch more of the most "severe" violations. In the nearer term, Meta is introducing an AI-powered "support assistant" that will help users with certain types of account issues. The chatbot, which is rolling out now in the Facebook and Instagram apps, will be able to help users report content and manage appeals, reset passwords and manage other account settings. It will also be able to help people who get locked out of their accounts, "starting with select cases in the US and Canada." This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-will-move-away-from-human-content-moderators-in-favor-of-more-ai-183000435.html?src=rss

March 19 at 18:30

A rogue AI led to a serious security incident at Meta

For almost two hours last week, Meta employees had unauthorized access to company and user data thanks to an AI agent that gave an employee inaccurate technical advice, as previously reported by The Information. Meta spokesperson Tracy Clayton said in a statement to The Verge that "no user data was mishandled" during the incident. A […]

3 sources

March 19 at 18:20

Meta is having trouble with rogue AI agents

A rogue AI agent inadvertently exposed Meta company and user data to engineers who didn't have permission to see it.

March 18 at 23:42

A Meta agentic AI sparked a security incident by acting without permission

The Information reported that an AI agent within Meta took unauthorized action that led to an employee creating a security breach at the social media company last week. According to the publication, an employee used an in-house agentic AI to analyze a query from a second employee on an internal forum. The AI agent posted a response to the second employee with advice even though the first person did not direct it to do so. The second employee took the agent's recommended action, sparking a domino effect that led to some engineers having access to Meta systems that they shouldn't have permission to see. A representative from the company confirmed the incident to The Information and said that "no user data was mishandled." Meta's internal report indicated that there were unspecified additional issues that led to the breach. A source said that there was no evidence that anyone took advantage of the sudden access or that the data was made public during the two hours when the security breach was active. However, that may be the result of dumb luck more than anything else. Many tech leaders and companies have touted the benefits of artificial intelligence, but this is just the latest incident in which human employees have lost control of an AI agent. Amazon Web Services experienced a 13-hour outage earlier this year that also (apparently coincidentally) involved its Kiro agentic AI coding tool. Moltbook, the social network for AI agents recently acquired by Meta, had a security flaw that exposed user information thanks to an oversight in the vibe-coded platform. This article originally appeared on Engadget at https://www.engadget.com/ai/a-meta-agentic-ai-sparked-a-security-incident-by-acting-without-permission-224013384.html?src=rss

March 18 at 22:40

Google Shakes Up Its Browser Agent Team Amid OpenClaw Craze

As Silicon Valley obsesses over a new wave of AI coding agents, Google and other AI labs are shifting their bets.

March 19 at 18:00

Google told staff worried about Pentagon AI deals that the company is 'leaning more' into national security contracts

Google DeepMind CEO Demis Hassabis told staff it's "incumbent on us to work with democratically elected governments."

March 19 at 17:56

Meta rolls out new AI content enforcement systems while reducing reliance on third-party vendors

Meta believes these AI systems can detect more violations with greater accuracy, better prevent scams, respond more quickly to real-world events, and reduce over-enforcement.

March 19 at 17:24

As a nonprofit-focused software company, AI tools help our clients fundraise with limited resources

At a software company focused on helping nonprofits fundraise, AI has helped streamline clients' donor management and fundraising processes.

March 19 at 17:15

Google declares 'vibe design' is here as Figma's stock price sinks

Google's Stitch AI tool is the latest shot across the bow of software makers, as AI companies roll out features directly challenging their products.

March 19 at 16:56

Meta isn't shutting down its VR metaverse after all

Meta is backtracking on its plans to shut down the VR version of its metaverse. The company now plans to support Horizon Worlds in VR for the "foreseeable future," though users shouldn't expect new games, CTO Andrew Bosworth said in an update. "We will keep horizon worlds working in VR for existing games, to support the fans who've reached out," Bosworth said in a post on Instagram. "For people who already have games they like that they're using in Horizon Worlds, [they] will be able to download the Horizon Worlds app and use it in VR for the foreseeable future." The reversal comes after Meta said earlier this week that Horizon Worlds in VR would no longer be accessible after June 15 as the company pivots its metaverse experiences to mobile. Though Horizon never gained mass appeal, even among VR enthusiasts, Meta's move to shut it down was just the latest sign of how the company has pivoted away from its metaverse ambitions as it chases AI "superintelligence." In his post on Instagram, Bosworth said there was "a lot of misinformation" about the company's plans. "We announced, 'hey, we're moving away from Horizon Worlds in VR,' and the headline is that Horizon is dead," he said. "It's not. And likewise, VR is not dead. We're continuing to invest tremendously." The company laid off more than 1,000 employees from its metaverse division and shut down three VR studios earlier this year. Bosworth said that the company is still working on its next two generations of VR headsets. He described the metaverse as a "misunderstood concept" that was never meant to only encompass virtual reality. He said that AR is also part of the vision and that even people scrolling their phones could be part of the metaverse. "When somebody is using their phone and you're physically with them, they're at the dinner table with you, and yet when you talk to them, they hear nothing because they've transported themselves through the glowing rectangle into a digital space," he said. 
"Maybe that they're scrolling media, maybe that they're in the text world, but like they have transported themselves. So we've always had this internally — at least me and Mark — this very expansive construct of the metaverse." This article originally appeared on Engadget at https://www.engadget.com/ar-vr/meta-isnt-shutting-down-its-vr-metaverse-after-all-165520696.html?src=rss

March 19 at 16:55

TikTok is testing a new micro-drama feed, and its top shows feature AI zombies and sad polar bears

TikTok is testing a new micro-drama feed called "TikTok Short Drama" in the US and a few other regions. Some top shows are AI-generated.

March 19 at 16:30

ChatGPT’s ‘Adult Mode’ Could Spark a New Era of Intimate Surveillance

OpenAI plans to allow sexting with ChatGPT. A human-AI interaction expert warns of a privacy nightmare.

March 19 at 16:06

Andrej Karpathy says he's using his Jensen Huang hand-signed Nvidia chip system to power his Clawbot 'Dobby'

OpenAI cofounder Andrej Karpathy said he will use the AI supercomputer to power his OpenClaw bot, "Dobby the House Elf claw, among other tinkering."

March 19 at 15:53

Alexa+ launches in the UK

Amazon’s next-generation smart assistant has entered its Early Access program in the UK, marking Alexa+’s European debut following rollouts in the US, Canada and Mexico. Starting March 19, invitations to start using the smarter, more conversational Alexa will be sent out to "hundreds of thousands" of willing participants, Amazon said in a press release, adding that Alexa is the most popular voice assistant in the UK. As well as its more natural communication, agentic capabilities, contextual awareness and ability to remember previous conversations across devices, Amazon says that users across the pond are getting an "authentically British" AI-powered assistant. It understands slang terms like "cuppa" and might even accuse you of taking the mick in the middle of a conversation. Can we rule out some cringe-inducing cockney impersonations? Absolutely not. It also distinguishes between, for example, how people in the UK say the date — "the 1st of April" — versus how it’s said in the US. Amazon said that engineers, linguists and speech scientists have worked together at the company’s Cambridge-based Tech Hub to ensure the voice assistant understands British users, with naturally flowing conversations being a crucial part of the Alexa+ experience. On the agentic side of things, the current lineup of UK partners will include OpenTable and, soon, JustEat, alongside existing partnerships with services like Spotify, Philips and Apple Music. Amazon also sources news from the likes of The Guardian and Future Publishing. UK-based customers who purchase a new supported Echo device will automatically qualify for Early Access, and if you already own one you can register here to receive an invite. You can also try Alexa+ on select Fire TV devices and in a web browser. During the Early Access period, which ran for nearly a year in the US before its nationwide rollout last month, Alexa+ will be free, and it will remain free for Prime members. On its own it will cost £20 per month.
As a reminder, Prime costs £9 per month in the UK (£95 annually) so it makes no sense whatsoever to pay more for Alexa+ exclusively when it's included in the main membership anyway. This article originally appeared on Engadget at https://www.engadget.com/ai/alexa-launches-in-the-uk-141058988.html?src=rss

March 19 at 14:10

Signal’s Creator Is Helping Encrypt Meta AI

Moxie Marlinspike says the technology powering his end-to-end encrypted AI chatbot, Confer, will be integrated into Meta AI. The move could help protect the AI conversations of millions of people.

March 19 at 14:09

Nothing Phone 4a Pro review: A midrange phone that rivals the Pixel 10a

Nothing takes a different tack with its phone series. For the second time in a row, its midrange A-series smartphones debuted ahead of its next flagship device. The company has even warned that we won’t be getting the Nothing Phone 4 until next year. Until then, the Phone 4a Pro is here to make an impact, with a more restrained design, a less obtrusive camera bump and specs that beat out last year’s Nothing Phone 3 — all for $499. In 2026, Nothing is truly aiming to dethrone the Pixel 10a.

Hardware

It’s a new look. That’s often the case with Nothing’s smartphones, as the company typically reimagines or rejigs what you can see through the clear back panel. This year, however, Nothing is making bigger changes: this is its first metal (aluminum) unibody phone. With a new periscope telephoto camera design, the jarringly thick camera bump of last year’s Phone 3a Pro is thankfully gone, resulting in a slice of smartphone that feels — and to some, looks — more premium and more refined than Nothing’s “flagship” Phone 3. However, compared to the Nothing phones that came before, it also feels muted, and a little safe. The playfulness of Nothing has been hemmed in a little. You might prefer it, but I’m not sure I do. Those identifiable Nothing design flourishes — red details, visible screens, lots and lots of circles — are now squeezed into a camera panel. This oblong area with curved corners houses a trio of cameras, a “Now Recording” red light and a tweaked Glyph Matrix, which we last saw on the Nothing Phone 3. This new Glyph Matrix is bigger and brighter, but at a lower “resolution,” made up of 137 mini-LEDs. That’s fewer than the Nothing Phone 3’s 489-strong dot matrix, but the LEDs here are 100 percent brighter.
So bright, in fact, that I had to turn them down to their lowest brightness when I was using them. The 4a Pro, however, lacks the rear button on the Phone 3 that lets you cycle through Glyph functions. Does this mean the company has made it easy to switch between Glyph toys and notifications in the phone’s UI? Sadly not. You can dip into the Glyph options through the main settings menu, but the option to change what the Glyph displays is hidden in a sub-tab. I also noticed that the offering of “toys” was limited, with fewer items than even the Nothing Phone 3 had at launch. Hopefully, this will expand once the phone officially launches. The 4a Pro packs a bigger display than the company’s flagship: a 6.83-inch AMOLED screen running at 1.5K resolution. It also has a higher refresh rate than the 6.67-inch Phone 3. And on top of that, the Phone 4a Pro’s display has a peak brightness of 5,000 nits, making it Nothing’s brightest smartphone yet. I’ve handled so many phones over the last four weeks that it’s often hard to discern the difference between brighter displays. Fortunately, I have the Nothing Phone 3 (and 3a Pro) to compare against the Phone 4a Pro. It’s noticeably brighter, and as we slowly get into sunnier weather, a smartphone that’s easier to read outdoors is always very welcome. The Phone 4a Pro also has improved IP65 water and dust resistance, and Nothing says it's 42 percent more bend-resistant than the Phone 3a Pro as well. It’s also almost 0.5mm thinner, if you ignore the camera bump for those measurements. Factor that in and the Phone 4a Pro is almost 1.5mm thinner than its predecessor. This design change also makes Nothing’s newest phone feel far less top-heavy than the 3a Pro. Regardless of the aesthetic changes, this is unmistakably refined hardware.
Cameras

Besides the streamlined camera unit, with a new tetraprism periscope lens that takes up less space, the Phone 4a Pro has improved imaging capabilities (almost) across the board. The new 50-megapixel periscope telephoto lens (which Nothing says also uses less power) has a 3.5x optical zoom, plus computational photography magic that can now crank it up to a (mostly unusable) 140x hybrid zoom. The main 50MP camera also features a bigger sensor for improved low-light performance. With an f/1.88 lens, though, it doesn’t quite match the Phone 3’s main camera (f/1.68), both on paper and in practice. The array is rounded out with an 8MP ultrawide camera, which sounds like the weakest link, but I rarely use the ultrawide cameras on any phone aside from review testing. Oddly, the selfie camera is a technical downgrade in resolution, with a 32MP sensor on the 4a Pro, down from 50MP on the 3a Pro. One new addition was co-developed by Google. Ultra XDR blends Android’s native HDR processing with Nothing’s own approach, capturing 13 RAW frames at different exposures and combining them to deliver greater dynamic range and detail. However, as proof of how new the format is, Ultra XDR images can’t be shared as easily; they do work with Google Photos and Instagram, at least. If it’s any consolation, Ultra XDR so far doesn’t seem hugely different from typical HDR capture. I’ll keep testing the cameras, and if I figure out where it really shines, I’ll update this review. If one thing disappoints on the 4a Pro, it’s recording video. Switching between zoom levels will often completely derail exposure settings. Even if you record on a single camera at the same focal length, exposure levels seem extremely sensitive and struggle to stay locked.
Footage is often muddy and low-light performance isn’t great, even when using the Ultra XDR video mode. You aren’t forced to endure this with the Pixel 10a, but then again, there’s no zoom on Google’s midrange phone — just a lossless crop. In more forgiving lighting, video is adequate, but quality drops off beyond the 3.5x optical zoom. Still, the versatility and quality of the still images from both the main camera and the telephoto lens put it above every other smartphone at this price.

Performance and software

The Phone 4a Pro is now powered by a more capable processor: Qualcomm’s Snapdragon 7 Gen 4. Nothing claims that, in addition to its own on-device optimizations, it improved CPU performance by 27 percent, GPU performance by 30 percent and AI performance by 65 percent compared to the Phone 3a series. There’s certainly a big difference in performance while gaming. While the 3a series struggled with more complex games, the 4a Pro kept up with Red Dead Redemption and Diablo Immortal. It’s not the most polished interpretation of Deckard Cain and the lands of Sanctuary, but it's responsive and playable, even at 60 fps, with only a few frame drops. The Phone 4a Pro has a 5,080mAh battery, roughly equivalent to its predecessor's. It supports up to 50W fast charging, a tad faster than the Pixel 10a, though it lacks wireless charging support, unlike Google’s midranger. It’s one of the few signs that this isn’t Nothing’s “true” flagship, even if it looks the part. I was pleasantly surprised by the battery life, too. Typically, phone batteries keep getting bigger, but as I mentioned, that’s not the case here. Even so, the 4a Pro lasted 24 hours in our battery rundown test, five hours more than last year’s model. The Phone 4a Pro has all the software features either present or teased in older Nothing phones. Essential Search is a system-wide search that can find terms in messages, files and the rest of your phone.
There's also a new Breathing Break widget; we definitely need that in 2026. Essential Memory is Nothing's name for the background algorithms that analyze your phone's contents as well as whatever's saved in Essential Space. Nothing has added cloud storage for Space, aimed at devoted upgraders: everything you saved on older compatible Nothing phones can be transferred over. Sure, it's a little niche, but it was an early frustration when testing the Phone 3 after the 3a series. And if, for some reason, you have to reset your device, keeping everything in Space backed up elsewhere is a boon.

While it's technically a hardware tweak, Nothing has also moved the Essential Key to the left edge of the phone, making it far less likely to be triggered when you're adjusting the volume and bringing it more in line with other phones and my own smartphone muscle memory.

One caveat from previous Nothing devices remains. The company says it will deliver three years of Android updates and an additional three years of security patches. Compare that to the seven years of Android updates Samsung offers for this year's S26 series (and Google for the Pixel 10a), and you can see how it falls short.

Wrap-up

The Phone 4a Pro punches well above its $499 price tag. Nothing has successfully refined its hardware into a more premium all-metal unibody, losing the jarring camera bump of its predecessor in favor of a sleek design that houses a genuinely impressive camera system. The improved camera versatility, coupled with class-leading 24-hour battery life and a more capable processor, makes this a serious threat to the Pixel 10a. However, some of Nothing's signature playfulness has been dialed back. The Glyph Matrix, while brighter, is lower-resolution, and its "toys" are disappointingly limited at launch. The lack of wireless charging is another nod to its midrange status.
Nothing’s Phone 4a Pro is a device with a clear identity, delivering on the essentials for half the price of many rivals. This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/nothing-phone-4a-pro-review-glyph-matrix-130042005.html?src=rss

March 19 at 13:00

Fitbit’s AI health coach will soon be able to read your medical records

Would you share your medical records with a personal trainer? How about a virtual one? Google, which this week announced it is giving Fitbit's AI health coach the ability to read your medical records, is hoping the answer is yes, following rivals like Amazon, OpenAI, and Microsoft in betting that users are willing to trade […]

March 19 at 12:27

Adobe’s AI image generator can now be trained on your own art

Adobe is launching customizable AI image generators that can mimic specific artistic styles and character designs. The Firefly Custom Models are available in public beta starting today, allowing creators and brands to train a model on their own assets to ensure generated images follow a consistent aesthetic for characters, illustrations, and photography. The tool aims […]

March 19 at 12:14

How Ceros Gives Security Teams Visibility and Control in Claude Code

Security teams have spent years building identity and access controls for human users and service accounts. But a new category of actor has quietly entered most enterprise environments, and it operates entirely outside those controls. Claude Code, Anthropic's AI coding agent, is now running across engineering organizations at scale. It reads files, executes shell commands, calls external APIs,

March 19 at 10:58

The Fight to Hold AI Companies Accountable for Children’s Deaths

After a series of suicides allegedly linked to AI chatbots, one lawyer is trying to hold companies like OpenAI accountable.

March 19 at 10:00

How we monitor internal coding agents for misalignment

How OpenAI uses chain-of-thought monitoring to study misalignment in internal coding agents—analyzing real-world deployments to detect risks and strengthen AI safety safeguards.

March 19 at 10:00

Miro's CEO says the company is hiring entrepreneurs for an edge in the AI era

Miro's Andrey Khusid said the company has brought on roughly 40 founders in the last two and a half years.

March 19 at 09:24

Chief Justice John Roberts says in the age of AI 'it's going to be really tough for young lawyers'

"The job of both young lawyers and partners is going to change," Chief Justice John Roberts said.

March 19 at 09:09

From bits to atoms: AI is shifting tech's center of gravity

AI's potential to commoditize software and other digital sectors is driving investors to embrace hard assets such as infrastructure and logistics.

March 19 at 09:00

How a Big Law heavyweight is shaping its AI rollout around Palantir's model

Ilona Logvinova, the head of AI at top law firm HSF Kramer, says it's changing the way lawyers use AI by taking a page out of Palantir's playbook.

March 19 at 09:00

The rise of the AI knock-off McKinsey consultant

Developers are building AI agents modeled on the work of consultants. We asked a former McKinsey consultant to test it out.

March 19 at 08:44

Multiverse Computing pushes its compressed AI models into the mainstream

After compressing models from major AI labs including OpenAI, Meta, DeepSeek and Mistral AI, Multiverse Computing has launched both an app that showcases the capabilities of its compressed models and an API that makes them more widely available.

March 19 at 08:00

The FBI confirms it's buying Americans' location data

During a Senate hearing, FBI Director Kash Patel confirmed that his agency has bought information that could be used to track individuals' movement and location. "We do purchase commercially available information that's consistent with the Constitution and the laws under the Electronic Communications Privacy Act, and it has led to some valuable intelligence for us," he said.

Law enforcement is required to obtain a warrant in order to get location data from cell service providers following the Carpenter v. United States ruling from 2018. But why bother with all that hassle when they can just buy the information on the open market?

"Doing that without a warrant is an outrageous end run around the Fourth Amendment, it's particularly dangerous given the use of artificial intelligence to comb through massive amounts of private information," Sen. Ron Wyden (D-Ore.) said during the Intelligence Committee hearing. Wyden is one of several lawmakers pushing for an overhaul of when and how the government can obtain citizens' personal information. It's an overhaul that's badly needed.

Patel already has a history of dubious use of government resources, such as ordering SWAT protections for his girlfriend and somehow horning in on men's hockey victory celebrations at the recent winter Olympics, so one would hope he's not also stretching the limits of the few privacy protections that do exist. Then, outside the FBI, we have the Department of Homeland Security being sued for illegally tracking immigration raid protestors, and the Pentagon labeling Anthropic a supply-chain risk after the AI company refused to let its products be used for mass surveillance of Americans. This article originally appeared on Engadget at https://www.engadget.com/big-tech/the-fbi-confirms-its-buying-americans-location-data-230835196.html?src=rss

March 18 at 23:08

Kagi Translate's AI answers the question "What would horny Margaret Thatcher say?"

Remember when it was fun to play around with LLMs?

March 18 at 22:06

Google developers find that with AI, judgment is more important than JavaScript

Companies like Google are using AI to take over the bulk of coding. This gives developers more decision-making and oversight responsibilities.

March 18 at 21:05

Nothing CEO Carl Pei says smartphone apps will disappear as AI agents take their place

Nothing CEO Carl Pei says AI agents will eventually replace apps, shifting smartphones toward systems that understand intent and act on a user's behalf.

March 18 at 20:30

Updated at: 20:39