Blogroll

Threads is adding a Grok-like AI search feature

Mashable - Wed, 05/13/2026 - 20:41

Meta is bringing its AI chatbot to Threads in a way that should feel familiar to anyone who has spent time on X.

According to Engadget, the company is testing a new feature that gives Meta AI a dedicated Threads account — @meta.ai — that users can tag in posts and replies to add additional context to the discussion. The premise is essentially the same as Grok on X, where tagging the bot to fact-check or contextualize a viral post has become its own genre of reply-guy behavior.

SEE ALSO: Meta finally adds direct messages to the web version of Threads

The feature is currently in early beta and rolling out first to users in Malaysia, Saudi Arabia, Mexico, Argentina, and Singapore, per Engadget.

Meta's own blog confirms the broader rollout ambitions, noting that @meta.ai mentions in Threads posts and replies are part of a wider push to bring its new Muse Spark model across WhatsApp, Instagram, Facebook, Messenger, and Threads — showing up in search bars, group chats, and posts.

For users who would rather not have an AI bot surfacing under their posts uninvited, Meta says the @meta.ai account can be muted and its replies hidden.

The Threads feature is part of a larger set of announcements around Meta's revamped AI push. The company is also testing "side chats" on WhatsApp, which let users privately query Meta AI for context on what's happening in a group conversation without the response being visible to the rest of the group — a meaningful distinction from the Threads version, where Meta AI's replies are public.

The Grok comparison is an obvious one, and not entirely flattering.

Grok has had a rough run on X, generating pro-Nazi content, producing sycophantic output about Elon Musk, and surfacing child abuse material. Meta has generally maintained tighter guardrails on its AI products than X has with Grok, but giving any AI chatbot this kind of public-facing visibility on a social platform invites the same potential for bad behavior, and it's worth watching as the rollout expands.

Categories: IT General, Technology

These 7 shows were canceled way too soon—here's where to watch them

How-To Geek - Wed, 05/13/2026 - 20:30

There's nothing worse than investing hours into a great new TV series only to have the rug pulled out from under you by a network or streaming service canceling it just when things are getting good. In the current age of streaming, though, it's not enough to have the adoration of fans and a Rotten Tomatoes score through the roof—massive production budgets, shrinking viewership, and corporate takeovers are becoming more of a death knell to great shows than any number of downturned thumbs.

Categories: IT General, Technology

Google could work with SpaceX to launch its orbital data centers

Mashable - Wed, 05/13/2026 - 19:46

Remember Elon Musk's plan to put AI data centers in space?

It appears companies are taking the idea seriously. And one of those companies is Google.

According to a new report in the Wall Street Journal, Google is in talks with Musk's space exploration company, SpaceX, to strike a deal that would put its data centers into orbit aboard SpaceX rockets.

When SpaceX acquired xAI, Musk's AI company, earlier this year, Musk penned a statement explaining why he decided to combine his companies.

One big reason? Data centers in space.

While it wasn't the only reason, it was a central one: SpaceX had recently filed with the Federal Communications Commission (FCC) seeking permission to launch "a million satellites" to put AI data centers into orbit.

"Current advances in AI are dependent on large terrestrial data centers, which require immense amounts of power and cooling," Musk explained at the time of the acquisition. "Global electricity demand for AI simply cannot be met with terrestrial solutions, even in the near term, without imposing hardship on communities and the environment. In the long term, space-based AI is obviously the only way to scale."

Google seems to agree with Musk.

Late last year, Google announced Project Suncatcher, an initiative to launch prototype satellites by 2027 in order to "one day scale machine learning compute in space."

Then, in February, just weeks after SpaceX's acquisition of xAI, Google CEO Sundar Pichai shared that the company was looking into its own orbital data centers. 

While speaking at the AI Impact Summit in New Delhi, India, Pichai recounted how when growing up in India, he never imagined he'd "one day be spending time with teams figuring out how to put data centers into space." 

While Google is still exploring rocket launch options with other companies, the search giant wouldn't be the first to partner with SpaceX in hopes of putting new AI data centers in space.

Last week, Anthropic and SpaceX announced a partnership to utilize xAI's data centers in Memphis, Tennessee. The deal also covers future space development.

A deal with Google would also be extremely beneficial to SpaceX right now, as the company plans its $1.75 trillion IPO in the coming months.

Categories: IT General, Technology

Floppy disks, burned CDs, and tape drives: How we survived before the cloud

How-To Geek - Wed, 05/13/2026 - 19:30

I don't know how it happened, but somehow I pay for four whole terabytes of cloud storage every month. Sure, it's shared with the entire household, but over the years I went from a few gigs of free cloud storage to a substantial annual fee for a big hard drive in the sky.

Categories: IT General, Technology

Slice over $300 off the Mammotion Luba mini 2 robot lawn mower and take back your time this season

Mashable - Wed, 05/13/2026 - 19:29

SAVE $309: As of May 13, the Mammotion Luba mini 2 robot lawn mower is on sale for only $1,899 and comes with a mini garage to protect it. A $2,208 value, that's a savings of 14% or $309.


If yard work is already dragging you down this season, it may be time to consider a robotic lawn mower. Similar to robot vacuums, these battery-powered, AI-enhanced mowers will take over your most annoying chore and give you hours of your time back every week. While they're not exactly new on the market, they are more advanced and accessible than ever — especially when you can find one on sale.

As of May 13, the Mammotion Luba mini 2 robot lawn mower is on sale for $1,899 at Amazon and comes with a mini garage for protection. A $2,208 value total, that's about 14% or $309 in savings. The mini garage is essentially a sleek, minimalist canopy to keep direct sunlight and heavy rain from damaging your mower.

The mini version of our friends at ZDNet's (also owned by Ziff Davis) favorite robot lawn mower, the Mammotion Luba mini 2 features 360-degree LiDAR and dual-camera AI Vision, which are both essential for object recognition and navigation precision. There are also dual cutting discs with automatic height adjustment, intelligent route planning, smart battery management, location tracking, and more.

Specs aren't everything when it comes to robot mowers, however. The most important thing is that it's a good fit for your yard. Considering this is a mini model, it can only provide coverage for up to 0.37 acres before needing a charge. Mammotion also says it's "designed for complex residential lawns," so it can climb steep slopes up to 80 percent grade and manage potholes and tough terrain better than most. If this sounds like your yard, the Luba mini 2 might be the model for you. And if it's a good fit, we recommend grabbing it while it's $309 cheaper.

Categories: IT General, Technology

mimalloc: A new, high-performance, scalable memory allocator for the modern era

Microsoft Research - Wed, 05/13/2026 - 19:19
At a glance
  • Today’s critical services and applications are often highly concurrent, using hundreds of threads. They also operate at large memory scales, frequently hundreds of gigabytes, especially when using large language models.
  • mimalloc is an open-source, modern, scalable memory allocator that is a drop-in replacement for malloc and free. It is relatively small (~12K lines), with clear internal data structures, and is easy to build and integrate into other projects. It provides bounded worst-case allocation times (up to OS primitives), bounded space overhead, low internal fragmentation, and minimal contention by relying almost exclusively on atomic operations.
  • mimalloc is available on GitHub and has over 12K stars.
mimalloc

At the RiSE group at Microsoft Research (MSR), we conduct fundamental research into formal methods, programming languages, and software engineering (including emerging agentic systems), with a particular focus on systems that can be provably correct, secure, and performant. The mimalloc memory allocator was initially designed in 2020 as a fast allocator for the state-of-the-art Lean and Koka programming languages developed at RiSE, both of which use novel compiler-guided reference counting (see Perceus).

The scalable design of mimalloc has also proved to work exceedingly well for large services at Microsoft. Through close cooperation with product teams, mimalloc has significantly improved the response times in services such as Bing. Today, mimalloc is widely used in large services and applications, both within and outside Microsoft. It serves as the allocator for NoGIL CPython 3.13+, is integrated into Unreal Engine, and is used in games such as Death Stranding.

The project is open source on GitHub, with over 12K stars. Its Rust wrapper alone sees over 100K downloads per day.

mimalloc is effective across a wide range of scenarios, from small-scale applications like Koka or Lean to large services with memory footprints exceeding 500 GiB and hundreds of threads.

Despite this range, the codebase remains compact, at around 12K lines of C. Reflecting its research origins, mimalloc emphasizes clear internal data structures with strong invariants, making it easier to understand and reason about than many industry allocators. As Fred Brooks remarked in his famous book The Mythical Man-Month: “Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t need your flowchart; it’ll be obvious.”

As a result, mimalloc has been ported to many platforms—Windows, macOS, Linux, FreeBSD, NetBSD, DragonFly, and various consoles—and is easy to build and integrate into other projects. For example, the clear data structures enabled Sam Gross and others to adopt mimalloc as the concurrent allocator for NoGIL CPython. The design also makes it relatively straightforward to implement cyclic garbage collection on top of it.

The Fast Path

As with other scalable allocators (such as tcmalloc and jemalloc), a core design principle of mimalloc is that each thread maintains its own thread-local heap, which we call a “theap”. Each theap owns a set of mimalloc “pages,” which are usually 64 KiB. Each mimalloc page contains blocks of a fixed size, organized into size classes to reduce internal fragmentation. By giving each thread its own theap and set of mimalloc pages, memory allocation and deallocation typically proceed without synchronization. Atomic operations are only required when a thread frees a block allocated by another thread.
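
As a rough sketch of how these pieces relate (the field and constant names here are illustrative simplifications, not mimalloc's actual definitions), the core data structures look something like this:

#include <stddef.h>
#include <stdint.h>

#define MI_SMALL_PAGES 129   // hypothetical count of small size classes

typedef struct mi_block_s {
  struct mi_block_s* next;              // free blocks form an intrusive singly-linked list
} mi_block_t;

typedef struct mi_page_s {
  mi_block_t*          free;            // blocks available for allocation
  mi_block_t*          local_free;      // blocks freed by the owning thread
  _Atomic(mi_block_t*) thread_free;     // blocks freed by other threads
  size_t               used;            // number of blocks currently in use
  uintptr_t            thread_id;       // id of the owning thread
} mi_page_t;

typedef struct mi_theap_s {
  mi_page_t* small_pages[MI_SMALL_PAGES];  // one current page per small size class
} mi_theap_t;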

Moreover, in practice, most allocations are quite small, often less than 1 KiB. For such small allocations, mimalloc provides a fast path where the main allocation function looks like:

void* mi_malloc( size_t size ) {
  mi_theap_t* const theap = mi_get_thread_local_theap();
  if (size > MI_MAX_SMALL_SIZE)
    return mi_malloc_generic(theap, size);   // slow generic path
  // round size up to a multiple of the word size to find the size class
  const size_t index = (size + sizeof(void*) - 1) / sizeof(void*);
  mi_page_t* const page = theap->small_pages[index];
  mi_block_t* const block = page->free;      // head of the free list
  if (block == NULL)
    return mi_malloc_generic(theap, size);   // slow generic path
  page->free = block->next;                  // pop the free list
  page->used++;
  return block;
}

By using thread-local theaps, we need no atomic operations or thread synchronization. We also try to minimize the number of branches. In particular, the thread-local theap is never NULL: we initialize it with a special empty theap whose pages are all empty, so we do not need a separate check for a NULL theap. Similarly, the pointers in the small_pages array are never NULL, and we again use special empty pages (with page->free == NULL) to avoid a separate check. Finally, pages are initialized with a free list rather than a separate bump pointer, avoiding special cases and enabling allocation by simply popping blocks from the free list. On x64, this code now translates into a few instructions with just two uncommon branches:

mi_malloc:
  movq  %rdi, %rsi                             ; rsi = size
  movq  _mi_theap_default@GOTTPOFF(%rip), %rax
  movq  %fs:(%rax), %rdi                       ; rdi = thread-local theap
  cmpq  $1024, %rsi                            ; size > MI_MAX_SMALL_SIZE?
  ja    .LBB0_generic
  leaq  7(%rsi), %rax                          ; round up to sizeof(void*)
  andq  $-8, %rax
  movq  232(%rdi,%rax), %rcx                   ; rcx = theap->small_pages[index]
  movq  8(%rcx), %rax                          ; rax = block = page->free
  testq %rax, %rax                             ; block == NULL?
  je    .LBB0_generic
  movq  (%rax), %rdx                           ; page->free = block->next
  movq  %rdx, 8(%rcx)
  incw  16(%rcx)                               ; page->used++
  retq
.LBB0_generic:
  jmp   _mi_malloc_generic@PLT                 ; tail call
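
The never-NULL trick described above can be sketched as follows (illustrative only, reusing the hypothetical structures from the earlier sketch; mimalloc's actual initialization differs in detail):

// One statically allocated empty page: its free list is NULL, so the
// fast path naturally falls through to mi_malloc_generic without a
// separate "is this theap initialized?" check.
static mi_page_t _mi_page_empty;    // zero-initialized, so free == NULL

static mi_theap_t _mi_theap_empty;  // the initial theap for every thread

static void mi_theap_empty_init(void) {
  for (size_t i = 0; i < MI_SMALL_PAGES; i++) {
    _mi_theap_empty.small_pages[i] = &_mi_page_empty;
  }
}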

Similarly, mimalloc provides a fast path for freeing blocks. In practice, most blocks are freed by the same thread that allocated the block. We can optimize that case by checking whether the current thread ID matches the thread ID stored in the corresponding mimalloc page. If so, we can just push our block on the page’s free list without requiring atomic operations or locks:

void mi_free(void* p) {
  mi_page_t* const page = mi_ptr_page(p);      // get the page meta-data that contains p
  if (page == NULL) return;
  if (mi_thread_id() == page->thread_id) {     // do we own this page?
    mi_block_t* const block = (mi_block_t*)p;
    block->next = page->local_free;            // push on the `local_free` list
    page->local_free = block;
    if (--page->used == 0) mi_page_free(page); // is the entire page free?
  }
  else {
    mi_free_cross_thread(page, p);             // free in a page owned by another thread
  }
}

The mi_ptr_page function in the latest mimalloc v3 retrieves page metadata using an on-demand allocated map of the entire address space. Earlier versions were faster here, using alignment tricks, but in practice invalid pointers are often passed to mi_free when overriding free globally.

Using a separate map enables such cases to be detected efficiently: the lookup returns NULL when the pointer is invalid. In particular, mi_ptr_page(NULL) == NULL, so a single test of the returned page covers the NULL-pointer case without an extra branch. Additionally, the used count efficiently detects when all blocks in a page have been freed.
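
A minimal sketch of such a pointer-to-page map (the names and the 64 KiB slice size are assumptions for illustration, not mimalloc's actual implementation):

#include <stdint.h>

#define MI_SLICE_SHIFT 16   // hypothetical: one map entry per 64 KiB address slice

// Allocated and committed on demand; entries for address ranges never
// handed out by the allocator stay NULL.
static mi_page_t** _mi_page_map;

static inline mi_page_t* mi_ptr_page(const void* p) {
  // Invalid pointers (including NULL) land on a NULL entry, which is
  // why mi_ptr_page(NULL) == NULL holds with no extra branch.
  return _mi_page_map[(uintptr_t)p >> MI_SLICE_SHIFT];
}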

When a block is freed across threads, we enter the mi_free_cross_thread function—the first path that requires atomic operations: 

void mi_free_cross_thread(mi_page_t* page, mi_block_t* block) {
  mi_block_t* tfree = mi_atomic_load(&page->thread_free);  // head of the thread-free list
  do {
    block->next = tfree;                                   // push our block in front
  } while (!mi_atomic_compare_and_swap(&page->thread_free, &tfree /*expect*/, block /*new*/));
}

The block can be freed by pushing it onto the thread-free list of the page. Since this is multi-threaded, it requires an atomic compare-and-swap operation to push the block atomically. Still, on modern hardware such operations are efficient when uncontended, as they are integrated with the cache coherence protocol (MOESI).

Free list mayhem

There are three free lists per page: the free list used for allocation, the local_free list for blocks freed by the owning thread, and the atomic thread_free list for blocks freed by other threads. Because freed blocks go onto local_free or thread_free rather than back onto the allocation free list, the free list is guaranteed to be exhausted after a fixed number of allocations, ensuring we occasionally take the slower generic allocation path. The generic path is also used to clean up the free lists by moving the thread-free and local free lists back to the main free list. (Note: the actual implementation requires more care to handle cases where the owning thread never allocates again or is blocked for a long time.)

Thus, mimalloc has three free lists per (64 KiB) mimalloc page, which means a program can easily have thousands of free lists in total. This is essential to the scalability and cache locality of mimalloc.
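
A simplified sketch of that cleanup step (not mimalloc's actual code; mi_atomic_exchange stands in for a C11 atomic exchange, and the real implementation is more careful):

// Move deferred frees back onto the allocation free list. The free list
// is empty when this runs; its exhaustion triggered the generic path.
static void mi_page_collect(mi_page_t* page) {
  // Atomically detach the cross-thread free list; concurrent frees by
  // other threads simply push onto the fresh, empty list.
  mi_block_t* head = mi_atomic_exchange(&page->thread_free, NULL);

  // Prepend the owning thread's local_free list (no atomics needed).
  if (page->local_free != NULL) {
    mi_block_t* last = page->local_free;
    while (last->next != NULL) last = last->next;
    last->next = head;
    head = page->local_free;
    page->local_free = NULL;
  }

  page->free = head;  // hand everything back to the allocation path
}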

Figure: a height-balanced tree (left) versus a randomized tree (right).

For this design, we took inspiration from randomized algorithms. For example, to balance a binary tree we can use smart strategies based on weight or depth, performing specific rotations to keep it balanced. Such algorithms are usually quite complicated. However, we can instead decide splits randomly during insertion, and by sheer chance end up with trees that are balanced enough.

Similarly, many multi-threaded allocators rely on sophisticated concurrent data structures to synchronize access to shared free lists. In contrast, mimalloc uses a per-page thread-free list, where any thread can push a block using a simple atomic compare-and-swap.

Because there are thousands of such lists, the probability that multiple threads concurrently free blocks to the same page is low. As a result, most push operations are uncontended atomic updates.

By organizing these lists per 64 KiB mimalloc page, cache locality is improved, as allocation tends to stay within the same page until it is full, regardless of freed objects in other pages.

In contrast, consider a design with a single free list per thread or process. When allocating a new structure while freeing objects of the same size—a common pattern in workloads such as tree transformations—allocation may reuse recently freed blocks scattered throughout memory, leading to reduced locality.

Sharing between threads

There is a fundamental tension between scalability and efficient memory sharing between threads. To scale optimally, we would give each thread exclusive ownership of its own pages to minimize thread synchronization. On the other hand, that may lead to wasted memory: suppose one thread has large quantities of free blocks and another thread needs to allocate blocks of that size – without being able to share or steal those pages, we need to allocate fresh memory instead. At the other extreme, we could share all pages between all threads behind a single lock: now memory use is optimal, but we no longer scale. The following benchmark results illustrate this tension:

Figure: three benchmark runs: the system allocator (1.1x commit, 56 GiB total); a highly concurrent allocator (4x commit, 262 GiB total); mimalloc (1.3x commit, 262 GiB total).

The benchmark runs many tasks for a fixed amount of time using the Windows thread pool with about 800 active threads. The tasks alternate between allocation, deallocation, and brief blocking periods, simulating typical service workloads. In the graphs, the blue line represents the total live data, while the red line represents the total memory committed by the allocator. The ideal situation is to have the red line as close as possible to the blue line. This is almost the case for the first graph, which uses the standard system allocator: at the end there is just 1.1x more committed than live data – an excellent result! However, over the benchmark duration, it allocated a total of only 56 GiB of data.

Contrast that with another highly concurrent allocator in the second graph, which was able to allocate 262 GiB over the benchmark duration—almost 4x as much. However, it also committed 4x more memory than the live data. In real workloads with larger memory footprints, such a ratio can quickly become unacceptable. Here we see that the standard allocator didn’t scale as well, but showed better cross-thread memory sharing.

The final graph shows the most recent mimalloc allocator. Like the second allocator, it allocates 262 GiB over the benchmark duration, while reducing committed memory to 1.3x the live data, achieving both scalability and efficient memory sharing between threads. Similar to work-stealing in modern thread pool implementations, mimalloc uses a “page stealing” technique, allowing threads to take ownership of pages without expensive cross-thread synchronization.

These improvements were made in close collaboration with the Azure Cosmos DB team at Microsoft. A precise description is beyond the scope of this blog, but we will publish a technical report soon—stay tuned.



Categories: Microsoft

Home Depot launches a new compact tool line to take on Ryobi and Milwaukee

How-To Geek - Wed, 05/13/2026 - 19:06

In partnership with The Home Depot, RIDGID has announced an all-new, powerful yet compact cordless 18V power tool lineup to take on Ryobi, Milwaukee, DeWALT, and more. It's called the RIDGID NUKE Subcompact.

Categories: IT General, Technology

It's Masturbation May — here are the best deals I've found so far

Mashable - Wed, 05/13/2026 - 18:57
Best Masturbation May sales 2026 at a glance:
  • Babeland: up to 69% off select toys
  • Bellesa: up to 60% off with code 60MAY
  • Lovehoney: up to 50% off select toys

May is the best time to focus on yourself. The weather's warming up, some of you are graduating or starting new chapters, and the vibes (pun intended) are just generally good. It's no wonder Good Vibrations established National Masturbation Month in May 1995. (This was shortly after Bill Clinton, of all people, fired U.S. Surgeon General Dr. Joycelyn Elders for suggesting masturbation should be taught in sex ed. High school probably would have been a lot easier if it were, but I digress.)

SEE ALSO: I've tested 100+ sex toys. Here are the 15 most mind-blowing toys I've ever owned.

If you're looking for an excuse to buy a new sex toy ("just because" is also a very valid reason!), I've got more than 15 of them below. In honor of Masturbation May, I've tracked down all the best deals you can shop right now. From vibrators and dildos to clitoral suction devices and male sex toys, this is a comprehensive list of where to get your rocks off for less.

Best overall: Namii 2, $119 at biird (normally $159.99, save $40.99)

Why we like it

OK, story time: About seven months ago, biird (one of my all-time favorite sex toy brands) removed the original Namii from its website. No warning, no email to explain why. The rumors were out there, of course, citing "legal reasons," but that's all I heard. Well, the drought is over. The Namii 2 (the upgraded version of the original) is available not only on the biird site but on Hello Nancy, too — and it's on sale! (Side note: Hello Nancy also carries my second-favorite clitoral suction toy, so it's a great place to browse.)

The Namii 2 will change your life. It feels like butterfly kisses on your vulva, and you'll never want to put it down. I can literally play with this toy for hours (or at least until it dies). It's a wonderful way to get wet before sex or to just enjoy yourself in the moment for a slow-burn solo sesh. Right now, you can get it for $119 (a little steep, I know, but trust me, it's worth it).

Honorable mention: Foria Intimacy Melts with CBD, $16 at Foria (normally $20, save $4)

Why we like it

This isn't a sex toy, but it did help me a lot before my second endometriosis surgery. If you haven't heard of Foria, you definitely need to read up. The brand specializes in creating plant-based products (read: cannabis) specifically for people with pelvic pain.

Foria has helped me through so many painful moments and genuinely helped me fall back in love with intimacy. When the company first started advertising, I decided to try their Intimacy Melts with CBD, and they literally melted my pelvic and back pain away enough for me to relax for penetration and intimate touch in general. Right now, you can get them (plus everything else on their site) for 25% off, no promo code needed.

More Masturbation May deals you should know about
Categories: IT General, Technology

Everything we know about Marvel's VisionQuest

Mashable - Wed, 05/13/2026 - 18:47

The MCU has put Vision (Paul Bettany) through the wringer.

He died (twice!) in Avengers: Infinity War. Then, his grieving wife Wanda Maximoff (Elizabeth Olsen) resurrected him and threw him into WandaVision's many sitcom parodies. As if that weren't enough, WandaVision also introduced White Vision, who was created by S.W.O.R.D. to kill both Vision and Wanda.

SEE ALSO: 'Spider-Man: Brand New Day' trailer: Tom Holland yearns for Zendaya in action-packed first look

Instead of destroying each other, the Visions had a philosophical discussion about identity, after which resurrected Vision passed his memories and those of the original Vision on to White Vision. Now viewing himself as Vision, instead of S.W.O.R.D.'s weapon, White Vision flew off, never to be seen again in the MCU... until now.

Vision returns in Marvel's VisionQuest, coming this fall to Disney+. Here's everything you need to know about the upcoming series, from its plot to its release date.

What is VisionQuest about?

White Vision's identity crisis continues in VisionQuest, which sees him struggling to connect with the memories he gained at the end of WandaVision. As he goes on a reality-bending journey to understand who he truly is, he encounters personified versions of other programs that Tony Stark (Robert Downey Jr.) created. Footage from Marvel's 2025 New York Comic Con panel revealed these programs to include Henry Lewis as Dum-E, Jonathan Sayer as U, James D'Arcy as J.A.R.V.I.S., Orla Brady as F.R.I.D.A.Y., and Emily Hampshire as E.D.I.T.H.

James Spader returns to the MCU as Ultron in VisionQuest.

These programs aren't the only AI popping up in VisionQuest. Avengers villain Ultron (James Spader) is also back, albeit in a very different way. Spader isn't voicing Ultron's terrifying robotic body anymore. Instead, he'll be appearing in the flesh. In VisionQuest footage shown during Disney's 2026 Upfront presentation, the series' take on Ultron seems more like a fatherly mentor figure than a sinister adversary. Perhaps he's just a figment of Vision's imagination, conjured up to guide him through his memories.

Who is in VisionQuest?

In addition to Paul Bettany and James Spader, VisionQuest also stars Todd Stashwick, T'Nia Miller, Emily Hampshire, Orla Brady, Henry Lewis, Jonathan Sayer, and James D'Arcy.

Ruaridh Mollica also stars in a role that will get WandaVision fans excited. He plays Tommy Maximoff, Wanda and Vision's speedster son. His twin brother Billy, also known as Wiccan, appeared in Agatha All Along, played by Joe Locke.

What is VisionQuest's release date?

VisionQuest hits Disney+ Oct. 14, 2026.

How to watch: VisionQuest premieres Oct. 14 on Disney+.

Categories: IT General, Technology

Google’s answer to MacBooks sounds amazing

Mashable - Wed, 05/13/2026 - 18:44

The name might be a mouthful of Os, but Googlebooks could be Google’s biggest push yet into premium laptops. The devices are expected to compete directly with Apple MacBooks and Microsoft Surface hardware.

Categories: IT General, Technology

Move quickly to get the Blueair Mini Restful air purifier and sunrise alarm clock for under $60 at Woot

Mashable - Wed, 05/13/2026 - 18:43

SAVE $140: The Blueair Mini Restful air purifier and sunrise alarm clock is on sale at Woot for $59.99, down from the normal price of $199.99. That's a 70% discount.


It's a rough time to consider buying yourself something nice as a little treat. Prices for nearly everything are rising and plenty of us are focusing on saving instead of spending. But if you're looking for a small self-care treat, check out this deal at Woot.

As of May 13, the Blueair Mini Restful air purifier and sunrise alarm clock is on sale at Woot for $59.99, down from the normal price of $199.99. That's a 70% discount or a savings of $140.

If your bedroom could use some extra air purification, today's deal at Woot on the Blueair Mini Restful is your sign to make the upgrade. Not only is this an air purifier, it's also a bedside lamp with adjustable light control. Since it's dimmable, it's a great light to have on while getting ready for bed. That same light can serve as a sunrise alarm come morning. Just set your wakeup time in the app and the Blueair will gently simulate sunrise 15 to 30 minutes before your alarm.

SEE ALSO: With this $199.99 Amazon deal, you could get two Shark TurboBlade fans for the price of one Dyson fan

As if that wasn't enough, there's also the ability to play soothing sounds from the Blueair during the sunrise simulation. Plus, there's a USB-C port for recharging your phone or earbuds on the nightstand while you sleep.

The Blueair Mini Restful uses a HEPA filter that's said to remove up to 99.97% of airborne allergens like dander, dust, and pollen. In a 140-square-foot room, it'll be able to refresh the air in under 13 minutes.

Perfect for creating your bedroom sanctuary or for adding to the nursery, the Blueair Mini Restful is a versatile air purifier, light, and sunrise alarm clock. Snag it from Woot while it's just $59.99. But keep in mind Woot deals tend to sell out quickly, so buy this one now if you're interested.

Categories: IT General, Technology

7 ways an HD Blu-ray is better than 4K streaming

How-To Geek - Wed, 05/13/2026 - 18:40

Let's be honest—physical media are on life support. Clearly, people prefer the convenience of streaming services, and that's perfectly understandable, but it also means a huge number of TV and film fans have forgotten (or never knew) how much better even standard HD Blu-rays can be.

Categories: IT General, Technology

Googlebook's Intel partnership could put Microsoft in a tough spot and challenge Apple too

How-To Geek - Wed, 05/13/2026 - 18:34

The newly announced Googlebook laptops are starting to look far bigger than just another Chromebook experiment. While Google did not reveal the core hardware specs, Intel has now announced its partnership with the platform, calling them “premium, powerful devices designed for intelligence”. Notably, in a separate interview, Google VP John Maletis confirmed that Googlebooks will also ship with Qualcomm and MediaTek processors.

Categories: IT General, Technology

Go big with the Jackery HomePower 3000 portable power station while it's more than 50% off at Amazon

Mashable - Wed, 05/13/2026 - 18:32

SAVE $1,330: The Jackery HomePower 3000 is on sale at Amazon for $1,169, down from the normal price of $2,499. That's a 53% discount that matches the record-low at Amazon.


As the country heads into summer, plenty of us are making plans to enjoy the nice weather. But for those who live in an area prone to hurricanes, it's a good time to prep any items you might want to have on hand before a storm rolls in. If you've been eyeing a portable power station, there's a huge sale on a big model at Amazon.

As of May 13, the Jackery HomePower 3000 is on sale at Amazon for $1,169, marked down from the list price of $2,499. That's a 53% discount that shaves a major $1,330 off the price. It also matches the record-low at Amazon.

If you're in the market for a portable power station that's up for taking over during an outage, consider the 3,072Wh the Jackery HomePower 3000 offers. Like the name suggests, it's designed for keeping your home powered when grid power cuts out.

In addition to the convenience of over 3,000Wh capacity, it has a steady output of 3,600W and a surge capacity all the way up to 7,200W. According to Jackery, this model is ideal for power outages that last up to two days, keeping essential appliances like the refrigerator powered up. Jackery lists the HomePower 3000 as capable of running a WiFi router for over 65 hours, a fan for 60 hours, or the fridge for one to two days.

SEE ALSO: The Anker Solix F3800 portable power station is $2,000 off — score a free solar panel right now

If you head out on RV trips, the TT-30 port will be an asset, as is the ability for the station to recharge with up to 500W of solar panels. With standard AC recharging, expect to get back to a full charge in 2.2 hours. This model also has Jackery's ZeroDrain technology which means it'll retain 95% of its charge even if it's left sitting for a year.

Before a summer storm brews and threatens to cut power, get the Jackery HomePower 3000. It'll be ready to keep essentials powered up while waiting for the grid to return.

Categories: IT General, Technology

Don't pay for an AI coding assistant until you've tried running one locally

How-To Geek - Wed, 05/13/2026 - 18:30

Models aren't good enough to work on their own, especially with all the nuance in bigger projects. Even while avoiding usage caps on Antigravity, you still need to give it your whole project for it to work. Basically, it feels like the only way to get a reliable AI agent for programming is through a paid extension. However, instead of paying someone else, you should run your own AI.

Categories: IT General, Technology

Google's big Android update goes all-in on AI: Everything to know

Mashable - Wed, 05/13/2026 - 18:20

Gemini Intelligence is designed to handle tasks across your apps and be a more helpful AI agent. It could signal a wider shift in how we'll use our phones in the near future.

Categories: IT General, Technology

5 exciting streaming shows that feel like those summer blockbusters we all used to love

How-To Geek - Wed, 05/13/2026 - 18:15

There are times in our lives when we are consumed by the media. While it’s not always the case, a big thing media lovers tend to have fond memories of are those summer blockbusters—you know the ones. Whether it's high-stakes action, funny comedies, or fantasies beyond your wildest imagination, these movies made us smile, laugh, and create fond memories with loved ones during those summer days.

Categories: IT General, Technology

This huge Ryobi deal gets you 6 tools for $299 at Home Depot

How-To Geek - Wed, 05/13/2026 - 18:09

If you have DIY projects around the house that need attention, now is the perfect time to buy some new tools. Ryobi fans will want to run to a nearby Home Depot for its spring sale and grab its latest deal that gets you six power tools for under $300.

Categories: IT General, Technology

This forgotten port was USB before USB existed

How-To Geek - Wed, 05/13/2026 - 18:00

Long before everyone got used to plugging things into a familiar rectangular port (and flipping the cable around three times before getting it right), there was another connector quietly doing most of the heavy lifting. The serial port wasn't pretty, it wasn't fast, and it definitely wasn't friendly, but for decades, it was the way you connected almost anything that wasn't already inside your computer case.

Categories: IT General, Technology

GridSFM: A new, small foundation model for the electric grid

Microsoft Research - Wed, 05/13/2026 - 18:00
Microsoft releases a lightweight foundation model that can predict AC optimal power flow in milliseconds, boosting efficiency and unlocking cost savings in grid analysis.

At a glance
  • Microsoft introduces GridSFM, a small foundation model that approximates AC optimal power flow in milliseconds, unlocking decisions that can directly impact up to $20B/year in congestion losses and 3.4 TWh of renewable curtailment.
  • Beyond estimating generator dispatch and costs, GridSFM produces full AC system states, giving operators direct visibility into congestion, stability, and overall system health.
  • It provides a foundation for the community to build advanced power grid simulators and planning tools without recreating data or models from scratch.

Microsoft introduces GridSFM, a small foundation model for solving AC optimal power flow (AC-OPF) problems in transmission power grids. This follows our earlier release of a U.S.-based open transmission-topology dataset that powers GridSFM.

Power grids face increasing strain from surging demand, the need to integrate renewable energy sources, transportation electrification, and extreme weather events. Across all these challenges, the core question is the same: what are the optimal operating points that keep the grid functioning under each new condition?

Answering this requires solving AC optimal power flow (AC-OPF), a complex, non-convex optimization problem that computes the cheapest generator dispatch (how much each generator produces) that meets demand while respecting power flow physics, voltage limits, thermal constraints, and stability requirements. AC-OPF underpins core power system operations, including reliability, real-time dispatch, market clearing, and contingency analysis. These decisions directly govern outcomes at the scale of up to $20 billion per year in congestion costs and multi-terawatt-hour renewable curtailment (renewable energy lost due to congestion), making both economic efficiency and grid reliability highly sensitive to how well these operating points are found. However, AC-OPF is computationally expensive: a utility-scale grid can take hours to solve, forcing a trade-off between solving a small number of carefully selected scenarios and relying on approximations that ignore critical physics, which can misestimate power flows and binding constraints and lead to suboptimal dispatch and degraded reliability under stressed conditions.
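
For readers new to the problem, a standard textbook statement of AC-OPF (simplified, in our notation; production formulations carry more detail) is:

$$
\begin{aligned}
\min_{p_g,\,q_g,\,V}\quad & \sum_{g} c_g(p_g) && \text{(total generation cost)} \\
\text{s.t.}\quad & p_i + j\,q_i = V_i \sum_{k} Y_{ik}^{*} V_k^{*} && \text{(AC power balance at each bus } i\text{)} \\
& \underline{V}_i \le |V_i| \le \overline{V}_i && \text{(voltage limits)} \\
& \underline{p}_g \le p_g \le \overline{p}_g,\quad \underline{q}_g \le q_g \le \overline{q}_g && \text{(generator limits)} \\
& |S_{ik}| \le \overline{S}_{ik} && \text{(thermal line limits)}
\end{aligned}
$$

Here $V_i$ are complex bus voltages, $Y$ is the bus admittance matrix, and the net injection $p_i + j q_i$ is generation minus load at bus $i$. The product of voltage variables in the power-balance constraint is what makes the problem non-convex.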


To address this limitation, we introduce GridSFM, a single neural network that approximates AC‑OPF in milliseconds across grids ranging from 500 to 80,000 buses. It takes standard AC‑OPF inputs (grid topology, generator and load specifications, transmission line constraints) and produces an operating point and a feasibility verdict (whether the system satisfies all physical and operational constraints). By removing the compute bottleneck, GridSFM makes it possible to evaluate orders of magnitude more scenarios in real time, enabling more informed decisions and shifting grid operations from reactive response to proactive optimization.

In this initial release we offer two tiers:

  • GridSFM-Open for research-scale grids up to 4,000 buses.
  • GridSFM-Premier for production-scale systems up to 80,000 buses.

The model is built as a block-structured discrete neural operator (Figure 1), representing each grid as a directed graph, with buses (connection points in the grid) and generators as vertices, and transmission and AC lines as edges. It is trained using both solver supervision, where reference solutions are generated with the AC-OPF solver (IPOPT in PowerModels.jl), and physics-based constraints that penalize violations of fundamental physical laws such as Kirchhoff’s voltage and current laws, as well as operating constraints like thermal limits. This enables the model to learn from both feasible and infeasible regimes. Most learning-based AC-OPF surrogates train one model per grid on a narrow distribution. GridSFM takes the opposite approach: in this release, a single model is trained across 150+ base grid topologies (network structures) and roughly half a million scenarios spanning varying load profiles, multi-element outages, line-rating derates, voltage-bound tightening, and different generator cost coefficients, so the model is forced to generalize rather than memorize. Across the 54-grid mix of test scenarios for GridSFM-Open, the model achieves a median cost gap of 2.23% versus solver ground-truth labels (mean 3.41%; <5% gap on 83% of scenarios). When more precision is needed, GridSFM’s prediction also serves as a warm-start seed for traditional numerical solvers: a GridSFM-seeded warm start beats a cold solve by a 1.66× geometric mean across the same test scenarios and beats the industry-standard DC-OPF warm start by a 1.59× geomean (per-grid breakdown and full white-paper analysis to follow). The geometric mean, otherwise known as the multiplicative average, is used here since it is more robust to outliers. The model also demonstrates the ability to adapt to new grids with just a handful of fine-tuning scenarios.
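
For concreteness (our notation, not the paper's): the per-scenario warm-start speedup and its geometric mean over $n$ scenarios are

$$
s_i = \frac{t_i^{\text{cold}}}{t_i^{\text{warm}}},\qquad
\operatorname{geomean}(s_1,\dots,s_n) = \Big(\prod_{i=1}^{n} s_i\Big)^{1/n} = \exp\!\Big(\tfrac{1}{n}\sum_{i=1}^{n}\ln s_i\Big),
$$

so a single extreme scenario shifts the result far less than it would shift the arithmetic mean.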

Figure 1. GridSFM architecture. Bus, generator, and branch features are embedded into a shared latent space, then refined by a stack of attention blocks operating directly on the grid topology. Output heads decode the latent state into (i) a full AC-OPF operating point (bus voltages and angles, generator dispatch, branch flows) and (ii) a per-scenario feasibility score.

What it enables

A common pattern in grid operations and planning is having to choose between solving a small, hand-picked set of scenarios accurately with full AC-OPF or running thousands of scenarios through a faster approximation that drops parts of the physics. For example, a commonly used tool is the DC-OPF approximation, a linearized version that assumes flat voltage magnitudes and small angle differences and ignores reactive power and losses. DC-approximation solves in seconds what takes full AC minutes to hours, which is why most contingency screens, market-clearing pre-stages, and planning sweeps run on DC-approximation today. The cost is real: DC-approximation ignores voltage and reactive constraints entirely, and its dispatch cost can run >10% off the AC optimum on stressed scenarios (with worst-case grids out past 20% in our test benchmark).
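
For reference, the textbook DC approximation (our summary, not taken from the post) keeps only a linear relation between active-power flows and voltage angles:

$$
P_{ik} \approx \frac{\theta_i - \theta_k}{x_{ik}}, \qquad |V_i| \equiv 1\ \text{p.u.}, \qquad q \equiv 0,
$$

where $\theta_i$ is the voltage angle at bus $i$ and $x_{ik}$ the line reactance; losses, voltage magnitudes, and reactive power drop out entirely, which is precisely the physics a full AC operating point restores.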

GridSFM is designed as a drop-in alternative to the DC approximation in that fast-approximation slot. Unlike most existing AC-OPF neural surrogates, which require a fresh training run for every new topology, GridSFM generalizes across grids in its supported size range without per-topology retraining, so it slots in as universally as DC-approximation does. Compared with DC-OPF, GridSFM has three concrete advantages:

  • Same accuracy class as DC-approximation on standalone dispatch cost. GridSFM and DC fall within the same per-scenario cost-gap distribution (§2 / Figure 6), with complementary failure modes: DC fails on grids where its no-loss / no-reactive linearization is structurally wrong; GridSFM fails on grids outside its training distribution. The two limitations close along orthogonal axes. DC’s ceiling is fixed by the linearization, whereas GridSFM’s tail closes with more training data.
  • 1,000× faster than a full AC solver and approximately 100× faster than DC-approximation at the inference step, fast enough to sweep thousands of contingencies (e.g., line or generator outages) in minutes on a single commodity GPU.
  • A real AC operating point, not a linear approximation. GridSFM produces voltages and reactive power, so the same prediction can be handed to a traditional numerical solver as an AC warm-start, opening a workflow DC-approximation cannot.
1. Feasibility screening: stress-score triage

A scenario is infeasible when no dispatch satisfies all constraints simultaneously: the requested load cannot be served within voltage bounds, thermal limits, or generator capacities. Operationally, infeasibility is the most consequential failure signal: the requested operating condition cannot be served at all, and the response is intervention (load shedding, redispatch, relaxing thermal limits). It is also the most expensive class of scenario to screen, because the solver only learns a scenario is infeasible after iterating to non-convergence: each infeasible case costs a full solver run, often longer than a feasible one. Sweeping thousands of contingencies or stress cases to identify the infeasible ones therefore carries one of the worst-case compute budgets in any planning workflow.

GridSFM addresses this with a per-scenario stress score trained jointly with the dispatch head. We evaluate the score on three classes of scenarios on each grid: real-feas are scenarios the AC-OPF solver successfully converged on (i.e., genuinely feasible operating points), real-infeas are scenarios the solver failed to converge on (genuinely infeasible operating points), and synth-infeas are feasible base points we deliberately perturbed to violate a specific constraint (voltage squeeze, thermal bottleneck, angle tightening, or DC-thermal congestion). Across the 54-grid test scenarios, the stress score’s per-grid binary accuracy is broadly uniform across classes: real-feas (green) mean 94.5%, real-infeas (red) mean 96.1%, synth-infeas (orange) mean 90.4%. Most grids cluster within a few points of the means; outliers below 80% are the same hard grids that show up in cost-gap analysis below.

Figure 2. GridSFM per-grid feasibility prediction accuracy across the 54-grid test scenarios, broken out by class (real-feas, real-infeas, synth-infeas). Filled KDE + per-grid dots, with mean (–) and median (:) light dashed lines. The three distributions overlap heavily; the model’s quality is broadly uniform across classes, with a small failing tail of structurally hard grids.

Drilling into a case study. Let’s zoom into a single representative grid, the Texas2k summer-peak grid, to show how the learned representation separates the feasibility classes and how well the stress score predicts them (ROC).

Representation. Figure 3 visualizes the model’s learned representation of each Texas2k scenario. We project the per-graph representation (128-dimensional) onto two axes (LD1, LD2) chosen to maximally separate the scenario classes: real-feasible, real-infeasible, and synthetic-infeasible. Squeezing 128 dimensions into 2 inevitably loses information, so this view exaggerates apparent overlap: classes that look mixed here may still be cleanly separable in the full 128-dimensional space the model uses. The shaded cloud shows where graphs of each class concentrate, and the cross at the center of each cloud marks the class centroid, the average position of all graphs of that class. Centroids that sit far apart mean the model treats those classes as clearly distinguishable. Where two shaded clouds overlap, the model is producing similar embeddings for graphs with different labels.

Figure 3. Linear discriminant projection of grid embeddings on the Texas2k scenarios. Real feasibles (green), real infeasibles (red), and synthetic infeasibles (orange), projected onto two axes (LD1, LD2) chosen to maximize between-class separation. Crosses mark class centroids; shaded clouds show where each class concentrates. Overlap between clouds means the model produces similar embeddings for graphs in those classes; in the full 128-dimensional space the model may still separate them along directions not shown.

Operation and ROC. The score itself is continuous and ranking-calibrated. Figure 4 shows the ROC over its test mix: AUC = 0.986. At the natural operating point the same score, thresholded as a binary classifier, yields 95.5% accuracy. Per-mode detection at that threshold is 99–100% on the three perturbation modes that drive a constraint cleanly past its limit.

Figure 4. ROC curve of the GridSFM stress score for feasibility on the Texas2k summer-peak test mix (real feasibles + solver-labeled infeasibles + synthetic perturbation modes that drive a constraint past its limit). Area under the curve = 0.986, binary accuracy 95.5% at the natural operating point. The score is calibrated for ranking; where to draw the binary cutoff is an operator choice. 

Triage cutoff. For routing scenarios into action buckets, Figure 5 shows the stress-score distribution per population. Operators pick the cutoff that matches their workflow: very-confident feasibles pass through to indicative dispatch; very-confident-stressed scenarios are flagged for engineering review; the borderline middle band is sent to the solver for verification. The cutoff sets the balance between solver budget and screening miss-rate.

Figure 5. Distribution of the model’s feasibility logit on the same Texas2k test scenarios, split by population: real-feasibles (green), real-infeasibles (red), and synth-infeasibles (orange). The dashed vertical line is the decision boundary where logit = 0. Samples to the right are predicted feasible. At this operating threshold, real-feasibles pass through at 99.5%, real-infeasibles are correctly flagged at 90.4%, and the synthetic perturbations are caught at 88–100%.

2. GridSFM as a fast approximation

GridSFM’s prediction can be used in two ways short of producing an exact AC-OPF solution from scratch: as a standalone dispatch and cost estimate, or as the initial guess (warm start) for an exact numerical solver. We compare both against the same two reference points throughout: full AC-OPF (the ground-truth optimum) and DC-approximation (the established fast baseline). All numbers below come from the same 54-grid test set of GridSFM-Open scenarios, with solver solve_time measured per scenario under single-core CPU pinning.

Standalone cost estimate

When an exact solver round-trip is not required, GridSFM’s predicted dispatch can be costed directly. In our test set, GridSFM-Open and DC-approximation fall in the same accuracy class: comparable means (DC 2.80%, GridSFM 3.41%), comparable medians (DC 1.81% vs GridSFM 2.23%), and overlapping per-scenario distributions across two decades of cost gap (Figure 6). They have complementary failure modes rather than one dominating the other.

Figure 6. Per-scenario cost-gap distribution from AC-OPF ground truth: DC-approximation (blue) and GridSFM (green) across the 54-grid GridSFM-Open benchmark. Filled KDE + per-scenario dots underneath; light dashed lines mark mean (–) and median (:). DC: mean 2.8%, median 1.81%, <5% gap on 90% of scenarios. GridSFM: mean 3.41%, median 2.23%, <5% gap on 90% of scenarios. The two distributions overlap heavily in the body — methods are in the same accuracy class with complementary failure modes. Reference dashed line at 5%.

Both distributions look the same in shape: a single peak in the 2–3% gap range, with the bulk of scenarios under 5% and a small tail of outliers extending out into the >25% range. The outlier tails come from different sources: DC fails on grids where its no-reactive linearization is structurally wrong (case1803_snem and a handful of meshed transmission grids); GridSFM’s outliers are concentrated on a few of our open-sourced grids whose AC-OPF reference itself required additional constraint relaxation to become feasible, so the ground-truth target on those grids is noisier and the gap partly reflects reference-side instability. The two limitations close along orthogonal axes: DC’s ceiling is fixed by the linearization and does not improve with more data or compute; GridSFM’s tail closes with cleaner reference labels and more training data on those grid families.

The differentiating value of GridSFM is therefore not the standalone cost number, but that GridSFM produces a full AC operating point, including voltages and reactive power. This allows operators to directly assess the state of the grid, which matters because the feasibility and security of a system are often determined by the voltage and reactive power limits, neither of which is considered in DC-OPF. At the same time, the operating point also enables the warm-start workflow, as we describe next.

Warm-start handoff

An AC-OPF solver works by iteratively refining an initial guess of the operating point until the optimality conditions are satisfied, and the number of refinement iterations it needs depends directly on how close the initial guess is to the true optimum: a poor starting point can require thousands of iterations, a near-optimal one only a couple. A cold start (also known as a flat start) sets the voltage magnitude to 1.0 per unit and the angle to zero on every bus, so the solver does the full amount of work. A warm start replaces that generic guess with a closer estimate to make the solver converge faster. DC-approximation warm-start solves the linearized DC-OPF version of the problem first and seeds the AC solver with that solution. GridSFM warm-start instead runs a single forward pass through the model and seeds the solver with its predicted voltage angles and active dispatch. The absolute ceiling on how much any warm start can help is what we call the GT (ground-truth) ceiling: we run the full AC-OPF solve once at high precision to find the true optimum, then re-run the solver with that exact solution as the warm-start seed. This is the practical limit on solving time and therefore the ceiling on speedup.

Figure 7. Warm-start speedup over AC-OPF cold start, across the 54-grid test set (log-scale x axis). GridSFM (green, sits cleanly right of the cold-start reference) achieves a geomean speedup of 1.66× and outperforms cold start on 41 of 54 grids; DC-approximation (blue) achieves a geomean speedup of 1.04× and improves performance on 34 of 54 grids; the GT ceiling (gold, geomean 2.72×) is the upper bound on warm-start headroom. Each method’s ratio is computed within the same Julia process to remove cross-run timing noise.

Our profiling showed that GridSFM warm-start is 1.66× faster than cold start and 1.59× faster than DC-approximation warm-start (geometric means across the 54-grid test scenarios), and is faster than both baselines on 41 of 54 grids. The largest per-grid speedups exceed 7× over cold start on the meshed transmission grids (Texas2k summer-peak, case2742_goc). DC-approximation warm-start, by contrast, is a wash on average across this broader grid mix (geomean 1.04× vs. cold start): DC saves AC iterations on some grids and spends them rebuilding voltage and reactive-power state on others.

The gap between the GridSFM distribution in Figure 7 and the GT-ceiling distribution (2.72× geomean) can be closed by improving GridSFM’s residual reactive-power and voltage prediction error, both targeted by the next release.

Generalization

We tested whether GridSFM-Open acts like a true foundation model by running it on a grid it had never seen before: the 6,470-bus case6470_rte from OPFData, about 1.4× larger than any grid in training.

In a zero-shot setting, performance drops as expected. Cost error increases from 3.35% in-sample to about 14% on the new grid. Voltage predictions capture only about 27% of the true variation and appear nearly flat. The feasibility classifier flags every scenario as infeasible. Even so, the model still preserves the correct ordering of costs across scenarios.

With light fine-tuning, performance recovers quickly. After 10 epochs on 1,000 scenarios, cost error falls to 1.12%, voltage variation reaches 91% of the true signal, and feasibility detection becomes nearly perfect. An N-1 contingency split that was fully held out during fine-tuning matches the full-topology results within 0.2 percentage points on all metrics, showing that adaptation transfers across contingencies.

The model adapts even with very limited data. With just 10 scenarios, cost errors are 1.76% and feasibility detection exceeds 90%, with strong results already on cost and active power dispatch. Voltage magnitude is slower to recover and needs closer to 1,000 scenarios (see Table 1).

This test showed that GridSFM-Open already captures AC-OPF physics during pre-training. Adapting to a new grid is mostly a matter of calibration rather than relearning. The released checkpoint can therefore serve as a practical starting point for users to fine-tune on their own topology and tasks.

Fine-tune scenarios    Cost error    Feasibility detection
0 (zero-shot)          14%           0 (collapsed)
10                     1.76%         92%
100                    0.88%         97%
1,000                  1.12%         99%

Table 1: Few-shot fine-tuning of GridSFM-Open on case6470_rte (held-out test split, 10 epochs per row): even ~10 scenarios already give useful cost and feasibility predictions.

Looking ahead

Active directions for the next release:

  • Generalization. Tighter accuracy on grids and operating conditions outside the training mix. The current out-of-distribution analysis is in the white paper.
  • Continued accuracy improvements across all prediction channels, narrowing the residual gap between Figure 7’s GridSFM distribution and the gold GT-ceiling.
  • Multi-snapshot extensions. Unit commitment (discrete on/off generator decisions across time), weather-conditioned scenario generation, dynamic-stability surrogates.

We previously released the GridSFM_US_Powergrid_dataset. This release adds the first open AC-OPF model that supports multiple grid topologies, completing a stack of open topology data, open code, and open weights for ML-driven grid simulation and planning. We see it as a starting point for the community to build richer simulators, planning workflows, and decision-support tools without re-creating the data or the model from scratch. The applications we expect to benefit most are the ones where the cost of a single solve has historically forced cherry-picking: contingency screening, transmission expansion planning, demand-siting analysis, and resilience studies under extreme weather.

Everything in the GridSFM-Open tier is released for research use today:

GitHub Hugging Face White Paper Project Page

A note on GridSFM-Premier. The larger production-scale tier is not part of this open release. If you are interested in evaluating it, collaborating with us, or otherwise getting access, please contact us at gridFM@microsoft.com.



Categories: Microsoft