Final Cut Pro uses Apple’s latest chips to improve face and object tracking

Following the recent launch of the new M3-equipped MacBook Pros, Apple will soon be releasing an update for its Final Cut Pro to make further use of its own silicon. According to the company, its updated video editing suite will leverage a new machine learning model for improved results with object and face tracking. Additionally, H.264 and HEVC encoding will apparently be faster, thanks to enhanced simultaneous processing by Apple silicon’s media engines.

On the user experience side, the new Final Cut Pro comes with automatic timeline scrolling, as well as the option to simplify a selected group of overlapping connected clips into a single storyline, and the ability to combine connected clips with existing connected storylines. As for Final Cut Pro for iPad, users can take advantage of the new voiceover recording tool, added color-grading presets, new titles, general workflow improvements and a stabilization tool in the pro camera mode. Both the Mac and iPad versions of Final Cut Pro will receive their updates later this month.

With Logic Pro’s new Quick Sampler Recorder mode, users can create sampler instruments from any sound using the iPad’s built-in microphone or a connected audio input.


For those who need to focus on music creation, Apple has also updated Logic Pro with some handy new tools. For both the Mac and iPad versions, there’s a new Mastering Assistant which claims to help polish your audio mix by analyzing and tweaking “the dynamics, frequency balance, timbre, and loudness.” You can use this tool to refine your mix at any point throughout the creation process. In other good news, to avoid digital clipping and to boost low-level sensitivity, both flavors of Logic Pro now support 32-bit float recording when used with compatible audio interfaces.

If you’re a fan of “Sample Alchemy” — a sample-to-instrument tool — and “Beat Breaker” — an audio multi-effect plug-in — on Logic Pro for iPad, you’ll be pleased to know that both features have been ported over to Logic Pro for Mac. Similarly, the Mac app has gained two free sound packs, “Hybrid Textures” and “Vox Melodics,” which can be found in the Sound Library. Some may also find the new “Slip” and “Rotate” tools in the “Tool” menu handy.

Meanwhile, the updated Logic Pro for iPad offers a better multi-tasking experience. The app now supports iPadOS’ “Split View” and “Stage Manager,” thus letting you quickly drag and drop audio samples from another app — such as Voice Memos, Files or a browser — into Logic Pro. There’s also a new “Quick Sampler” recorder plug-in for easily creating sampler instruments from any sound, via the iPad’s built-in microphone or a connected audio input. This update, along with a handful of related in-app lessons, is available immediately.

This article originally appeared on Engadget at https://www.engadget.com/final-cut-pro-uses-apples-latest-chips-to-improve-face-and-object-tracking-065025314.html?src=rss 

Lucid EVs will be able to access Tesla’s Superchargers starting in 2025

Lucid’s electric vehicles will be able to plug into over 15,000 Tesla Superchargers in North America starting in 2025. The automaker is the latest entry in the growing list of companies pledging to support the North American Charging Standard (NACS), also known as the Tesla charging standard. Lucid will give customers access to a NACS adapter for its current vehicles, which are equipped with the Combined Charging System (CCS), in 2025. The company intends to start building NACS ports into its EVs within the same year, as well, so that newer models no longer need to use adapters.

Ford was the first automaker to announce this year that it was going to give its customers access to Superchargers after the White House convinced Tesla to share its charging network with vehicles from other companies. In the months after that, Mercedes, Volvo, Polestar, Honda, Toyota (and Lexus), BMW, Hyundai and Subaru revealed that they will also give their customers access to NACS adapters and will ultimately incorporate the standard into their vehicles over the next two years. 

As TechCrunch notes, Lucid vehicles use a 900-volt charging architecture, which became the basis of a Lucid Air promotion that called it the “fastest charging electric vehicle ever.” At the moment, most Superchargers are rated at around 500 volts, and that means charging times won’t be as fast as the company promises. That said, Tesla has started deploying V4 Superchargers that offer higher voltage charging in the US, and supporting NACS could convince potential customers in the region to purchase Lucid EVs. As company CEO Peter Rawlinson said, “[a]dopting NACS is an important next step to providing [its] customers with expanded access to reliable and convenient charging solutions for their Lucid vehicles.”

This article originally appeared on Engadget at https://www.engadget.com/lucid-evs-will-be-able-to-access-teslas-superchargers-starting-in-2025-055045292.html?src=rss 

WeWork files for Chapter 11 bankruptcy protection

There has been another twist in the WeWork saga as the office space rental company has filed for bankruptcy protection. Following reports last week that the company was expected to file for Chapter 11 protection, WeWork’s shares were halted on the New York Stock Exchange (NYSE) on Monday. According to The New York Times, it described its bankruptcy filing as a “comprehensive reorganization” of its business. “As part of today’s filing, WeWork is requesting the ability to reject the leases of certain locations, which are largely nonoperational, and all affected members have received advance notice,” the company told the publication in a statement. 

A number of factors played into WeWork’s fall, including trying to grow too fast in its early days. The company has attempted to cut costs in recent years (including by closing several co-working spaces in the wake of COVID-19 lockdowns) while its revenue has grown. 

However, WeWork has been toiling in a real estate market that has felt the pinch of inflation and the rising costs of borrowing money. It has also been contending with another pandemic-accelerated change as millions more people are opting to work remotely instead of going to their company’s offices. In its most recent earnings report in August, WeWork said it had “substantial doubt” about its ability to remain operational.

WeWork first attempted to go public in 2019, though it withdrew plans for an initial public offering after investors expressed concerns over profitability and corporate governance. Its S-1 filing showed losses of over $900 million for the first half of 2019 and indicated that WeWork was on the hook for over $47 billion worth of lease payments — WeWork takes out long-term leases on office space and rents it to workers and companies on a short-term basis.

That fiasco led to Softbank, which at one point led an investment round into WeWork when it had a valuation of $47 billion, taking control of the company. Softbank pushed out co-founder and CEO Adam Neumann with an exit package that was said to be worth $445 million.

The business eventually went public in 2021 after it merged with a special-purpose acquisition company. WeWork shares cost more than $400 two years ago, but by Monday the price had dropped to under $1.

WeWork has made more attempts to steady the ship. In September, the company completed a reverse stock split. It said this was conducted to help it continue to comply with the $1 minimum share closing price required to stay listed on the NYSE.

Later that month, WeWork said it would try to renegotiate the vast majority of its leases. At the time, CEO David Tolley pointed out that the company’s lease liabilities amounted to over two-thirds of its operating income in the second quarter of this year.

On October 31, WeWork said it would withhold some interest payments — even though it had the cash to make them — in an attempt to improve its balance sheet. The company then entered a 30-day grace period before an event of default.

Meanwhile, Neumann has a new real estate venture, this time focused on residential rentals. It emerged last year that he had bought more than 3,000 apartments in Miami, Fort Lauderdale, Atlanta and Nashville. Flow, the company that will manage those properties, has reportedly received an investment of $350 million from venture capital firm Andreessen Horowitz.

This article originally appeared on Engadget at https://www.engadget.com/wework-files-for-chapter-11-bankruptcy-protection-030708470.html?src=rss 

Meta reportedly won’t make its AI advertising tools available to political marketers

Facebook is no stranger to moderating and mitigating misinformation on its platform, having long employed machine learning and artificial intelligence systems to help supplement its human-led moderation efforts. At the start of October, the company extended its machine learning expertise to its advertising efforts with an experimental set of generative AI tools that can perform tasks like generating backgrounds, adjusting images and creating captions for an advertiser’s video content. Reuters reports Monday that Meta will specifically not make those tools available to political marketers ahead of what is expected to be a brutal and divisive national election cycle. 

Meta’s decision to bar the use of generative AI is in line with much of the social media ecosystem, though, as Reuters is quick to point out, the company “has not yet publicly disclosed the decision in any updates to its advertising standards.” TikTok and Snap both ban political ads on their networks, Google employs a “keyword blacklist” to prevent its generative AI advertising tools from straying into political speech and X (formerly Twitter) is, well, you’ve seen it.

Meta does allow for a wide latitude of exceptions to this rule. The tool ban only extends to “misleading AI-generated video in all content, including organic non-paid posts, with an exception for parody or satire,” per Reuters. Those exceptions are currently under review by the company’s independent Oversight Board as part of a case in which Meta left up an “altered” video of President Biden because, the company argued, it was not generated by an AI.

Facebook, along with other leading Silicon Valley AI companies, agreed in July to voluntary commitments set out by the White House enacting technical and policy safeguards in the development of their future generative AI systems. Those include expanding adversarial machine learning (aka red-teaming) efforts to root out bad model behavior, sharing trust and safety information both within the industry and with the government, as well as development of a digital watermarking scheme to authenticate official content and make clear that it is not AI-generated. 

This article originally appeared on Engadget at https://www.engadget.com/meta-reportedly-wont-make-its-ai-advertising-tools-available-to-political-marketers-010659679.html?src=rss 

PS5 and PS4 are losing X sharing options on November 13

PlayStation 5 and PlayStation 4 consoles will soon drop their X (formerly Twitter) integrations. As such, after November 13, you’ll no longer be able to post clips or screenshots directly to X from either system.

According to a notice Sony shared on its consoles (as noted by Wario64) and a support page, users will lose the ability to “post and view content, trophies and other gameplay-related activities on X directly from PS5/PS4 (or link an X account to do so).” Sony added the notice to its website at some point on Monday, according to a cached version of the support page.

Sony hasn’t revealed exactly why it’s killing off X integration on its consoles. However, it may be related to X shutting down its free API earlier this year, forcing developers and companies to pay if they want to hook into its services. Microsoft stopped letting users post Xbox clips directly to X in April, likely due to that move.

It’ll still be possible to post your PlayStation clips to X. If you have a PS5, you’ll be able to access your recent captures through the PS App and share them to X from your phone. PS4 owners (and PS5 users, if they prefer this approach) will need to use a USB drive to copy screenshots and clips to their computer. Alternatively, you can use one of the several other direct sharing options available on PS4 and PS5, such as YouTube.

This article originally appeared on Engadget at https://www.engadget.com/ps5-and-ps4-are-losing-x-sharing-options-on-november-13-204747608.html?src=rss 

GPT-4 Turbo is OpenAI’s most powerful large language model yet

During its first-ever developer conference on Monday, OpenAI previewed GPT-4 Turbo, a brand new version of the large language model that powers its flagship product, ChatGPT. The newest model is capable of accepting much longer inputs than previous versions — up to 300 pages of text, compared to the current limit of 50. This means that theoretically, prompts can be a lot longer and more complex, and responses might be more meaningful.

OpenAI has also updated the data that GPT-4 Turbo is trained on. The company claims that the newest model now has knowledge about the world until April 2023. The previous version was only caught up until September 2021, although recent updates to the non-Turbo GPT-4 did include the ability to browse the internet to get the latest information.

GPT-4 Turbo will also accept images as prompts directly in the chat box, wherein it can generate captions or provide a description of what the image depicts. It will also handle text-to-speech requests. And users will now be able to upload documents directly and ask the service to analyze them — a capability that other chatbots like Anthropic’s Claude have included for months.

For developers, using the newest model will effectively be three times cheaper. OpenAI said that it was slashing costs for input and output tokens — a unit used by large language models to understand instructions and respond with answers.
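Since API cost scales linearly with token counts, the effect of a price cut is easy to sketch. The per-1,000-token prices below are illustrative placeholders, not OpenAI's actual rates (the article only says the new model is roughly three times cheaper):

```python
# Hypothetical illustration of how per-request cost scales with token counts.
# The per-1K prices are assumed placeholders, not OpenAI's published rates.
def request_cost(input_tokens, output_tokens,
                 price_in_per_1k=0.01, price_out_per_1k=0.03):
    """Return the dollar cost of one request: tokens are billed per thousand,
    with input (prompt) and output (response) tokens priced separately."""
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# A 2,000-token prompt with a 500-token reply:
cost = request_cost(2000, 500)
print(f"${cost:.3f}")
```

Dividing both placeholder prices by three would cut `cost` by the same factor, which is why a blanket token-price reduction translates directly into cheaper requests.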

In addition to announcing its newest large language model, OpenAI revealed that ChatGPT now has more than 100 million weekly active users around the world and is used by more than 92 percent of Fortune 500 companies. The company also said that it would defend customers, including enterprises, against legal claims around copyright infringement that might arise as a result of using its products, and that it would pay for costs incurred as a result.

OpenAI also revealed single-application “mini-ChatGPTs” today, small tools that are focused on a single task and can be built without even knowing how to code. GPTs created by the community can be immediately shared, and OpenAI will open a “store” where verified builders can make their creations available to anyone. 

The company didn’t announce when GPT-4 Turbo would come out of preview and be available more generally. Accessing GPT-4 currently costs $20 a month.

This article originally appeared on Engadget at https://www.engadget.com/gpt-4-turbo-is-openais-most-powerful-large-language-model-yet-211956553.html?src=rss 

YouTube tests AI-generated comment summaries and a chatbot for videos

YouTube announced two new experimental generative AI features on Monday. YouTube Premium subscribers can soon try AI-generated comment summaries and a chatbot that answers your questions about what you’re watching. The features will be opt-in, so you won’t see them unless you’re a paid member who signs up for the experiments during their test periods.

The AI-powered summaries will organize comments into “easily digestible themes.” In a Mr. Beast video YouTube used as an example, the tool generated topics including “People love Bryan the bird,” “Lazarbeam should be in more videos,” “No submarine” and “More 7 day challenges.” You can tap on the topic to view the complete list of associated comments. The tool will only run “on a small number of videos in English” with large comment sections.


If you’re worried about YouTube’s summaries spiraling out of control the way the platform’s comment sections often do, the company says it won’t pull content from unpublished messages, those held for review, any containing blocked words or those from blocked users. Further, creators can use the tool to delete individual comments if they see problematic (or otherwise unwanted) discussions about their videos.

Meanwhile, YouTube’s conversational AI tool gives you a chatbot trained on whichever video you’re watching. Powered by large language models (LLMs), the assistant lets you “dive in deeper” by asking questions about the content and fishing for related recommendations. The company says the AI tool, which appears similar to chatting with Bard, draws on info from YouTube and the web, providing answers without interrupting playback. Eligible users can find it under a new “Ask” button in the YouTube app for Android.

Starting today, YouTube Premium subscribers can opt into the comment summarizer on YouTube’s experiments page. However, the company says you won’t see the “Topics” option for all videos. In addition, the conversational AI tool is only available now “to a small number of people on a subset of videos,” but YouTube Premium subscribers with Android devices will be able to sign up to try it in the coming weeks. The company warns the experimental features “may not always get it right,” a description that can equally apply to Google’s other AI experiments.

This article originally appeared on Engadget at https://www.engadget.com/youtube-tests-ai-generated-comment-summaries-and-a-chatbot-for-videos-213405231.html?src=rss 

The Motorola Razr+ is $300 off in an early Black Friday deal

If you’re interested in a flip-style foldable phone, you effectively have two choices in the US: the Samsung Galaxy Z Flip 5 and the Motorola Razr+. We think the former is ultimately better for most people, but the latter is still a worthy alternative, and now it’s on sale for $700 at Amazon. That’s the lowest price we’ve seen for an unlocked model outside of trade-in deals. Motorola normally sells the Razr+ for $1,000, though we’ve seen the phone fall between $800 and $900 a couple of times since it arrived in June. This deal is applicable to the black, magenta and blue versions of the device.

We gave the Razr+ a score of 85 in our review. As with the Galaxy Z Flip 5, the Razr+’s biggest selling point is that you can fold it in half and make it easier to tuck away. The main display is a vibrant 6.9-inch OLED panel with a 165Hz refresh rate; fold it shut, and you can use a 3.6-inch OLED display around the back. One advantage the Razr+ has over Samsung’s foldable is that it can run most Android apps on that outer display with less fuss. (The Galaxy Z Flip 5 limits its cover screen to a handful of widgets by default, though you can enable wider app support through the device’s settings.) Not every app is optimized for such a tiny screen, but you can quickly fire off a text, reply to an email, pick a new Spotify playlist or do other phone things without having to actually open the device. 

Beyond that, the Razr+’s cover display has a higher refresh rate (144Hz versus 60Hz) and pixel density (413 ppi versus 306 ppi) than that of the Galaxy Z Flip 5, plus it’s 0.2 inches larger. It should last a little longer per charge, and its take on Android has more of a light touch than Samsung’s One UI interface. It also supports slightly faster wired charging speeds. 

That said, there are a few clear downsides. For one, we found the Razr’s camera performance to be a step behind the Galaxy Z Flip 5. The hardware has a meager IP52 water-resistance rating — which means it can withstand some light rain but little more — whereas Samsung’s phone has a more robust IPX8 rating. (Though you’ll want to be delicate with either phone, as all foldables carry a greater risk of durability issues.) While it’s not slow, it uses a year-old Snapdragon 8+ Gen 1 chip, so its performance is a little less futureproof. And Motorola’s update policy is less robust: It promises three major OS updates and bi-monthly security updates for the Razr+, while Samsung promotes four years of OS updates and five years of monthly security updates for the Galaxy Z Flip 5.

In the end, the main reason to consider the Razr+ is the bigger and more functional cover display, so if you’re sold on the idea of a clamshell-style foldable, it’s worth considering at this price. Just note that we may see a deal on Samsung’s foldable as we get closer to Black Friday. One foldable we’re less bullish on, however, is Motorola’s midrange Razr: That one is also on sale for $500, but we found it to be too limited in our review.


This article originally appeared on Engadget at https://www.engadget.com/the-motorola-razr-is-300-off-in-an-early-black-friday-deal-201601542.html?src=rss 

GPTs are the single-application mini-ChatGPT models that anyone can create

It’s been nearly a year since ChatGPT’s public debut and its evolution since then has been nothing short of extraordinary. In just over 11 months, OpenAI’s chatbot has gained the ability to write programming code, process information between multiple modalities and expand its reach across the internet with APIs. During OpenAI’s 2023 Dev Day keynote address Monday, CEO Sam Altman and other executives took to the stage in San Francisco to unveil the chatbot’s latest iteration, GPT-4 Turbo, as well as an exciting new way to bring generative AI technology to everybody, regardless of their coding capability: GPTs!

GPTs are small, task-specific iterations of ChatGPT. Think of them like the single-purpose apps and features on your phone, but instead of maintaining a timer or stopwatch, or transcribing your voice instructions into a shopping list, GPTs will do basically anything you train them to. OpenAI offers up eight examples of what GPTs can be used for, from a digital kitchen assistant that suggests recipes based on what’s in your pantry, to a math mentor to help your kids through their homework, to a Sticker Wiz that will “turn your wildest dreams into die-cut stickers, shipped right to your door.”

The new GPTs are an expansion on the company’s existing Custom Instructions feature which debuted in July. OpenAI notes that many of its power users were already recycling and updating their most effective prompts and instruction sets, a process which GPT-4 Turbo will now handle automatically as part of its update to seed parameters and focus on reproducible outputs. This will allow users a far greater degree of control in customizing the GPTs to their specific needs.
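The reproducible-outputs idea mentioned above boils down to pinning a seed (and fixing sampling parameters) so the same prompt yields the same response. A minimal sketch of what such a request payload might look like follows; the model identifier and prompt are assumptions for illustration, not values confirmed by the article:

```python
# Sketch of a chat-completions request payload that pins a seed for
# best-effort reproducible outputs. The model name is an assumed
# identifier; treat the whole payload as illustrative.
def build_request(prompt, seed=42, temperature=0):
    return {
        "model": "gpt-4-1106-preview",   # hypothetical GPT-4 Turbo model ID
        "messages": [{"role": "user", "content": prompt}],
        "seed": seed,                    # same seed + same params -> same output
        "temperature": temperature,      # 0 removes sampling randomness
    }

req = build_request("Suggest a recipe from common pantry staples.")
```

Re-sending `req` unchanged should, in principle, reproduce the earlier response, which is what lets a saved prompt-and-settings bundle behave like a reusable instruction set.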

What users won’t need is an extensive understanding of JavaScript programming. With GPT-4 Turbo’s improved code interpretation, retrieval and function calling capabilities, as well as its massively increased context window size, users will be able to devise and develop their GPTs using nothing but natural language.

Any GPT created by the community will be immediately shareable. For now that will happen directly between users but, later this month, OpenAI plans to launch a centralized storefront where “verified builders” can post and share their GPTs. The most popular ones will climb a leaderboard and may eventually earn their creators money based on how many people are using the GPT.

GPTs will be available to both regular users and enterprise accounts which, like ChatGPT Enterprise that came out earlier this year, will offer institutional users the chance to create their own internal-only, admin-approved mini-chatbots. These will work with (and are trained on) the company’s specific tasks, department documentation or proprietary datasets. Enterprise GPTs arrive for those customers on Wednesday.

Privacy remains a focal point for the company with additional technical safeguards being put into place, atop existing moderation systems, to prevent people from making GPTs that go against OpenAI’s usage policies. The company is also rolling out an identity verification system for developers to help improve transparency and trust, but did not elaborate on what that process could entail.

This article originally appeared on Engadget at https://www.engadget.com/gpts-are-the-single-application-mini-chatgpt-models-that-anyone-can-create-203311858.html?src=rss 

Windows 11’s new AI features: How to use Paint, Clipchamp, Snipping Tool and Photos

Microsoft is injecting a ton of generative AI-powered features into Windows 11, but it’s not all about the Copilot assistant. The company has started to update a string of apps with new AI functions, including Paint, Clipchamp, Snipping Tool and Photos. Microsoft released the Windows 11 2023 Update, known as 23H2, on October 31. That update expanded access to Copilot and other AI features. 

Microsoft is rolling out the AI updates gradually, so you may not have access to everything just yet. Still, it may be handy for you to know what you can do with the new tools. Here are some pointers on how to use the AI features in each app.

How to use Paint in Windows 11

An AI-infused version of Paint that includes generative AI features is rolling out to Windows 11 users. Microsoft Paint Cocreator taps into the DALL-E model to enable you to create images based on a text description. The feature will whip up just about anything you can think of (within reason).

It’s easy enough to get started with Cocreator, as long as you have access to it. To begin with, Cocreator is available in the US, UK, France, Australia, Canada, Italy and Germany. Only prompts in English are supported for now. At the outset, there’s a waitlist to use Cocreator. You can join this from the Cocreator side panel and you’ll receive an email to let you know when you can start using the feature.

You’ll need to sign into your Microsoft account to use Cocreator. That’s because the cloud-based service Cocreator runs on requires authentication and authorization. You also need to sign in to access credits; you’ll need these to generate images with DALL-E. When you join Cocreator, you’ll receive 50 credits with which you can create images. Each generated image costs one credit.


How to install Paint on Windows 11

If you don’t already have Paint installed, you can download it from the Microsoft Store. Once you have it, open Paint and select the Cocreator icon on the toolbar. From there, you can type in a description of the image you’d like the AI to generate. Microsoft suggests being as descriptive as possible in order to get results that match your concept.

After entering the text, select a style that you’d like your image to be in. Then hit the Create button.

Cocreator will then generate three different images based on your text input and the style you chose. Simply click on one of these images to add it to the Paint canvas so you can start modifying it.

Meanwhile, Paint now supports background removal as well as layers. With the help of AI, you can isolate an item (such as an object or person) and remove the background with a single click. You can also edit individual layers without affecting the rest of the image.

How to use video auto composition with Clipchamp on Windows 11

It should be easier for you to stitch footage together in the video-editing tool Clipchamp. The app will help guide you with automated suggestions for the likes of scenes, edits and narratives. But it’s the auto compose feature that may prove most useful for many users. Auto compose is available on the web and in the Microsoft Clipchamp desktop app.

Microsoft says that the media you add to Clipchamp is not used to train AI models and all of the processing takes place in the app or browser. The app’s AI video editor (which Microsoft says is useful for everyone) can automatically generate slideshows, montage videos and short videos in 1080p based on the photos and videos you add to it.

If you don’t like the first video that Clipchamp offers up, you can check out a different version “instantly” since the app will generate multiple videos for you. Auto compose may also prove useful for professional video editors, Microsoft says, as the tool can generate several unique videos in the space of a few minutes.


After you sign into Clipchamp, click the “Create a video with AI” button. You’ll find this front and center on the main page. After you give your project a working title, you can upload media by clicking the “Click to add or drag and drop” button. Alternatively, you can simply drag and drop videos and photos into the media window.

Once you’ve finished adding everything, hit the “Get started” button. Now, it’s a case of letting the AI know what kind of style and aesthetic you’re looking for. Styles include things like elegant, vibrant and bold. You’ll use thumbs up and thumbs down buttons to inform the AI of your preferences. Alternatively, you can leave the decision up to Clipchamp by selecting the “Choose for me” option. When you’re ready to move onto the following step, click the Next button.


Clipchamp will suggest a length for your video based on what it believes are the best combinations of your media. You’ll be able to adjust the video length and the aspect ratio before moving on. Before you leave this screen, you can preview the video by clicking the play button.

Next up, you’ll be able to change the background music on the “Finish your video” screen if you’re not a fan of the track that the AI picked. Click the music button to change the tune. Again, you’ll be able to preview your video and audio track. If you’re not happy with the video, you can ask for a different take by clicking on “Create a new version.”


If you do like the video Clipchamp has created, you’re pretty much done at this point. Click the Export button to save the video. From the export page, you can share your video directly to the likes of YouTube and TikTok, or add a copy to your OneDrive storage.

After the AI is done with your video, you can further customize it in Clipchamp. Click on the “Edit in timeline” button and you’ll be able to do things like add stickers, captions, animated text and audio files.

In addition, you can enhance your video with AI options including a text-to-speech voiceover feature and automatically generated subtitles. The speaker coach tool aims to provide you with real-time feedback on your camera recordings to help improve your speaking skills and video presentations.

Many Clipchamp features are available for free. But for videos in 4K resolution and other premium tools, you’ll need to pay for the essentials plan, which costs $12 per month or $120 per year.

How to use Snipping Tool’s AI features

The Snipping Tool is one of the most useful apps in Windows 11. It’s a cinch to capture and share some or all of your display. The app’s AI functions should come in useful in a number of ways.

First, the app supports text recognition. If you use the Snipping Tool to take a screenshot of something with text in it, you can click the Text Actions button. At the outset, you’ll have two main options. You can copy all of the text and paste it into another app.


Alternatively, you can quickly redact private information. The tool should be able to recognize email addresses and phone numbers, and you’ll be able to swiftly blur those out. That should save you having to manually cover up text in, say, Paint.

The Snipping Tool should work quite nicely with Copilot as well. As indicated in a Windows 11 promo video, you can paste something you’ve clipped with the tool into Copilot, then do things like ask the assistant to remove the background from the image.

How to use Background Blur in Windows 11’s Photos app


The Windows 11 Photos app has some useful AI features as well. Those include improved search for images stored on OneDrive accounts — it should be easier for you to find a photo based on content or location where it was taken.

The app’s editing features have been enhanced thanks to AI as well. One of the handier and easiest-to-use tools is the self-explanatory Background Blur (Paint 3D has a similar feature). That can help the subject of your photo stand out. AI separates the background from the subject, but to ensure your data stays on your device, the separation process takes place there rather than in the cloud.

To use Background Blur, first select the image you want to use and open it in the Photos app. Click on “Edit image” at the top of the screen and select Background Blur. You’ll then have a few options to choose from. You can opt to enable the blur effect instantly; adjust the intensity of the blur before applying it; or have more granular control by turning on the “Selection brush tool.”

Opt for the Selection brush tool and you can manually denote more parts of the image for the AI to blur out. Alternatively, you can deselect parts of the image that you don’t want to be blurred. You’ll be able to change the brush size for finer control and modify the brush softness to intensify or turn down the blur effect.

This article originally appeared on Engadget at https://www.engadget.com/windows-11s-new-ai-features-how-to-use-paint-clipchamp-snipping-tool-and-photos-191541014.html?src=rss 
