Things are going from bad to worse for Cruise’s robotaxis

GM’s autonomous vehicle division, Cruise, is already going through a rough patch, with the California Department of Motor Vehicles (DMV) recently suspending its driverless permits over safety issues. Now, several new reports have highlighted other issues with the company, including problems with its autonomous vehicles (AVs) recognizing children and the frequency with which human operators must remotely take control. The company also just announced that it’s temporarily suspending production of its fully autonomous Origin transport.

The most concerning issue is that Cruise reportedly kept its vehicles on the streets even though it knew they had problems recognizing children, The Intercept reported. According to internal, previously unreported safety assessment materials, Cruise’s autonomous vehicles may have been unable to effectively detect children and take the extra precautions they require.

“Cruise AVs may not exercise additional care around children,” the document states. Because of that, the company was concerned that its robotaxis might drive too fast near children who could move unexpectedly into the street. Cruise also lacks data around child-specific situations, like kids separating from adults, falling, riding bicycles or wearing costumes. 

In one simulation, the company couldn’t rule out a scenario where a vehicle strikes a child. In another test drive, a vehicle detected a child-sized dummy but still struck it with a mirror at 28 MPH. The company chalked up the problems to inadequate software and testing — specifically, it lacked AI software that could automatically detect child-shaped objects around the car and maneuver accordingly.

In a statement to The Intercept, Cruise admitted that its vehicles sometimes temporarily lost track of children by the side of the road during simulation testing. It added that the problem was fixed and only seen in testing and not on public streets, though it didn’t say what specific actions it took to resolve the issue. A spokesperson also said that the system hadn’t failed to detect the children, but did fail to classify them as such. 

It further stated that the odds of an accident involving children were relatively low. “We determined from observed performance on-road, the risk of the potential collision with a child could occur once every 300 million miles at fleet driving, which we have since improved upon. There have been no on-road collisions with children.”
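That “once every 300 million miles” figure is easier to reason about as an expected interval for a whole fleet. Here is a hypothetical back-of-envelope sketch — the fleet size and daily mileage below are assumed values for illustration, not figures from Cruise:

```python
# Hypothetical sketch: how often a once-per-300-million-miles event would be
# expected for a given fleet. Fleet size and daily mileage are assumptions
# chosen for illustration, not numbers Cruise has published.

MILES_PER_EVENT = 300_000_000  # Cruise's stated per-mile rate

def years_between_events(fleet_size: int, miles_per_vehicle_per_day: float) -> float:
    """Expected years between events at the stated per-mile rate."""
    fleet_miles_per_year = fleet_size * miles_per_vehicle_per_day * 365
    return MILES_PER_EVENT / fleet_miles_per_year

# e.g. an assumed 400-vehicle fleet averaging 100 miles per vehicle per day
print(f"~{years_between_events(400, 100):.1f} years between expected events")
```

Under those assumed numbers, the stated rate works out to one expected event every couple of decades; the real-world interval scales directly with total fleet miles driven.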

The report also notes that Cruise AVs have trouble detecting large holes in the road, such as construction site pits with crews inside, something the company itself called a “major risk.” GM’s own documents indicated that even with its small AV fleet, a vehicle was likely to drive into such a hole at least once a year — and into a pit with people inside once every four years. 

That scenario almost happened, according to video reviewed by The Intercept. Onboard cameras show an AV driving right to the edge of a pit, inches away from workers, despite the presence of construction cones. It only stopped because someone waved a “slow” sign in front of the windshield. 

“Enhancing our AV’s ability to detect potential hazards around construction zones has been an area of focus, and over the last several years we have conducted extensive human-supervised testing and simulations resulting in continued improvements,” the company said in a statement. “These include enhanced cone detection, full avoidance of construction zones with digging or other complex operations, and immediate enablement of the AV’s Remote Assistance support/supervision by human observers.”

All of that raises the question of whether Cruise should be operating its vehicles on public roads. “If you can’t see kids, it’s very hard for you to accept that not being high risk — no matter how infrequent you think it’s going to happen,” Carnegie Mellon engineering professor Phil Koopman told The Intercept.

The child detection issue isn’t the only recent exposé on Cruise, as it turns out that the robotaxis aren’t really autonomous at all. In fact, they require human assistance every four to five miles, according to a report in The New York Times confirmed in large part by Cruise CEO Kyle Vogt on Hacker News.

“Cruise AVs are being remotely assisted (RA) 2-4 percent of the time on average, in complex urban environments,” wrote Vogt. That equates to someone intervening every four to five miles, which could be multiple times on many trips. There is typically one remote assistant “for every 15-20 driverless AVs,” Cruise stated later.
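Vogt’s 2-4 percent figure and the every-four-to-five-miles estimate can be connected with simple arithmetic. The sketch below is hypothetical — the average urban speed and session length are assumptions, not numbers Cruise has published:

```python
# Hypothetical back-of-envelope check (all inputs assumed, not from Cruise):
# if an AV averages ~12 mph in dense urban traffic and a remote-assistance
# session lasts ~30 seconds, a 2-4% assist rate implies one session every
# few miles.

AVG_SPEED_MPH = 12.0  # assumed urban average speed
SESSION_SEC = 30.0    # assumed length of one remote-assistance session

def miles_between_sessions(assist_fraction: float) -> float:
    """Miles driven per RA session, given the fraction of drive time spent in RA."""
    sessions_per_hour = assist_fraction * 3600.0 / SESSION_SEC
    return AVG_SPEED_MPH / sessions_per_hour

for frac in (0.02, 0.04):
    print(f"{frac:.0%} assist rate -> one session every "
          f"{miles_between_sessions(frac):.1f} miles")
```

With those assumed inputs, a 2 percent assist rate means one session every 5 miles and 4 percent means one every 2.5 miles — in the same ballpark as the reported four-to-five-mile interval.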

In a statement to CNBC, the company provided additional details: “Often times the AV proactively initiates these [remote assistance actions] before it is certain it will need help such as when the AV’s intended path is obstructed (e.g construction blockages or detours) or if it needs help identifying an object,” a spokesperson wrote. “Remote assistance is in session about 2-4 percent of the time the AV is on the road, which is minimal, and in those cases the RA advisor is providing wayfinding intel to the AV, not controlling it remotely.”

Finally, it appears that Cruise has halted production of its Origin autonomous vehicle after the California DMV pulled its license, Forbes reported. In an all-hands meeting with employees, Vogt, referring to the DMV license withdrawal, stated that “because a lot of this is in flux, we did make the decision with GM to pause production of the Origin,” according to audio from the meeting. 

Cruise is still operating its AVs in California, but now must have a human backup driver at the wheel. Meanwhile, California says it has given Cruise a path back to driverless operation. “The DMV has provided Cruise with the steps needed to apply to reinstate its suspended permits, which the DMV will not approve until the company has fulfilled the requirements to the department’s satisfaction,” it said in a statement.

This article originally appeared on Engadget at https://www.engadget.com/things-are-going-from-bad-to-worse-for-cruises-robotaxis-094529914.html?src=rss 

TikTok is discontinuing its Creator Fund and steering users to the Creativity Program

TikTok only launched its Creator Fund a few years ago, but is already killing it off in favor of a new monetization scheme that arrived earlier this year. “Starting December 16, 2023, the Creator Fund will be discontinued in the United States, United Kingdom, France, and Germany,” a spokesperson told Engadget in a statement. “All creators currently enrolled in the Creator Fund can upgrade to the Creativity Program.” 

The Creativity Program emphasizes longer content, with a minimum video length of one minute (TikTok now allows videos up to 30 minutes long). The company said it wants to create “the best possible experience” on the platform with the new system, but longer videos also help TikTok sell more ads. The main benefit for streamers is that it pays up to 20 times the amount offered by the Creator Fund, according to the company.

“We developed the Creativity Program based on the learnings and feedback from the Creator Fund, and we’ll continue listening and learning from our community as we explore new features and enhance existing ones to further enrich the TikTok experience,” TikTok said. The Creator Fund will continue to be available for users in Spain and Italy, at least for now.

The Creator Fund was unveiled in 2020 with an initial commitment of $200 million to be paid out to top streamers. Soon after, the company said it would support hundreds of thousands of creators with over $2 billion in funding over the next three years. 

However, it got off to a rough start after top users complained that they weren’t receiving very much money. Last year, streamer Hank Green shared that he made about 2.5 cents per 1,000 views on the platform — a fraction of his YouTube earnings and about half of what he earned on TikTok prior to the fund.

TikTok added: “We designed the Creativity Program based on [creator] feedback, to encourage creators to create high-quality, original content, generate higher revenue potential, and open doors to more real-world opportunities. The program offers higher cash incentives, giving creators the potential to earn up to 20 times the amount previously offered by the Creator Fund.”

The Creativity Program, by contrast, arrived in February this year as an invite-only system before opening up to all eligible creators. It’s still in beta, but any Creator Fund users can join, provided they’re at least 18 years old and have at least 10,000 followers and 100,000 video views in the last 30 days, along with a US-based account (or account in one of the other eligible countries). After switching to the Creativity Program, users are removed automatically from the Creator Fund.

Some creators have embraced the Creativity Program, according to a report from Insider. Streamers with follower counts varying from half a million to several million have seen payouts ranging from the low thousands to nearly $100,000 per month, “a complete 180” from what they were seeing before, according to one creator.

Streamers may like the longer format and extra revenue, but users may need some time to adjust. In a recent internal TikTok survey, nearly 50 percent of users said videos over a minute in length were “stressful,” and a third of users watched videos online at double speed, according to a Wired report from earlier this year.

This article originally appeared on Engadget at https://www.engadget.com/tiktok-is-discontinuing-its-creator-fund-and-steering-users-to-the-creativity-program-091023327.html?src=rss 

Microsoft will let Xbox game makers use AI tools for story design and NPCs

Xbox has teamed up with a startup called Inworld AI to create a generative AI toolset that developers can use to create games. It’s a multi-year collaboration, which the Microsoft-owned brand says can “assist and empower creators in dialogue, story and quest design.” Specifically, the partners are looking to develop an “AI design copilot” that can turn prompts into detailed scripts, dialogue trees, quests and other game elements in the same way people can type ideas into generative AI chatbots and get detailed scripts in return. They’re also going to work on an “AI character runtime engine” that developers can plug into their actual games, allowing players to generate new stories, quests and dialogues as they go. 

Inworld’s website says its technology can “craft characters with distinct personalities and contextual awareness that stay in-world.” Apparently, it can provide developers with a “fully integrated character engine for AI NPCs that goes beyond large language models (LLMs).” Inworld previously developed a Droid Maker tool in collaboration with Lucasfilm’s storytelling studio ILM Immersive when it was accepted into the Disney Accelerator program. As Kotaku notes, though, the company’s tech has yet to ship with a major game release, and it has mostly been used for mods.

Developers are understandably wary about these upcoming tools. There are growing concerns among creatives about companies using their work to train generative AI without permission — a group of authors, including John Grisham and George R.R. Martin, even sued OpenAI, accusing the company of infringing on their copyright. And then, of course, there’s the ever-present worry that developers could decide to lay off writers and designers to cut costs. 

Xbox believes, however, that these tools can “help make it easier for developers to realize their visions, try new things, push the boundaries of gaming today and experiment to improve gameplay, player connection and more.” In the brand’s announcement, Haiyan Zhang, General Manager of Gaming AI, said: “We will collaborate and innovate with game creators inside Xbox studios as well as third-party studios as we develop the tools that meet their needs and inspire new possibilities for future games.”

This article originally appeared on Engadget at https://www.engadget.com/microsoft-will-let-xbox-game-makers-use-ai-tools-for-story-design-and-npcs-083027899.html?src=rss 

Final Cut Pro uses Apple’s latest chips to improve face and object tracking

Following the recent launch of the new M3-equipped MacBook Pros, Apple will soon be releasing an update for its Final Cut Pro to make further use of its own silicon. According to the company, its updated video editing suite will leverage a new machine learning model for improved results with object and face tracking. Additionally, H.264 and HEVC encoding will apparently be faster, thanks to enhanced simultaneous processing by Apple silicon’s media engines.

On the user experience side, the new Final Cut Pro comes with automatic timeline scrolling, the option to simplify a selected group of overlapping connected clips into a single storyline, and the ability to combine connected clips with existing connected storylines. As for Final Cut Pro for iPad, users can take advantage of a new voiceover recording tool, added color-grading presets, new titles, general workflow improvements and a stabilization tool in the pro camera mode. Both the Mac and iPad versions of Final Cut Pro will receive their updates later this month.

For those who need to focus on music creation, Apple has also updated Logic Pro with some handy new tools. For both the Mac and iPad versions, there’s a new Mastering Assistant that promises to help polish your audio mix by analyzing and tweaking “the dynamics, frequency balance, timbre, and loudness.” You can use this tool to refine your mix at any point in the creation process. Also welcome: to avoid digital clipping and boost low-level sensitivity, both flavors of Logic Pro now support 32-bit float recording when used with compatible audio interfaces.

If you’re a fan of “Sample Alchemy” — a sample-to-instrument tool — and “Beat Breaker” — an audio multi-effect plug-in — on Logic Pro for iPad, you’ll be pleased to know that both features have been ported over to Logic Pro for Mac. Similarly, the Mac app has gained two free sound packs, “Hybrid Textures” and “Vox Melodics,” which can be found in the Sound Library. Some may also find the new “Slip” and “Rotate” tools in the “Tool” menu handy.

Meanwhile, the updated Logic Pro for iPad offers a better multitasking experience. The app now supports iPadOS’ “Split View” and “Stage Manager,” letting you quickly drag and drop audio samples from another app — such as Voice Memos, Files or a browser — into Logic Pro. There’s also a new “Quick Sampler” recorder plug-in for easily creating sampler instruments from any sound, via the iPad’s built-in microphone or a connected audio input. This update, along with a handful of related in-app lessons, is available immediately.

This article originally appeared on Engadget at https://www.engadget.com/final-cut-pro-uses-apples-latest-chips-to-improve-face-and-object-tracking-065025314.html?src=rss 

Lucid EVs will be able to access Tesla’s Superchargers starting in 2025

Lucid’s electric vehicles will be able to plug into over 15,000 Tesla Superchargers in North America starting in 2025. The automaker is the latest entry in the growing list of companies pledging to support the North American Charging Standard (NACS), also known as the Tesla charging standard. Lucid will give customers access to a NACS adapter for its current vehicles, which are equipped with the Combined Charging System (CCS), in 2025. The company intends to start building NACS ports into its EVs within the same year, as well, so that newer models no longer need to use adapters.

Ford was the first automaker to announce this year that it was going to give its customers access to Superchargers after the White House convinced Tesla to share its charging network with vehicles from other companies. In the months after that, Mercedes, Volvo, Polestar, Honda, Toyota (and Lexus), BMW, Hyundai and Subaru revealed that they will also give their customers access to NACS adapters and will ultimately incorporate the standard into their vehicles over the next two years. 

As TechCrunch notes, Lucid vehicles use a 900-volt charging architecture, which became the basis of a Lucid Air promotion that called it the “fastest charging electric vehicle ever.” At the moment, most Superchargers are rated at around 500 volts, and that means charging times won’t be as fast as the company promises. That said, Tesla has started deploying V4 Superchargers that offer higher voltage charging in the US, and supporting NACS could convince potential customers in the region to purchase Lucid EVs. As company CEO Peter Rawlinson said, “[a]dopting NACS is an important next step to providing [its] customers with expanded access to reliable and convenient charging solutions for their Lucid vehicles.”

This article originally appeared on Engadget at https://www.engadget.com/lucid-evs-will-be-able-to-access-teslas-superchargers-starting-in-2025-055045292.html?src=rss 

WeWork files for Chapter 11 bankruptcy protection

There has been another twist in the WeWork saga as the office space rental company has filed for bankruptcy protection. Following reports last week that the company was expected to file for Chapter 11 protection, WeWork’s shares were halted on the New York Stock Exchange (NYSE) on Monday. According to The New York Times, it described its bankruptcy filing as a “comprehensive reorganization” of its business. “As part of today’s filing, WeWork is requesting the ability to reject the leases of certain locations, which are largely nonoperational, and all affected members have received advanced notice,” the company told the publication in a statement. 

A number of factors played into WeWork’s fall, including trying to grow too fast in its early days. The company has attempted to cut costs in recent years (including by closing several co-working spaces in the wake of COVID-19 lockdowns) while its revenue has grown. 

However, WeWork has been toiling in a real estate market that has felt the pinch of inflation and the rising costs of borrowing money. It has also been contending with another pandemic-accelerated change as millions more people are opting to work remotely instead of going to their company’s offices. In its most recent earnings report in August, WeWork said it had “substantial doubt” about its ability to remain operational.

WeWork first attempted to go public in 2019, though it withdrew plans for an initial public offering after investors expressed concerns over profitability and corporate governance. Its S-1 filing showed losses of over $900 million for the first half of 2019 and indicated that WeWork was on the hook for over $47 billion worth of lease payments — WeWork takes out long-term leases on office space and rents it to workers and companies on a short-term basis.

That fiasco led to Softbank, which at one point led an investment round into WeWork when it had a valuation of $47 billion, taking control of the company. Softbank pushed out co-founder and CEO Adam Neumann with an exit package that was said to be worth $445 million.

The business eventually went public in 2021 after it merged with a special-purpose acquisition company. WeWork shares cost more than $400 two years ago, but by Monday the price had dropped to under $1.

WeWork has made more attempts to steady the ship. In September, the company completed a reverse stock split. It said this was conducted to help it continue to comply with the $1 minimum share closing price required to stay listed on the NYSE.

Later that month, WeWork said it would try to renegotiate the vast majority of its leases. At the time, CEO David Tolley pointed out that the company’s lease liabilities amounted to over two-thirds of its operating income in the second quarter of this year.

On October 31, WeWork said it would withhold some interest payments — even though it had the cash to make them — in an attempt to improve its balance sheet. The company then entered a 30-day grace period before an event of default.

Meanwhile, Neumann has a new real estate venture, this time focused on residential rentals. It emerged last year that he had bought more than 3,000 apartments in Miami, Fort Lauderdale, Atlanta and Nashville. Flow, the company that will manage those properties, has reportedly received an investment of $350 million from venture capital firm Andreessen Horowitz.

This article originally appeared on Engadget at https://www.engadget.com/wework-files-for-chapter-11-bankruptcy-protection-030708470.html?src=rss 

Meta reportedly won’t make its AI advertising tools available to political marketers

Facebook is no stranger to moderating and mitigating misinformation on its platform, having long employed machine learning and artificial intelligence systems to help supplement its human-led moderation efforts. At the start of October, the company extended its machine learning expertise to its advertising efforts with an experimental set of generative AI tools that can perform tasks like generating backgrounds, adjusting images and creating captions for an advertiser’s video content. Reuters reported Monday that Meta will specifically not make those tools available to political marketers ahead of what is expected to be a brutal and divisive national election cycle.

Meta’s decision to bar the use of generative AI is in line with much of the social media ecosystem, though, as Reuters is quick to point out, the company “has not yet publicly disclosed the decision in any updates to its advertising standards.” TikTok and Snap both ban political ads on their networks, Google employs a “keyword blacklist” to prevent its generative AI advertising tools from straying into political speech and X (formerly Twitter) is, well, you’ve seen it.

Meta does allow for a wide latitude of exceptions to this rule. The tool ban only extends to “misleading AI-generated video in all content, including organic non-paid posts, with an exception for parody or satire,” per Reuters. Those exceptions are currently under review by the company’s independent Oversight Board as part of a case in which Meta left up an “altered” video of President Biden because, the company argued, it was not generated by an AI.

Facebook, along with other leading Silicon Valley AI companies, agreed in July to voluntary commitments set out by the White House enacting technical and policy safeguards in the development of their future generative AI systems. Those include expanding adversarial machine learning (aka red-teaming) efforts to root out bad model behavior, sharing trust and safety information both within the industry and with the government, as well as development of a digital watermarking scheme to authenticate official content and make clear that it is not AI-generated. 

This article originally appeared on Engadget at https://www.engadget.com/meta-reportedly-wont-make-its-ai-advertising-tools-available-to-political-marketers-010659679.html?src=rss 

PS5 and PS4 are losing X sharing options on November 13

PlayStation 5 and PlayStation 4 consoles will soon drop their X (formerly Twitter) integrations. As such, after November 13, you’ll no longer be able to post clips or screenshots directly to X from either system.

According to a notice Sony shared on its consoles (as noted by Wario64) and a support page, users will lose the ability to “post and view content, trophies and other gameplay-related activities on X directly from PS5/PS4 (or link an X account to do so).” Sony added the notice to its website at some point on Monday, according to a cached version of the support page.

Sony hasn’t revealed exactly why it’s killing off X integration on its consoles. However, it may be related to X shutting down its free API earlier this year, forcing developers and companies to pay if they want to hook into its services. Microsoft stopped letting users post Xbox clips directly to X in April, likely due to that move.

It’ll still be possible to post your PlayStation clips to X. If you have a PS5, you’ll be able to access your recent captures through the PS App and share them to X from your phone. PS4 owners (and PS5 users, if they prefer this approach) will need to use a USB drive to copy screenshots and clips to their computer. Alternatively, you can use one of the several other direct sharing options available on PS4 and PS5, such as YouTube.

This article originally appeared on Engadget at https://www.engadget.com/ps5-and-ps4-are-losing-x-sharing-options-on-november-13-204747608.html?src=rss 

GPT-4 Turbo is OpenAI’s most powerful large language model yet

During its first-ever developer conference on Monday, OpenAI previewed GPT-4 Turbo, a brand new version of the large language model that powers its flagship product, ChatGPT. The newest model is capable of accepting much longer inputs than previous versions — up to 300 pages of text, compared to the current limit of 50. This means that theoretically, prompts can be a lot longer and more complex, and responses might be more meaningful.

OpenAI has also updated the data that GPT-4 Turbo is trained on. The company claims that the newest model now has knowledge about the world until April 2023. The previous version was only caught up until September 2021, although recent updates to the non-Turbo GPT-4 did include the ability to browse the internet to get the latest information.

GPT-4 Turbo will also accept images as prompts directly in the chat box, wherein it can generate captions or provide a description of what the image depicts. It will also handle text-to-speech requests. And users will now be able to upload documents directly and ask the service to analyze them — a capability that other chatbots like Anthropic’s Claude have included for months.

For developers, using the newest model will effectively be three times cheaper. OpenAI said that it was slashing costs for input and output tokens — a unit used by large language models to understand instructions and respond with answers.
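As an illustration of what that pricing change means in practice, here is a hedged sketch of a per-request cost calculator. The per-1,000-token prices below are the ones announced at the time; OpenAI’s current pricing may differ, so treat them as assumptions:

```python
# Rough token-cost comparison. Per-1K-token USD prices are the figures
# announced at OpenAI's 2023 developer conference; current pricing may
# differ, so treat these as illustrative assumptions.

PRICES = {
    "gpt-4":       {"input": 0.03, "output": 0.06},
    "gpt-4-turbo": {"input": 0.01, "output": 0.03},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request, billed separately for input and output tokens."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# An input-heavy request: 10,000 tokens in, 1,000 tokens out
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.2f}")
```

For an input-heavy request like this, the newer model works out to roughly a third of the older model’s cost, which is where the “three times cheaper” framing for input tokens comes from.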

In addition to announcing its newest large language model, OpenAI revealed that ChatGPT now has more than 100 million weekly active users around the world and is used by more than 92 percent of Fortune 500 companies. The company also said that it would defend customers, including enterprises, against legal claims around copyright infringement that might arise from using its products, and would pay any costs incurred as a result.

OpenAI also revealed single-application “mini-ChatGPTs” today: small tools focused on a single task that can be built without even knowing how to code. GPTs created by the community can be immediately shared, and OpenAI will open a “store” where verified builders can make their creations available to anyone.

The company didn’t announce when GPT-4 Turbo would come out of preview and be available more generally. Accessing GPT-4 currently costs $20 a month.

This article originally appeared on Engadget at https://www.engadget.com/gpt-4-turbo-is-openais-most-powerful-large-language-model-yet-211956553.html?src=rss 

YouTube tests AI-generated comment summaries and a chatbot for videos

YouTube announced two new experimental generative AI features on Monday. YouTube Premium subscribers can soon try AI-generated comment summaries and a chatbot that answers your questions about what you’re watching. The features will be opt-in, so you won’t see them unless you’re a paid member who signs up for the experiments during their test periods.

The AI-powered summaries will organize comments into “easily digestible themes.” In a Mr. Beast video YouTube used as an example, the tool generated topics including “People love Bryan the bird,” “Lazarbeam should be in more videos,” “No submarine” and “More 7 day challenges.” You can tap on the topic to view the complete list of associated comments. The tool will only run “on a small number of videos in English” with large comment sections.

If you’re worried about YouTube’s summaries spiraling out of control the way the platform’s comment sections often do, the company says it won’t pull content from unpublished messages, those held for review, any containing blocked words or those from blocked users. Further, creators can use the tool to delete individual comments if they see problematic (or otherwise unwanted) discussions about their videos.

Meanwhile, YouTube’s conversational AI tool gives you a chatbot trained on whichever video you’re watching. Generated by large language models (LLMs), the assistant lets you “dive in deeper” by asking questions about the content and fishing for related recommendations. The company says the AI tool, which appears similar to chatting with Bard, draws on info from YouTube and the web, providing answers without interrupting playback. Eligible users can find it under a new “Ask” button in the YouTube app for Android.

Starting today, YouTube Premium subscribers can opt into the comment summarizer on YouTube’s experiments page. However, the company says you won’t see the “Topics” option for all videos. In addition, the conversational AI tool is only available now “to a small number of people on a subset of videos,” but YouTube Premium subscribers with Android devices will be able to sign up to try it in the coming weeks. The company warns the experimental features “may not always get it right,” a description that can equally apply to Google’s other AI experiments.

This article originally appeared on Engadget at https://www.engadget.com/youtube-tests-ai-generated-comment-summaries-and-a-chatbot-for-videos-213405231.html?src=rss 
