Tile Bluetooth trackers are up to 33 percent off right now

Amazon is selling a two-pack of Tile Mate Bluetooth trackers for $33, which matches the record low the set hit for Prime Day in October. A four-pack of Tile Mates in grey is on sale for 40 percent off from the manufacturer. These handy fobs attach to your keys, backpack, or anything else you don’t want to lose. The app — which works with both iPhone and Android — lets you ring the tags to find them nearby, and uses the community of other Tile users to locate items that you misplace out in the world.

The network isn’t as large as Apple’s Find My, but in our tests, it only took around 10 minutes before the Tile was spotted. If you use an Android phone that’s not a Samsung, Tile is likely your best option for access to a large finding network. One point to note is that you’ll need a Tile subscription (currently $30 or $100 per year) to enable left-behind alerts. If all you need is a tracker that will ring loudly and tell you when you’ve left the house (or restaurant) without your keys, you could go with our top tracker pick, the Chipolo, which is 20 percent off for a four-pack right now.

One nice thing about Tile items is they come in different forms, like the smaller Tile Sticker. That one is 33 percent off, making it just $20. The disc comes with a strong adhesive so you can stick it to smaller stuff that you’re apt to misplace, like remotes, headphones and console controllers.

Our top pick for Android users in our Bluetooth tracker guide is the Tile Pro. It’s down to $25, which is just $2 more than the record low it hit for Prime Day in July. It’s the only Tile with a replaceable battery, and we found it to be louder than an AirTag and any of the other Tiles. It’s got a larger key-fob shape and was typically quicker to connect to our phone in our tests than the Mate.

Any tracker opens up the possibility of stalking: using it to track a person without their consent or knowledge. Tile trackers even offer an anti-theft mode that makes it harder to detect an unknown tracker that’s moving with you. But to enable the feature, the tracker owner must submit ID verification and acknowledge that misuse will result in a $1 million fine.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/tile-bluetooth-trackers-are-up-to-33-percent-off-right-now-170445150.html?src=rss 

Looking back at 25 years of the ISS

Wednesday marks the 25th anniversary of the International Space Station’s (ISS) physical assembly in orbit. On December 6, 1998, the crew aboard the space shuttle Endeavour attached the US-built Unity node to the Russian-built Zarya module, kicking off the modular construction of the ISS. A quarter century later, we look back at the milestones and breakthroughs from one of humanity’s most impressive marvels of engineering and international cooperation.

The ISS, which orbits the Earth 16 times every 24 hours at a speed of five miles per second, has been inhabited by researchers for over 23 years. It’s the product of five space agencies from 15 countries. NASA, Roscosmos (Russia’s national space agency), ESA (European Space Agency), JAXA (Japan Aerospace Exploration Agency) and CSA (Canadian Space Agency) have contributed to the station’s assembly and operation.
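The orbital figures above are easy to sanity-check with quick arithmetic. A rough sketch, using only the numbers stated in this article:

```python
# Sanity check on the stated ISS figures: 16 orbits per 24 hours
# at a speed of 5 miles per second.
SECONDS_PER_DAY = 24 * 60 * 60
orbits_per_day = 16
speed_mps = 5  # miles per second

period_s = SECONDS_PER_DAY / orbits_per_day  # seconds per orbit
period_min = period_s / 60                   # minutes per orbit
orbit_length_mi = speed_mps * period_s       # miles traveled per orbit

print(f"Orbital period: {period_min:.0f} minutes")
print(f"Distance per orbit: {orbit_length_mi:,.0f} miles")
```

That works out to roughly a 90-minute orbit covering about 27,000 miles, consistent with the station’s published orbital period.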

NASA 25th-anniversary event

NASA will hold a live-streamed event on Wednesday to mark the quarter-century anniversary of the Zarya and Unity modules linking up. All seven STS-88 Space Shuttle Mission crew members will join NASA Associate Administrator Bob Cabana (mission commander) and ISS Program Manager Joel Montalbano to discuss the milestone.

You can watch it at 12:55PM ET on Wednesday.

From ink to orbit

The ISS’s official journey began in the early 1990s, when the United States’ Freedom (ordered by President Ronald Reagan in 1984) and Russia’s Mir-2 space station projects were in danger of (literally) never getting off the ground. Freedom was in jeopardy primarily due to a lack of Congressional funding amid rising costs, while Mir-2 was on the brink partially because of financial hardships following the collapse of the Soviet Union.

On September 2, 1993, the two nations, each needing an international ally to forge ahead, signed an agreement to combine their programs and collaborate on a joint mission that would have seemed wildly implausible a few years earlier. US Vice President Al Gore and Russian Prime Minister Viktor Chernomyrdin inked the pact, marking the formal conception of the cosmic laboratory we know today as the ISS.

US Vice President Al Gore (left) and Russian Prime Minister Viktor Chernomyrdin in 1993

VITALY ARMAND via Getty Images

The following years included a design overhaul to fold Russian technology into America’s existing Freedom plans; a milestone 1995 docking of NASA’s Atlantis with Russia’s Mir station, epitomizing the fruit of the once-far-fetched collaboration; the addition of funding and cooperation from Europe, Canada and Japan in 1996; and Russia’s launch of Zarya a month before the ISS assembly began. That all led to the day 25 years ago when the two nations’ space tech linked together, sounding the death knell for the Cold War-era space race.

The first crewed mission began on November 2, 2000, when NASA astronaut Bill Shepherd and cosmonauts Yuri Gidzenko and Sergei Krikalev stepped onboard. The inaugural crew spent four months in space, laying the groundwork for subsequent crews. (The American record for the most cumulative time living and working in space belongs to Peggy Whitson, who reached 665 days aboard the ISS in 2017.)


The US Lab Module linked to the station in February 2001, expanding the station’s onboard living space by 41 percent. Four years later, Congress named the US portion a national laboratory. Far more than a symbolic gesture (although it was also that), the designation opened the door to funding and research from a much more comprehensive array of institutions, including universities, other government agencies and private businesses. In 2008, laboratories from Europe and Japan joined the ISS.

The ISS’s construction and expansion from 1998 to 2010 amassed around 900,000 pounds of modules. All told, the station represents about $100 billion worth of gear spinning around the globe.

Research and breakthroughs


During the ISS’s more than 100,000 orbits of the Earth, it has ushered in breakthroughs in areas ranging from disease research to the effects of microgravity on the human body.

Studying how proteins, cells and biological processes behave in microgravity has boosted research into Alzheimer’s, Parkinson’s, heart disease and asthma. Many of these studies wouldn’t have been possible on Earth. Meanwhile, protein crystal growth experiments have sparked advances in developing treatments for conditions including cancer, gum disease and Duchenne muscular dystrophy.

ISS researchers made surprising discoveries about “cool flames,” which can burn at extremely low temperatures. Nearly impossible to study outside of microgravity, these flames have challenged our previous understanding of combustion. The research may open new frontiers for internal combustion engines (ICE), allowing them to run cleaner and more efficiently.

Studies aboard the space station have contributed significantly to our knowledge of human muscle atrophy and bone loss. (ISS astronauts typically work out at least two hours daily to prevent these conditions.) Studying how prolonged time in microgravity affects muscle deterioration and recovery also applies to Earthbound patients stuck in bed for extended periods. In addition, the research can help us learn more about conditions like osteoporosis, leading to improved preventative measures and treatments. It has also helped scientists better understand broader biological changes in microgravity, which could pay dividends if or when humans colonize Mars.

Water purification systems designed to sustain astronauts over long periods have also borne fruit on Earth. ISS astronauts recycle 98 percent of their pee and sweat using highly efficient and compact systems. This has led to the technology’s use in agriculture, disaster relief and aid provision for less developed areas.

ISS astronauts have also studied Bose-Einstein condensates (BECs), a “fifth state of matter” that deviates significantly from known states like solids, liquids, gases and plasmas. In 2018, the ISS’s Cold Atom Lab produced a BEC in orbit for the first time. The lab’s extremely low temperatures and the station’s microgravity allow for longer observation times, helping researchers learn more about the behavior of atoms and BECs. Not only is this crucial to quantum physics studies, it could aid in developing more advanced quantum technologies down the road.

For more detail on the ISS’s breakthroughs, NASA has a dedicated writeup from 2020.

Decommissioning


The ISS is currently scheduled for decommissioning in January 2031. (Russia currently plans to leave in 2028.) Its late-’90s infrastructure is aging quickly, and the station would grow prohibitively expensive to maintain over the long haul. Government and commercial orbital labs will likely pick up the slack in the years that follow.

When its time comes, the ISS will undergo a controlled deorbit. As for what that might involve, Kirk Shireman, deputy manager of NASA’s space station program, broached the subject with Space.com in 2011. “We’ve done a lot of studies,” he said. “We have found an orbit and a change in velocity that we believe is achievable, and it creates a debris footprint that’s all in water in an unpopulated area.”

As Engadget’s Andrew Tarantola wrote about the ISS’s pending demise:

Beginning about a year before the planned decommissioning date, NASA will allow the ISS to begin degrading from its normal 240-mile high orbit and send up an uncrewed space vehicle (USV) to dock with the station and help propel it back Earthward. The ultimate crew from the ISS will evacuate just before the station hits an altitude of 115 miles, at which point the attached USV will fire its rockets in a series of deorbital burns to set the station into a capture trajectory over the Pacific Ocean.

NASA plans to guide any remaining bits into a remote area of the South Pacific Ocean. “We’ve been working on plans and update the plans periodically,” Shireman said. “We don’t want to ever be in a position where we couldn’t safely deorbit the station. It’s been a part of the program from the very beginning.”

This article originally appeared on Engadget at https://www.engadget.com/looking-back-at-25-years-of-the-iss-173155049.html?src=rss 

Apple and Google are probably spying on your push notifications

Foreign governments likely spy on your smartphone usage, and now Senator Ron Wyden’s office is pushing for Apple and Google to reveal exactly how that works. Push notifications, the dings you get from apps calling your attention back to your phone, may be handed over by a company to government agencies on request. But it appears the Department of Justice won’t let companies come clean about the practice.

Push notifications don’t actually come straight from the app. Instead, they pass through the smartphone platform provider (Apple for iPhone, Google for Android) to reach your screen. This has created a murky opening for government surveillance. “Because Apple and Google deliver push notification data, they can be secretly compelled by governments to hand over this information,” Wyden wrote in the letter on Wednesday.
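To illustrate why the platforms end up holding this data, here’s a minimal sketch of the kind of payload an app’s server hands to a push relay. It’s modeled loosely on FCM-style HTTP payloads; the field names and token are illustrative, not a working client:

```python
import json

# Sketch of what an app's server sends to a platform push relay.
# The point: the relay (Google or Apple) receives both the device
# token AND the notification content -- exactly the data a government
# could compel the platform to disclose. Field names are illustrative.
def build_push_payload(device_token: str, title: str, body: str) -> str:
    message = {
        "message": {
            "token": device_token,   # identifies the target device/app install
            "notification": {        # content visible to the relay in transit
                "title": title,
                "body": body,
            },
        }
    }
    return json.dumps(message)

payload = build_push_payload("abc123-device-token", "New message", "Hi there")
parsed = json.loads(payload)
print(parsed["message"]["notification"]["title"])
```

End-to-end encrypted apps can avoid putting message content in the notification body, but the token and delivery metadata still pass through the relay.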

Apple says it was barred from coming clean about this process, which is why Wyden’s letter specifically targets the Department of Justice: he asks the DOJ to let Apple and Google tell customers and the general public about the demand for these app notification records. “In this case, the federal government prohibited us from sharing any information and now that this method has become public we are updating our transparency reporting to detail these kinds of request,” Apple said in a statement to Engadget. Apple’s next transparency report will include requests for push notification tokens, according to the company. “We were the first major company to publish a public transparency report sharing the number and types of government requests for user data we receive, including the requests referred to by Senator Wyden. We share the Senator’s commitment to keeping users informed about these requests,” Google said in a statement.

It’s even more complicated because apps can’t do much about it. Even if an app makes its own security pledges, any app that delivers push notifications must use the Apple or Google system to do so. In theory, this means your private messaging could be shared with a foreign government if you’re getting push notifications from the app. That includes any metadata about the notification, too, like account information.

The revelation about push notifications comes at a time when privacy and security have become a selling point. Companies advertise how they’ll keep your information safe, but as more loopholes come to light, it’s becoming harder to suss out what’s actually trustworthy.

This article originally appeared on Engadget at https://www.engadget.com/apple-and-google-are-probably-spying-on-your-push-notifications-154543184.html?src=rss 

The first affordable headphones with MEMS drivers don’t disappoint

The headphone industry isn’t known for its rapid evolution. There are developments like spatial sound and steady advances in Bluetooth audio fidelity, but for the most part, the industry counts advances in decades rather than years. That makes the arrival of the Aurvana Ace headphones — the first wireless buds with MEMS drivers — quite the rare event. I recently wrote about what exactly MEMS technology is and why it matters, but Creative is the first consumer brand to sell a product that uses it.

Creative unveiled two models, the Aurvana Ace ($130) and the Aurvana Ace 2 ($150), in tandem. Both feature MEMS drivers; the main difference is that the Ace supports high-resolution aptX Adaptive while the Ace 2 has top-of-the-line aptX Lossless (sometimes marketed as “CD quality”). The Ace 2 is the model we’ll be referring to from here on.

In fairness to Creative, the inclusion of MEMS drivers alone would be a unique selling point, but the aforementioned aptX support adds another layer of HiFi credentials to the mix. Then there’s adaptive ANC and other details like wireless charging that give the Ace 2 a strong spec sheet for the price. Some obvious omissions include small quality-of-life features, such as pausing playback when you remove a bud, and audio personalization. Those would have been two easy wins that would make both models fairly hard to beat for the price, in terms of features if nothing else.

Photo by James Trew / Engadget

When I tested the first ever xMEMS-powered in-ear monitors, the Singularity Oni, the extra detail in the high end was instantly obvious, especially in genres like metal and drum & bass. The lower frequencies were more of a challenge, with xMEMS, the company behind the drivers in both the Oni and the Aurvana, conceding that a hybrid setup with a conventional bass driver might be the preferred option until its own speakers can handle more bass. That’s exactly what we have here in the Aurvana Ace 2.

The key difference between the Aurvana Ace 2 and the Oni, though, is more important than a good low-end thump (if that’s even possible). MEMS-based headphones need a small amount of “bias” power to work. This doesn’t impact battery life, but Singularity relied on a dedicated DAC with a specific xMEMS “mode,” whereas Creative uses an amp chip that demonstrates, for the first time, consumer MEMS headphones in a wireless configuration. The popularity of true wireless (TWS) headphones these days means that if MEMS is to catch on, it has to be compatible.

The good news is that even without the expensive iFi DAC that the Singularity Oni IEMs required, the Aurvana Ace 2 brings more clarity in the higher frequencies than rival products at this price. That’s to say, even with improved bass, the MEMS drivers clearly favor the mid and high frequencies. The result is a sound that strikes a good balance between detail and body.

Listening to “Master of Puppets,” the iconic chords had better presence and “crunch” than on a $250 pair of on-ear headphones I tried. Likewise, the aggressive snares in System of a Down’s “Chop Suey!” pop right through just as you’d hope. When I listened to the same song on the $200 Grell Audio TWS/1 with personalized audio activated, the two were actually comparable. The difference is that Creative’s sounded that way out of the box, though the Grell buds have slightly better dynamic range overall and more emphasis on the vocals.

For more electronic genres, the Aurvana Ace’s hybrid setup really comes into play. Listening to Dead Prez’s “Hip-Hop” really shows off the bass capabilities, with more oomph here than both the Grell and a pair of $160 House of Marley Redemption 2 ANC earbuds — but it never felt overdone or fuzzy/loose.

Photo by James Trew / Engadget

Despite besting other headphones in specific like-for-like comparisons, on the whole the nuances and differences between the headphones are harder to quantify. The only set I tested that sounded consistently better, to me, was the Denon PerL Pro (formerly known as the NuraTrue Pro), but at $349 those are also the most expensive.

It would be remiss of me not to point out that there were also many songs and tests where differences between the various sets of earbuds were much harder to discern. With two iPhones, one Spotify account and a lot of swapping between headphones during the same song, it’s possible to tease out small preferences between different sets, but the form factor, consumer preference and price point dictate that, to some extent, they all broadly overlap sonically.

The promise of MEMS drivers isn’t just about fidelity, though. The claim is that the lack of moving parts and a semiconductor-like fabrication process ensure a higher level of consistency, with less need for calibration and tuning. The end result is a more reliable production process, which should mean lower costs. In turn, that could translate into better value for money, or at least a potentially more durable product (if companies choose to pass the savings on, of course).

For now, we’ll have to wait and see if other companies explore using MEMS drivers in their own products or whether it might remain an alternative option alongside technology like planar magnetic drivers and electrostatic headphones as specialist options for enthusiasts. One thing’s for sure: Creative’s Aurvana Ace series offers a great audio experience alongside premium features like wireless charging and aptX Lossless for a reasonable price — what’s not to like about that?

This article originally appeared on Engadget at https://www.engadget.com/the-first-affordable-headphones-with-mems-drivers-review-161536317.html?src=rss 

Twitch to cease operations in South Korea over ‘prohibitively expensive’ network fees

Twitch is leaving South Korea, with plans to cease all operations on February 27. This is due to ‘prohibitively expensive’ networking fees, according to CEO Dan Clancy. The news is a major bummer, as the country is one of the largest esports markets in the world, with some of the most competitive League of Legends and Starcraft players around.

Clancy calls this a “unique situation,” noting that operating in South Korea ends up being ten times more expensive than in other countries. He went on to write that Twitch undertook a “significant effort” to continue operations, but the Amazon-owned company simply couldn’t afford it.

Some of these efforts included incorporating a lower-cost peer-to-peer model and downgrading the resolution of streams to 720p, according to TechCrunch. The company had been running at a significant loss and it decided to, well, stop doing that. 

“I want to reiterate that this was a very difficult decision and one we are very disappointed we had to make. Korea has always and will continue to play a special role in the international esports community and we are incredibly grateful for the communities they built on Twitch,” wrote Clancy.

Netflix has also been open about its struggles to continue operations in South Korea. The streaming giant and local internet service provider SK Broadband had been tossing lawsuits back and forth regarding networking fees before settling back in September. As usual, consumers got the shaft on this one, as Netflix ended up raising prices by around 13 percent.

So what’s the issue exactly? It all boils down to a particular type of internet traffic tax employed in South Korea called the “Sending Party Network Pays” (SPNP) model. This tax requires the tech company, Twitch in this case, to pay a fee to the ISP for traffic to be delivered to the end user. Foreign companies resisted these efforts for years but there have been recent crackdowns, and here we are.
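As a rough illustration of why a sender-pays fee scales so badly for a video platform, consider a hypothetical per-gigabyte billing model. All rates and audience numbers below are made up purely for illustration:

```python
# Back-of-envelope look at why sender-pays network fees hit video
# platforms hard. Per-GB rates here are hypothetical, not real figures.
def monthly_fee(viewers: int, hours_per_viewer: float,
                gb_per_hour: float, rate_per_gb: float) -> float:
    """Total traffic delivered to end users, billed to the sender per GB."""
    total_gb = viewers * hours_per_viewer * gb_per_hour
    return total_gb * rate_per_gb

# Same audience, two hypothetical rates: a typical market vs. one
# ten times pricier (the multiple Clancy cites for South Korea).
base = monthly_fee(viewers=100_000, hours_per_viewer=20,
                   gb_per_hour=1.5, rate_per_gb=0.01)
korea = monthly_fee(viewers=100_000, hours_per_viewer=20,
                    gb_per_hour=1.5, rate_per_gb=0.10)
print(f"${base:,.0f} vs ${korea:,.0f}")
```

Because the fee scales linearly with hours watched, a platform’s costs grow with its own success, which is why Twitch experimented with peer-to-peer delivery and 720p caps before giving up.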

South Korea is the first country to enforce the SPNP model, but other nations are looking to follow suit. India, for instance, has expressed interest in changing up its telecom rules in favor of ISPs, and the EU has been debating the issue since March. As for Twitch, the company’s hosting a live stream today to address concerns from Korean users.

This article originally appeared on Engadget at https://www.engadget.com/twitch-to-cease-operations-in-south-korea-over-prohibitively-expensive-network-fees-163041382.html?src=rss 

AI joins you in the DJ booth with Algoriddim’s djay Pro 5

Algoriddim’s djay Pro software has always had close ties to Apple and often been at the forefront of new DJ tech, especially on Mac, iOS or iPadOS. Today marks the launch of djay Pro version 5 and it includes a variety of novel features, many of which leverage the company’s AI and a new partnership with the interactive team at AudioShake.

There are several buzzy trademarked names to remember this time around including Next-generation Neural Mix, Crossfader Fusion and Fluid Beatgrid. These are the major points of interest in djay Pro 5, with only a passing mention of improved stem separation on mobile, UI refreshes for the library and a new simplified Starter Mode that may cater to new users on the platform. The updates include some intriguing AI-automated features that put the system in control of more complex maneuvers. Best of all, existing users get it all for free as part of their subscription.

AudioShake and Algoriddim have been working on their audio separation tech (like many other companies) and are calling this refreshed version Next-generation Neural Mix. We’re told to expect crisp, clear separation of elements like vocals, harmonies and drums. The tools have also been optimized for mobile devices, as long as they run a supported OS.

Fluid Beatgrid is perhaps one of the easiest features to understand, and it seems to be an underlying part of the crossfader updates. Anyone who’s used beatgrids knows they’re rarely perfect on first analysis and often take a bit of work to lock in. Songs with live instrumentation that shifts tempo naturally, EDM with tempo changes during breakdowns and even older dance tracks that meander slightly throughout playback have all been pain points. Fluid Beatgrid is supposed to use AI to accommodate those shifts and find the right points to mark.

Crossfader Fusion is where stems, automation and those beatgrids all come into play. There are now a variety of settings for the crossfader beyond the usual curves. One of the highlighted modes is the Neural Mix (Harmonic Sustain) setting. This utilizes stem separation and automated level adjustments as you go from one track to the next.

For those who enjoy cutting and scratching, there are crossfade settings that use automated curves and spatial effects so that, for example, the outgoing track’s vocals can drop out automatically as you cut into the next track. The incoming track’s vocals can be highlighted for scratching, and as your mix completes the transition, things are blended together further with AI.

There’s even an example provided that shows how you can mix across vastly different BPMs: the incoming song matches up with the slower outgoing track, but its original tempo is slowly reintroduced during the transition, leaving you with the new, faster tempo.

Existing users should be alerted to the update, but newcomers can find djay Pro version 5 starting today on the App Store. While there will continue to be a free version, the optional Pro subscription costs $7 per month or $50 per year and gives you access to all the features across Mac, iOS and iPadOS. The app supports devices running macOS 10.15 or later and iOS 15 / iPadOS 15 or later.

And as a side note, we’re told that djay Pro for Windows was leveled up in September and will get Fluid Beatgrid in an update as soon as next week. Newer features like Crossfader Fusion are expected on that platform in the near future.

This article originally appeared on Engadget at https://www.engadget.com/ai-joins-you-in-the-dj-booth-with-algoriddims-djay-pro-5-150007224.html?src=rss 

Google’s Gemini AI is coming to Android

Google is bringing Gemini, the new large language model it just introduced, to Android, beginning with the Pixel 8 Pro. The company’s flagship smartphone will run Gemini Nano, a version of the model built specifically to run locally on smaller devices, Google announced in a blog post. The Pixel 8 Pro is powered by the Google Tensor G3 chip designed to speed up AI performance.

This lets the Pixel 8 Pro add smarts to several existing features. The phone’s Recorder app, for instance, has a Summarize feature that currently needs a network connection to give you a summary of recorded conversations, interviews and presentations. But thanks to Gemini Nano, the phone will now be able to provide a summary without needing a connection at all.

Gemini smarts will also power Gboard’s Smart Reply feature. Gboard will suggest high-quality responses to messages and be aware of context in conversations. The feature is currently available as a developer preview and needs to be enabled in settings. However, it only works with WhatsApp currently and will come to more apps next year.

“Gemini Nano running on Pixel 8 Pro offers several advantages by design, helping prevent sensitive data from leaving the phone, as well as offering the ability to use features without a network connection,” wrote Brian Rakowski, Google Pixel’s vice president of product management.

As part of today’s AI push, Google is upgrading Bard, the company’s ChatGPT rival, with Gemini as well, so you should see significant improvements when using the Pixel’s Assistant with Bard experience. Google is also rolling out a handful of AI-powered productivity and customization updates on other Pixel devices, including the Pixel Tablet and the Pixel Watch, although it isn’t immediately clear what they are.


Gemini Nano is the smallest version of Google’s large language model, while Gemini Pro is a larger model that will power not just Bard but other Google services like Search, Ads and Chrome, among others. Gemini Ultra, Google’s beefiest model, will arrive in 2024 and will be used to further AI development.

Although today’s updates are focused on the Pixel 8 Pro, Google spoke today about AI Core, an Android 14 service that allows developers to access AI features like Nano. Google says AI Core is designed to run on “new ML hardware like the latest Google Tensor TPU and NPUs in flagship Qualcomm Technologies, Samsung S.LSI and MediaTek silicon.” The company adds that “additional devices and silicon partners will be announced in the coming months.”

This article originally appeared on Engadget at https://www.engadget.com/googles-gemini-ai-is-coming-to-android-150025984.html?src=rss 

Google announces new AI processing chips and a cloud ‘hypercomputer’

Undoubtedly, 2023 has been the year of generative AI, and Google is marking its end with even more AI developments. The company has announced the creation of its most powerful TPU (formally, a Tensor Processing Unit) yet, Cloud TPU v5p, and an AI Hypercomputer from Google Cloud. “The growth in [generative] AI models — with a tenfold increase in parameters annually over the past five years — brings heightened requirements for training, tuning, and inference,” Amin Vahdat, Google’s Engineering Fellow and Vice President for the Machine Learning, Systems, and Cloud AI team, said in a release.

The Cloud TPU v5p is an AI accelerator for training and serving models. Google designed Cloud TPUs for models that are large, have long training periods, are mostly made of matrix computations and have no custom operations inside their main training loop, built with frameworks such as TensorFlow or JAX. Each TPU v5p pod packs 8,960 chips when using Google’s highest-bandwidth inter-chip interconnect.

The Cloud TPU v5p follows previous iterations like the v5e and v4. According to Google, the TPU v5p delivers twice the FLOPS of the TPU v4 and is four times more scalable in terms of FLOPS per pod. It can also train large language models 2.8 times faster, and embeddings-dense models 1.9 times faster, than the TPU v4.
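Those multipliers make the practical impact easy to estimate. A quick sketch applying the stated ratios to a hypothetical baseline training run (the 30-day figure is invented; only the multipliers come from Google's announcement):

```python
# Google's stated v5p-vs-v4 ratios, applied to a hypothetical baseline.
# The 30-day baseline run is made up for illustration; only the
# multipliers come from the announcement.
V5P_VS_V4 = {
    "flops": 2.0,            # raw compute per chip
    "pod_scalability": 4.0,  # FLOPS per pod
    "llm_training": 2.8,     # LLM training speedup
}

baseline_v4_days = 30.0  # hypothetical v4 training run
v5p_days = baseline_v4_days / V5P_VS_V4["llm_training"]
print(f"{v5p_days:.1f} days")  # roughly 10.7 days for the same job
```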

Then there’s the new AI Hypercomputer, an integrated system of open software, performance-optimized hardware, machine learning frameworks and flexible consumption models. The idea is that this amalgamation will improve productivity and efficiency over treating each piece separately. The AI Hypercomputer’s performance-optimized hardware utilizes Google’s Jupiter data center network technology.

In a change of pace, Google is providing developers with open software and “extensive support” for machine learning frameworks such as JAX, PyTorch and TensorFlow. This announcement comes on the heels of Meta and IBM’s launch of the AI Alliance, which prioritizes open sourcing (and which Google is notably not involved in). The AI Hypercomputer also introduces two flexible consumption models: Flex Start Mode and Calendar Mode.

Google shared the news alongside the introduction of Gemini, a new AI model that the company calls its “largest and most capable,” and its rollout to Bard and the Pixel 8 Pro. It will come in three sizes: Gemini Pro, Gemini Ultra and Gemini Nano. 

This article originally appeared on Engadget at https://www.engadget.com/google-announces-new-ai-processing-chips-and-a-cloud-hypercomputer-150031454.html?src=rss 

Google’s answer to GPT-4 is Gemini: ‘the most capable model we’ve ever built’

OpenAI’s spot atop the generative AI heap may be coming to an end as Google officially introduced its most capable large language model to date on Wednesday, dubbed Gemini 1.0. It’s the first of “a new generation of AI models, inspired by the way people understand and interact with the world,” CEO Sundar Pichai wrote in a Google blog post.

“Ever since programming AI for computer games as a teenager, and throughout my years as a neuroscience researcher trying to understand the workings of the brain, I’ve always believed that if we could build smarter machines, we could harness them to benefit humanity in incredible ways,” Pichai continued.

The result of extensive collaboration between Google’s DeepMind and Research divisions, Gemini has all the bells and whistles cutting-edge genAIs have to offer. “Its capabilities are state-of-the-art in nearly every domain,” Pichai declared. 

The system has been developed from the ground up as an integrated multimodal AI. Many foundational models can essentially be thought of as groups of smaller models stacked in a trench coat, with each individual model trained to perform its specific function as a part of the larger whole. That’s all well and good for shallow functions like describing images, but not so much for complex reasoning tasks.

Google, conversely, pre-trained and fine-tuned Gemini “from the start on different modalities,” allowing it to “seamlessly understand and reason about all kinds of inputs from the ground up, far better than existing multimodal models,” Pichai said. Being able to take in all these forms of data at once should help Gemini provide better responses on more challenging subjects, like physics.

Gemini can code as well. It’s reportedly proficient in popular programming languages including Python, Java, C++ and Go. Google has even leveraged a specialized version of Gemini to create AlphaCode 2, a successor to last year’s competition-winning generative AI. According to the company, AlphaCode 2 solved twice as many challenge questions as its predecessor did, which would put its performance above an estimated 85 percent of the previous competition’s participants.

While Google did not immediately share Gemini’s parameter count, the company did tout the model’s operational flexibility and ability to work in form factors from large data centers to local mobile devices. To accomplish this, Gemini is being made available in three sizes: Nano, Pro and Ultra.

Nano, unsurprisingly, is the smallest of the trio and designed primarily for on-device tasks. Pro is the next step up, a more versatile offering than Nano, and will soon be getting integrated into many of Google’s existing products, including Bard.

Starting Wednesday, Bard will begin using a specially tuned version of Pro that Google promises will offer “more advanced reasoning, planning, understanding and more.” The improved Bard chatbot will be available in the same 170 countries and territories that regular Bard currently is, and the company reportedly plans to expand the new version’s availability as we move through 2024. Next year, with the arrival of Gemini Ultra, Google will also introduce Bard Advanced, an even beefier AI with added features.

Pro’s capabilities will also be accessible via API calls through Google AI Studio or Google Cloud Vertex AI. Search (specifically SGE), Ads, Chrome and Duet AI will also see Gemini functionality integrated into their features in the coming months.

Gemini Ultra won’t be available until at least 2024, as it reportedly requires additional red-team testing before being cleared for release to “select customers, developers, partners and safety and responsibility experts” for testing and feedback. But when it does arrive, Ultra promises to be an incredibly powerful tool for further AI development.

This article originally appeared on Engadget at https://www.engadget.com/googles-answer-to-gpt-4-is-gemini-the-most-capable-model-weve-ever-built-150039571.html?src=rss 

GTA 6, The Game Awards and the great indie debate | This week’s gaming news

After a slow month in the world of video game marketing, things are starting to pick up. The past week has given us a first look at the new Fallout TV show, a few release dates and a trailer for a little game called Grand Theft Auto VI — and the Game Awards are still to come. What good timing for us to launch a weekly video game show to dig into the news.

This week’s stories

The Game Awards

The Game Awards will go live on Thursday, December 7, at 7:30PM ET. Expect a few hours of game announcements, new trailers, awkward interviews and musical performances, including one by the fictional band from Alan Wake 2.

Fallout, but on TV!

Amazon dropped the first trailer for its live-action Fallout series — and, man, it sure does look like Fallout. The show is set in Los Angeles 200 years after the nuclear apocalypse, and it stars Yellowjackets actor Ella Purnell, plus Walton Goggins, Aaron Moten and Kyle MacLachlan. It’s heading to Prime Video on April 12, 2024.

GTA VI is coming in 2025

The biggest news item this week, pre-The Game Awards, was the first official trailer for Grand Theft Auto VI. As of this writing, it’s already reached 105 million views on YouTube — a pace usually reserved for only the finest K-Pop videos. GTA VI is set in Vice City, it’s coming out in 2025 and I’m sure we’ll hear a lot more about it before then.

What is an indie game?

The meat of this week’s episode focuses on the longstanding debate about what “indie” actually means. One of the titles nominated for Best Independent Game at the Game Awards, Dave the Diver, was commissioned and bankrolled by Nexon, one of the largest video game studios in South Korea. It’s not indie, and its inclusion in this category highlights how little consensus there still is around the definition.

This is kinda my area of expertise — it’s my 13th year as a video game journalist and indie games have always been a core feature of my reporting. I’ve spent a lot of time thinking about what I mean when I say “indie,” so I sat down and formalized this thought process. There are three questions that can help define a game in an indie gray area: Is the team on a major studio’s payroll? Is the game or team owned by a platform holder? Do the artists have creative control? I dug into these questions this week, and discussed how having a publisher isn’t related to the indie label at all.

But when all else fails in the indie debate, there’s one ultimate question to ask: Can this team exist without my support? This is why the distinction matters: The indie label helps to identify the artists that would not exist without game sales, crowdfunding or word-of-mouth support from players. It exists to determine the teams that are truly living and dying on game sales, and it helps players decide where to spend their money. If Dave the Diver didn’t sell well, its team would likely have the chance to try again. If, say, Pizza Tower didn’t sell well, its studio could have folded.

I think this is an important conversation, so give that story a read and let us know in the comments if you think my questions help or just make things more confusing. It’s probably a little bit of both.

Now playing

I’ve been thoroughly enjoying The Cosmic Wheel Sisterhood on Steam Deck — it’s the latest game from Deconstructeam, the indie studio that made The Red Strings Club and Gods Will Be Watching. The Cosmic Wheel Sisterhood is a game about building tarot decks, manipulating elections, betraying a coven of witches and seducing everyone; it’s sexy and well-written, and I highly recommend it. Another game I’m looking forward to is A Highland Song from indie studio Inkle; it just came out this week and I’m excited to dive in.

Let us know in the comments what you’re playing! Also, we still don’t know what to call this weekly video game news show, so leave us some name suggestions, too. Thanks!

This article originally appeared on Engadget at https://www.engadget.com/gta-6-the-game-awards-and-the-great-indie-debate–this-weeks-gaming-news-153051306.html?src=rss 
