Google officially introduced its most capable large language model to date, Gemini. CEO Sundar Pichai said it’s the first of “a new generation of AI models, inspired by the way people understand and interact with the world.” Of course, it’s all very complex, but Google’s multibillion-dollar investment in AI has created a model more flexible than anything it has built before. Let’s break it down.
The system has been developed from the ground up as an integrated multimodal AI. As Engadget’s Andrew Tarantola puts it, “think of many foundational AI models as groups of smaller models all stacked together.” Gemini, by contrast, was trained from the start to seamlessly understand and reason about all kinds of inputs, which should make it capable of handling complex coding requests and even physics problems.
Gemini comes in three sizes: Nano, Pro and Ultra. Nano runs on-device, while Pro will fold into Google’s chatbot, Bard. The improved Bard will be available in the same 170 countries and territories as the existing service. Gemini Pro apparently outscored GPT-3.5, the model that originally powered ChatGPT, on six of eight AI benchmarks. However, there are no comparisons yet between this new challenger and OpenAI’s dominant chatbot running on GPT-4.
Meanwhile, Gemini Ultra, which won’t be available until 2024 at the earliest, scored higher than any other model, including GPT-4, on some benchmark tests. However, this Ultra flavor reportedly requires additional vetting before it’s released to “select customers, developers, partners and safety and responsibility experts” for further testing and feedback.
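For developers, the practical question is how these models will be called. Below is a minimal sketch of what Gemini Pro access could look like through Google’s generative AI Python SDK; the package name, the “gemini-pro” model identifier and the prompt are assumptions for illustration, not details confirmed in the announcement.

```python
# Hypothetical sketch: querying Gemini Pro via Google's generative AI SDK.
# The package name and "gemini-pro" model ID are assumptions; swap in
# whatever Google actually ships to developers.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder API key

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content(
    "In one sentence, what makes a natively multimodal model different "
    "from several single-modality models stacked together?"
)
print(response.text)
```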
— Mat Smith
You can get these reports delivered daily direct to your inbox. Subscribe right here!
The biggest stories you might have missed
A new report says ‘the world is on a disastrous trajectory’ due to climate change
Google’s Gemini AI is coming to Android
How to use Personal Voice on iPhone with iOS 17
Half of London’s famous black cabs are now EVs
AMD’s Ryzen 8040 chips remind Intel it’s falling behind in AI PCs
Could MEMS be the next big leap in headphone technology?
The first affordable headphones with MEMS drivers have arrived
Creative’s Aurvana Ace line brings new speaker technology to the mainstream.
The headphone industry isn’t known for rapid evolution, which makes the arrival of Creative’s Aurvana Ace, the first wireless earbuds with MEMS drivers, notable. MEMS-based drivers need a small amount of “bias” power to work: while Singularity used a dedicated DAC with a specific xMEMS mode, Creative builds the necessary amp chip into the earbuds themselves, demonstrating consumer MEMS headphones in a wireless configuration for the first time. If MEMS is to catch on, it has to work in true wireless headphones.
Apple and Google are probably spying on your push notifications
But the DOJ won’t let them fess up.
Foreign governments likely spy on your smartphone use, and now Senator Ron Wyden’s office is pushing for Apple and Google to reveal exactly how that works. Data about push notifications, the dings apps use to pull your attention back to your phone, can be handed over by either company to government agencies on request.
“Because Apple and Google deliver push notification data, they can be secretly compelled by governments to hand over this information,” Wyden wrote in the letter on Wednesday.
Apple claims it was barred from coming clean about this process, which is why Wyden’s letter specifically targets the Department of Justice. “In this case, the federal government prohibited us from sharing any information, and now that this method has become public, we are updating our transparency reporting to detail these kinds of requests,” Apple said in a statement to Engadget. Meanwhile, Google said it shared “the Senator’s commitment to keeping users informed about these requests.”
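For context on why Apple and Google hold this data at all: push notifications aren’t sent directly from an app’s servers to your phone; they’re relayed through Apple’s APNs or Google’s Firebase Cloud Messaging. The hypothetical sketch below shows the shape of an FCM HTTP v1 send request, purely to illustrate that the payload and device token pass through Google’s infrastructure; the project ID, device token and access token are placeholders, and credential setup is omitted.

```python
# Hypothetical sketch: relaying a push notification through Google's
# Firebase Cloud Messaging (FCM) HTTP v1 API. The key point is that the
# notification payload and device token are POSTed to Google's servers,
# which is why Google (and, analogously, Apple via APNs) sits in the
# delivery path and can be compelled to disclose related records.
import requests

PROJECT_ID = "example-project"              # placeholder Firebase project ID
DEVICE_TOKEN = "device-registration-token"  # placeholder device token
ACCESS_TOKEN = "oauth2-access-token"        # placeholder; real use requires
                                            # service-account credentials

url = f"https://fcm.googleapis.com/v1/projects/{PROJECT_ID}/messages:send"
payload = {
    "message": {
        "token": DEVICE_TOKEN,
        "notification": {
            "title": "New message",
            "body": "You have one unread message",
        },
    }
}

resp = requests.post(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
print(resp.status_code, resp.text)
```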
Researchers develop under-the-skin implant to treat Type 1 diabetes
The device secretes insulin through islet cells.
Scientists have developed a new implantable device that could change the way Type 1 diabetics receive insulin. The thread-like implant, or SHEATH (Subcutaneous Host-Enabled Alginate THread), is installed in a two-step process, which ultimately leads to the deployment of “islet devices,” derived from the cells that produce insulin in our bodies naturally. A 10-centimeter-long islet device secretes insulin through islet cells that form around it, while also receiving nutrients and oxygen from blood vessels to stay alive. Because the islet devices eventually need to be removed, the researchers are still working on ways to maximize the exchange of nutrients and oxygen in large-animal models — and eventually patients.