EXPERT ADVICE TO HELP YOU FIND CLARITY IN THE CHAOS OF A COMPLEX MARKETING LANDSCAPE.
We make technology accessible!
As evidenced by the slow death of Cortana, it’s clear that the AI assistants of yesteryear aren’t meeting expectations. And so they’re being remade.
Amazon is building a new large language model akin to OpenAI’s GPT-4 to power its Alexa voice assistant. Meanwhile, Google is reportedly planning to “supercharge” Google Assistant with AI that’s more like Bard, its generative AI chatbot.
The paradigm shift hasn’t been limited to the realm of Big Tech. Startups, too, are beginning to build their own versions of more helpful AI assistants.
One of the more intriguing ones I’ve stumbled upon is Moemate, an assistant that runs on most any macOS, Windows and Linux machine. Taking the form of an anime-style avatar, Moemate — powered by a combo of models including GPT-4 and Anthropic’s Claude — aims to supply and vocalize the best answer to any question a user asks of it. (“Moe” is a Japanese word relating to cuteness, often in anime.)
That’s not especially novel; ChatGPT does this already, as do Bard, Bing Chat and the countless other chatbots out there. But what sets Moemate apart is its ability to go beyond text prompts and look directly at what’s happening on a PC’s screen.
Sound like a privacy risk? You betcha. Webaverse, the company behind Moemate, claims it stores much of the assistant’s chat logs and preferences locally, on-device. But its privacy policy also reveals that it reserves the right to use the data it does collect, like PC specs and unique identifiers, to comply with legal requests and to investigate suspected illegal activity. Fundamentally, giving software like this access to everything you see and do is, even in the best-case scenario, a considerable risk.
Nevertheless, curiosity spurred me to forge ahead and install Moemate, which is currently in open beta, on my work-supplied Mac notebook.
The initial research papers date back to 2018, but for most, the notion of liquid networks (or liquid neural networks) is a new one. It was “Liquid Time-constant Networks,” published at the tail end of 2020, that put the work on other researchers’ radar. In the intervening time, the paper’s authors have presented the work to a wider audience through a series of lectures.
Ramin Hasani’s TEDx talk at MIT is one of the best examples. Hasani is the Principal AI and Machine Learning Scientist at the Vanguard Group and a Research Affiliate at MIT CSAIL, and served as the paper’s lead author.
“These are neural networks that can stay adaptable, even after training,” Hasani says in the video, which appeared online in January. “When you train these neural networks, they can still adapt themselves based on the incoming inputs that they receive.”
The “liquid” bit refers to that flexibility and adaptability, and it’s a big piece of this. Another big difference is size. “Everyone talks about scaling up their network,” Hasani notes. “We want to scale down, to have fewer but richer nodes.” MIT says, for example, that a team was able to drive a car using a combination of a perception module and a liquid neural network comprising a mere 19 nodes, down from “noisier” networks that can, say, have 100,000.
“A differential equation describes each node of that system,” the school explained last year. “With the closed-form solution, if you replace it inside this network, it would give you the exact behavior, as it’s a good approximation of the actual dynamics of the system. They can thus solve the problem with an even lower number of neurons, which means it would be faster and less computationally expensive.”
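To make the “differential equation per node” idea a bit more concrete, here’s a minimal sketch of a liquid time-constant update in Python. It assumes the dynamics from the 2020 “Liquid Time-constant Networks” paper, dx/dt = -(1/τ + f(x, I))·x + f(x, I)·A, integrated with a fused semi-implicit Euler step; the layer sizes, the sigmoid nonlinearity and all parameter names here are illustrative choices, not the authors’ reference implementation.

```python
import numpy as np

# A toy liquid time-constant (LTC) layer, sketched from the dynamics described
# in "Liquid Time-constant Networks":
#     dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A
# integrated with a fused semi-implicit Euler step. Layer sizes, the sigmoid
# nonlinearity and parameter names below are illustrative assumptions, not the
# authors' reference code.

rng = np.random.default_rng(0)

n_inputs, n_neurons = 4, 19          # 19 nodes, echoing the driving example above
W_in = rng.normal(size=(n_inputs, n_neurons)) * 0.5    # input weights
W_rec = rng.normal(size=(n_neurons, n_neurons)) * 0.1  # recurrent weights
b = np.zeros(n_neurons)              # bias of the nonlinearity
tau = np.ones(n_neurons)             # per-neuron base time constant
A = rng.normal(size=n_neurons)       # per-neuron equilibrium/bias term


def f(x, u):
    """Bounded nonlinearity that modulates each neuron's time constant."""
    return 1.0 / (1.0 + np.exp(-(u @ W_in + x @ W_rec + b)))


def ltc_step(x, u, dt=0.1):
    """One fused (semi-implicit Euler) update of the LTC ODE."""
    gate = f(x, u)
    return (x + dt * gate * A) / (1.0 + dt * (1.0 / tau + gate))


# Roll the cell over a short input sequence. Because `gate` sits in the
# denominator, each neuron's effective time constant shifts with the input --
# the behavior the "liquid" label points at.
x = np.zeros(n_neurons)
for t in range(20):
    u = np.sin(np.linspace(0.0, 1.0, n_inputs) + 0.3 * t)
    x = ltc_step(x, u)
print(x.round(3))
```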
The concept first crossed my radar by way of its potential applications in the robotics world. In fact, robotics makes a small cameo in that paper when the authors discuss potential real-world uses. “Accordingly,” it notes, “a natural application domain would be the control of robots in continuous-time observation and action spaces where causal structures such as LTCs [Liquid Time-Constant Networks] can help improve reasoning.”