Introducing Virtual Protocol’s Audio-to-Animation (A2A) Subnet on Bittensor!

Why do we need an A2A model for autonomous AI agents?

At Virtual Protocol, our mission has always been to build the largest society of on-chain AI agents, each equipped with their own cognitive, voice, visual, and memory capabilities. We envision a world where virtual AI agents can authentically react to external stimuli, including audio cues, enabling them to act autonomously in virtual worlds. To us, Audio-to-Animation (A2A) capability is one of the most important modular cores of an autonomous AI agent.

Partnership with Bittensor

We believe in the ethos of decentralization and are impressed by the capabilities of the Bittensor (@opentensor) open-source AI community. To accelerate our internal R&D in the Audio-to-Animation (A2A) space, we have decided to launch an Audio-to-Animation model subnet on Bittensor.

Looking ahead

The applications of these dynamic movements are manifold, spanning diverse use cases such as virtual companions, on-chain gaming agents, livestreaming AI idols, and more. As we move ahead, the development of an open-source A2A model will generate outputs for integration into millions of on-chain AI agents across different applications.

The vision for the Audio-to-Animation subnet stretches far beyond character movements: it also lays the groundwork for an Audio-to-Video neural network (think decentralized Sora) that sculpts entire visual experiences. This would open up a whole new realm of creative possibilities, from film to media and beyond.

The subnet is now live on Bittensor testnet. Please join us!

Virtual Protocol’s Audio-to-Animation model is now live on Bittensor testnet, with the UID 142. Join us and learn more about the subnet here: https://x.com/virtuals_io/status/1785345062311960843

Our audio-to-animation subnet will be powering at least two applications incubated by Virtuals Protocol.

Website:

https://tao.virtuals.io

Whitepaper:

https://whitepaper.virtuals.io/audio-to-animation-bittensor-

This post is commissioned by Virtuals Protocol and does not serve as a testimonial or endorsement by The Block. This post is for informational purposes only and should not be relied upon as a basis for investment, tax, legal or other advice. You should conduct your own research and consult independent counsel and advisors on the matters discussed within this post. Past performance of any asset is not indicative of future results.


Disclaimer: The Block is an independent media outlet that delivers news, research, and data. As of November 2023, Foresight Ventures is a majority investor of The Block. Foresight Ventures invests in other companies in the crypto space. Crypto exchange Bitget is an anchor LP for Foresight Ventures. The Block continues to operate independently to deliver objective, impactful, and timely information about the crypto industry. Here are our current financial disclosures.

© 2023 The Block. All Rights Reserved. This article is provided for informational purposes only. It is not offered or intended to be used as legal, tax, investment, financial, or other advice.