Just talked with Max Headroom and Michael Scott - my wife is an Office fan, so she knows the references, and I know enough Max to ask the right things.
Overall, a fun experience. I think MH was better than Scott. Max was missing the glitches and the moving background, but I'd imagine both of those are technically challenging to achieve.
Michael Scott's mouth seemed a bit wrong - I was thinking Michael J. Fox, but my wife corrected that to Jason Bateman, which is much more like it. He knew the Office references all right, but wasn't quite Steve Carell enough.
The default state while it was listening could do with some work, I think - that was the least convincing bit; Max would have just glitched or even stayed completely still. Michael Scott seemed too synthetic at this point.
Don't get me wrong, this was pretty clever and I enjoyed it, just trying to say what I found lacking without trying to sound like I could do better (which I couldn't!).
Thanks for the feedback. This is definitely a demo where every piece matters for maximizing the enjoyment factor. We spent the most effort on optimizing video quality and latency, but not a lot on tweaking the character prompts that go into the LLM. Turns out that matters a lot too.
This is impressive. The video chat works well. It is just a hair away from a very comfortable conversation. I'm excited to see where you have it a year from now, if it turns out to be financially viable. Good luck!
Thank you! Very much agree that we need to improve speed to make the conversation more comfortable. Our target is <2sec latency (as measured by time to first byte). The other building blocks of the stack (like interruption handling, etc) will get better in the coming months as well. In 1 year things should feel like the equivalent of a zoom conversation with another human.
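(For the curious: a minimal client-side sketch of how one could measure that time-to-first-byte - the endpoint URL and payload below are placeholders for illustration, not our real API.)

    import time
    import requests

    start = time.monotonic()
    # hypothetical streaming endpoint; URL and payload are made up for illustration
    with requests.post("https://example.com/v1/converse",
                       json={"text": "hi"}, stream=True) as resp:
        for chunk in resp.iter_content(chunk_size=None):
            print(f"TTFB: {time.monotonic() - start:.2f}s")  # first byte arrived
            break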
I am very much fascinated by this talking virtual avatar thing. I tried video-retalking https://github.com/OpenTalker/video-retalking just to see how far I could get making a talking avatar, but it is tremendously difficult. This holds huge possibilities, though, and I hope such models eventually become cheaper to run. I know this is far superior and probably quite different, but I hope to someday find open-source solutions like Lemon Slice that I can experiment with.
Nice! Thanks for sharing. I hadn't seen that paper before. Looks like they take in a real-world video and then re-generate the mouth to get lip sync. In our solution, we take in an image and generate the entire video.
I am sure there will be open-source solutions for fully generated real-time video within the next year. We also plan to provide an API for our solution at some point.
For the input, we pass the model: 1) embedded audio and 2) a single image (encoded with a causal VAE). The model outputs the final RGB video directly.
The key technical unlock was getting the model to generate a video faster than real-time. This allows us to stream video directly to the user. We do this by recursively generating the video, always using the last few frames of the previous output to condition the next output. We have some tricks to make sure the video stays relatively stable even with recursion.
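In pseudocode, the loop looks roughly like this (function names, shapes, and the context length are illustrative assumptions, not our actual implementation):

    CONTEXT_FRAMES = 4  # assumed: how many trailing frames condition the next chunk

    def stream_video(model, image_latent, audio_chunks, chunk_frames=16):
        """image_latent: a single image encoded with a causal VAE.
        audio_chunks: embedded audio segments. Yields RGB frame chunks."""
        context = None  # the first chunk is conditioned on the image alone
        for audio in audio_chunks:
            frames = model(image=image_latent, audio=audio,
                           context=context, num_frames=chunk_frames)
            yield frames  # stream this chunk to the user immediately
            context = frames[-CONTEXT_FRAMES:]  # condition the next chunk on the tail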
Nice find! I hadn't seen this before (and will take a deeper look later). It looks like this is an approach to better utilize GPU memory. We would probably benefit from this to get more of a speed-up, which would also help us get better video quality.
I do not think they are running in real time though. From the website: "Personal RTX 4090 generates at speed 2.5 seconds/frame (unoptimized) or 1.5 seconds/frame (teacache)." At 25 fps, that's 1.5 x 25 = 37.5s to generate 1 second of video - fast for video generation, but way slower than real time.
Yep, this is way slower, but it's considered SOTA among open-source video-gen models.
I mostly meant the insight of using the previous frames to generate new frames - that's what reminded me of it, but I lack knowledge of the specifics of the work.
Glad if it's useful for your work/research to check out the paper.
edit: the real-time-ness of it also has to take into account what HW you are running your model on - obviously easier to pull off on an H100 than a 3090. But these memory optimizations really help make these models usable at all for local stuff, which I think is a great win for overall adoption and for further stuff being built on top of them - a bit like how sd-webui from automatic1111, alongside the open-sourced Stable Diffusion weights, was a boom for image gen a couple of years back.
Nice! What infra do you use for inference? I'm wondering what the cost-effective platforms are for projects like this. GPUs on AWS and Azure are incredibly expensive for personal use.
We use modal (https://modal.com/). They give us GPUs on-demand, which is critical for us so we are only paying for what we are using. Pricing is about $2/hr per GPU (as a baseline of the costs). Long story short, things get VERY expensive quickly.
And yes, exactly. In between each character interaction we need to do speech-to-text, LLM, text-to-speech, and then our video model. All of it happens in a continuously streaming pipeline.
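Simplified, one turn looks something like the sketch below - the four stage functions are injected placeholders, and the real pipeline streams between stages concurrently rather than strictly in sequence:

    def respond(stt, llm, tts, video_model, character, user_audio):
        """stt/llm/tts/video_model are the four pipeline stages as callables;
        video chunks are yielded as soon as they are ready."""
        text = stt(user_audio)                          # speech-to-text
        reply = llm(character["prompt"], text)          # character's response
        for sentence in reply.split(". "):              # coarse chunking for streaming
            audio = tts(character["voice"], sentence)   # text-to-speech
            yield video_model(character["image"], audio)  # talking-head video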
IMO, most video models will be fully real time within 2 years. You will be able to pick a model, imagine any world, and then be fully immersed in it. Walk around any city interacting with people, play first-person shooter games on any map with crazy monsters, or just let the model auto-pilot an adventure for you.
Probably not, but even if so, how much will that cost? There's AI that will take a prompt like "what's the best weed whacker in 2025" and build a whole web page to publish the review. It's great, awesome. $10 in tokens to do that.
And that's probably still a subsidized cost!
Bfw "what is the best weed whacker" is John C. Dvorak's "AI test"
So basically the old open-source Live Portrait hooked up with audio output. It was very glitchy and low-res on my side. Btw: wondering if it's legal to use characters you don't have rights to. (how do you justify possible IP infringement)
One way this differs is in the model architecture. Our approach relies on a single pass of a diffusion transformer (DiT), whereas Live Portrait relies on intermediate representations and multiple distinct modules. Getting a DiT to be real-time was a big part of our work. Quoting the Live Portrait paper: "Diffusion-based portrait animation methods [...] are usually [too] computationally expensive." As you hinted at, we had to compromise on resolution to get there (this demo is 256x256), but we think that will improve over time.
> Wondering if it's legal to use characters you don't have rights to. (how do you justify possible IP infringement)
IP law tends to be "richer party wins". There's going to be a bunch of huge fights over this, as both individual artists and content megacorps are furious about this copyright infringement, but OpenAI and friends will get the "we're a hundred-billion-dollar company, we can buy our own legislation" treatment.
This is fantastic. I was the founder of the 3D Avatar Store, a company that was doing similar things 15 years ago with 3D reconstructions of people. Your platform is what I was trying to build back then, but at the time nobody thought such tech was possible, or they seriously wanted to make porn, and we refused. I'll try reaching out through channels to connect with your team. I come from a feature-film VFX background - Academy Award-quality work - so it would be interesting to discuss. Plus, I've not been idle since the 3D Avatar Store, not at all...
We've been very inspired by interactive character experiences powered by traditional VFX + puppetry (Turtle Talk with Crush is a favorite). I think that sort of interactive entertainment will become more commonplace as tech like ours continues to improve. Looking forward to connecting!
We wouldn't build it ourselves, but there are several companies like Etched, Groq, and Cerebras working on purpose-built hardware for transformer models. Here's more: https://www.etched.com/announcing-etched
Hmm, plug this together with an app that collects photos and chats with a deceased loved one and you have a working Malachim. Might be worth a shot.
Impressive technology - impressive demo! Sadly, the conversation seems a little bit overplayed. Might be worth plugging ChatGPT or some better LLM into the logic section.
Thanks for the feedback. Optimizing for speed meant we had fewer LLMs to choose from. OpenAI had surprisingly high variance in latency, which made it unusable for this demo. I think we could probably do a better job with prompting for some of the characters.
Thanks for trying it out! character.ai has put their model behind a waitlist, so it's hard to compare. As far as I can tell, they don't appear to make any specific claims about speed or interactivity in their press release.
Hey, this looks really cool! I'm wondering - what happens if you feed it something totally different like a Van Gogh painting or anime character? Have you tested any non-photo inputs?
this was overall fun. better than expected. i'm an office fan so tried dwight and michael scott.
i hope you folks get better at this. excited to see where you get in the next 12 months or so. Godspeed!
love the demo video with Andrew. showing the potential as well as the delays and awkwardness of AI is refreshing compared to the heavily edited hype reels that are so common
I spent about 2 hours recording videos with different characters. Of course, the one I made as a joke for myself and never intended to share was the most enjoyable to watch :)
We just removed email signup. You can try it out now without logging in. It was easier than expected to do technically, so we just shipped a quick update.
Glad you like it! IMO, biggest things to improve on are 1) time to video response and 2) being able to generate more complicated videos (2 people talking to each other, a person walking + talking, scene cuts while talking).
I think it's not going well? I keep getting to the "start a new call" page, it fails, and it takes me back to the live page. I assume your servers are on fire, but some messaging ("come back later") would help - or even better, a queueing system ("you're N in line").
Yes. We use Modal (https://modal.com/), and are big fans of them. They are very ergonomic for development, and allow us to request GPU instances on demand. Currently, we are running our real-time model on A100s.
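For reference, requesting an on-demand GPU with Modal looks roughly like this (the handler body is a placeholder, and Modal's API may change - check their docs for the current version):

    import modal

    app = modal.App("realtime-avatar")

    @app.function(gpu="A100")  # Modal provisions the GPU only while this runs
    def generate_chunk(audio_embedding: bytes) -> bytes:
        # placeholder: run the real-time video model here
        return audio_embedding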
I see you are paying $2/h. Shoot me an email at victor ta borg.games if your model would fit on an RTX 3090 24G, to get it down to $0.2/h (fellow startup).
Good question. I guess it depends on how many users we get. Each user gets their own dedicated GPU. Most video generation systems (and AI systems in general) can share GPUs during generation. Since we are real time, we don't do that. So each user minute is a GPU minute. This is the biggest driver of the cost.
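Back-of-the-envelope, at the ~$2/hr GPU rate mentioned upthread:

    gpu_cost_per_hour = 2.00                        # baseline on-demand GPU price
    cost_per_user_minute = gpu_cost_per_hour / 60   # dedicated GPU per user
    print(f"${cost_per_user_minute:.3f} per user-minute")  # ~$0.033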
feels like the next logical step for you to bring economies of scale is to let users generating the video automatically stream it to n platforms, so each gpu can be generating one video feed for many humans to watch simultaneously, with maybe 1 human in the driver's seat on what to generate, or more ai, idk
that's a good idea! Would be especially cool if the human is charismatic and does a good job driving the convo. Maybe we can try it out with a streamer.