Yeah, it’s been a while since I wrote a simple column about a cool new technology that I’ve come across. But let’s face it — right now, there’s not a tremendous amount of really groundbreaking stuff out there. Now I know that PR directors (or hired guns) all over the Web will probably take major issue with that statement, sure that their technology is The Next Next Big Thing. Or at least that’s what they’ve been paid to say.
Overall, though, the downturn in the new economy seems to have had a dampening effect on innovation. Venture capitalists (VCs) just don’t seem to be funding every dreamer with a Big New Idea like they used to, and previously innovative (if revenue-model-lacking) concepts such as Third Voice are shutting down their consumer operations or going out of business entirely.
It’s made my job tough. You can write only so much about some new minor tweak of a database administration tool, firewall setup, HTML editor, or some other evolutionary advance. Sure, those improvements might get deep geeks jazzed, but they don’t have much of a “Wow!” factor.
Maybe I’m jaded. But no press release or tech demo that’s crossed my vision in the past six months or so has really grabbed me. Until I saw the Anthropics Technology demo.
I had just finished doing my panel discussion thing at a wireless conference and had begun packing up when I noticed a hand thrust my way. Not knowing what else to do, I shook it, and the guy introduced himself, handed me the obligatory biz card, smiled, and shoved a Compaq iPAQ in my face.
I like the direct approach, so I glanced down at his handheld device. (For the uninitiated, an iPAQ sort of looks like a silver PalmPilot with a color screen.) There, on the screen, in fairly smooth video, was Marilyn Monroe talking about some new product. Yep, Marilyn. She smiled at all the right places, pronounced all the words clearly, and seemed to know what she was talking about. It wasn’t her trademark breathy voice coming out of the speakers, but still, the effect was uncanny…
“Do you want to see the Mona Lisa talking?” he then asked, flicking the stylus on the screen. In a flash, the Mona Lisa herself was giving the same product demo, enigmatic smile and all. “How…,” I stammered.
“Check this out,” the guy said, flicking at the screen again and bringing up a fairly normal-looking woman giving the same pitch. And then it dawned on me. This was the original video, and it was being used to drive the Marilyn and Mona Lisa renditions.
Pretty cool stuff.
How does it work? First, the Anthropics folks take a video of a person (any person) talking. Their software then records the subject’s “personality” by keying in on facial elements such as eyes, mouth, cheekbones, and eyebrows. This personality recording can then be mapped directly to a still image, animating that image with the facial expressions of the original subject.
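Anthropics hasn’t published the internals of its system, but the column’s description suggests the general shape of the technique: record how key facial landmarks move relative to a neutral pose, then replay those movements on the landmarks of any still image. Here’s a toy sketch of that idea; every name and number in it is hypothetical, not anything from Anthropics.

```python
# Toy sketch of "personality" mapping as described above. All data
# structures are hypothetical; the real system is proprietary. The
# recording captures, per video frame, how facial landmarks (eyes,
# mouth, eyebrows) move relative to the subject's neutral pose;
# playback applies those same offsets to any still image's landmarks.

# One frame of a recorded "personality": per-landmark (dx, dy) offsets
# from neutral -- here, a smile raising the mouth and brows slightly.
smile_frame = {"mouth": (0, -3), "left_brow": (0, -1), "right_brow": (0, -1)}

def apply_frame(still_landmarks, frame_offsets):
    """Map one recorded expression frame onto a still image's landmarks."""
    animated = {}
    for name, (x, y) in still_landmarks.items():
        dx, dy = frame_offsets.get(name, (0, 0))  # unlisted landmarks stay put
        animated[name] = (x + dx, y + dy)
    return animated

# Animate a different face (say, the Mona Lisa) with the recorded frame.
mona_lisa = {"left_eye": (33, 45), "right_eye": (67, 45),
             "mouth": (50, 80), "left_brow": (31, 34), "right_brow": (69, 34)}
print(apply_frame(mona_lisa, smile_frame)["mouth"])  # → (50, 77)
```

The point of the sketch is that the recording is image-independent: the same stream of offsets can drive Marilyn, the Mona Lisa, or the original presenter, which matches what the demo showed.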
But that’s not all. To show the final product, Anthropics has developed a proprietary Java technology called Synthactor that streams the content to just about any device — including wireless devices, such as personal digital assistants (PDAs) and cell phones — that can handle Java. Since most of the image animation takes place on the client device, the frame rate is determined not by the bandwidth of the signal but by the processing power of the receiving device. Faster chips mean better frame rates. Slow connections don’t affect the frame rate at all.
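A quick back-of-the-envelope model shows why this architecture decouples frame rate from bandwidth: the stream carries only compact per-frame expression parameters, while the expensive work of warping the still image runs on the device. The figures below are made-up illustrations, not Anthropics’ actual numbers, and the min-of-two-limits model is my own simplification.

```python
# Illustration (with invented numbers) of why client-side rendering makes
# the device's CPU, not the connection, the bottleneck. Each frame of the
# stream is assumed to be a handful of landmark offsets -- tiny compared
# with video.

PARAMS_PER_FRAME_BYTES = 60  # hypothetical size of one frame's parameters

def max_fps(link_bps, render_ms_per_frame):
    """Frame rate is capped by whichever is slower: link or renderer."""
    link_fps = link_bps / 8 / PARAMS_PER_FRAME_BYTES
    cpu_fps = 1000 / render_ms_per_frame
    return min(link_fps, cpu_fps)

# Even a slow 9.6 kbps wireless link delivers ~20 parameter frames/sec,
# so on period hardware the handheld's processor sets the frame rate.
print(round(max_fps(9_600, 100), 1))  # modest PDA CPU: 10.0 fps
print(round(max_fps(9_600, 66), 1))   # faster chip, same link: 15.2 fps
```

Under these assumed numbers, swapping in a faster processor raises the frame rate while the connection stays the same, which is exactly the behavior the paragraph describes.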
This thing is pretty darn cool and has a lot of potential for driving things such as customer service applications, online newscasts, and other entertainment offerings. Increasingly, it’s starting to look like those of us in the U.S. will be waiting awhile until broadband wireless services are widely deployed. Until then, products such as Anthropics’ will do a good job of showing us what the future may look like.