Motion Magnification Technology

March 19, 2013

I saw this video recently and I can’t get it out of my mind. Like a lot of technology, it looks like it was originally built for science, but I can see this filtering down pretty quickly. I can imagine some pretty cool things being done with this tech for VFX/Motion Graphics. What is amazing is that it works on existing footage. No need for a special camera. For some reason, it seems like this would be right up Michel Gondry’s wheelhouse.

How could this new technology be applied to VFX or Motion graphics?
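For anyone curious what’s actually going on under the hood: the MIT technique amplifies a temporal bandpass of each pixel’s intensity over time. Here is a minimal Python/NumPy sketch of that core idea — the function name and parameters are illustrative, and this skips the spatial pyramid decomposition the real code uses to control noise:

```python
import numpy as np

def magnify_color(frames, fps, lo=0.8, hi=2.0, alpha=50.0):
    """Amplify subtle periodic color changes in a video (simplified sketch).

    frames: float array of shape (T, H, W) -- each pixel's intensity over time.
    An ideal temporal bandpass (FFT along the time axis) isolates the faint
    periodic signal -- e.g. a pulse in the 0.8-2 Hz band -- which is scaled
    by `alpha` and added back onto the original footage.
    """
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    spectrum = np.fft.rfft(frames, axis=0)
    spectrum[(freqs < lo) | (freqs > hi)] = 0  # keep only the pulse band
    subtle = np.fft.irfft(spectrum, n=frames.shape[0], axis=0)
    return frames + alpha * subtle
```

Feeding in a few seconds of a face crop and amplifying the 0.8–2 Hz band would exaggerate exactly the blood-flow flicker the video demonstrates; no special camera is involved, just the footage itself.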

Posted In:  Ideas Motion Design
40 Comments
  • With mobile devices, I could see this as a way to have the content respond to how you are responding to it. So if your pulse increases, the animation and tempo of the music slow down to calm you. I think in the future we will be creating content that isn’t baked into a final output render, but is dynamic and changes based on the device and viewer.

  • Aw, come on, Nick. This is a hoax that has got a lot of people falling for it.

    Motion tracking individual pixels? Fine.
    Tracking their color shifts? OK.

    But reading from that color shift how fast their blood pulses? No way.
    There is too much interference from light sources around the person.

    I don’t believe this one bit!

    • Well it’s obviously not ‘fake’ since the MIT guys are giving away the source code for free and you can go to their website and actually use it right now.

      How accurate it is at doing what it says is left to be discovered by the community. If you’re skeptical, you can always try it out yourself.

      • You’re right. There’s also the Cardiio app (www.cardiio.com) that is based on MIT technology. I’ve been using it and it works great for me.

    • Then check out the iPhone app “Cardio Buddy” and be surprised

    • How about researching before making an ignorant comment next time.

  • Hi Nick,

    Maybe I can answer that: this new way of filming could be assimilated in the same way as sound design techniques like ‘micro-sampling’… You know, extracting snippets of sounds to create a whole new way of composing music, especially in minimal music.
    Maybe there’s a link to download the code from MIT (as it seems from the YouTube video) so we can have a look at how it works, or how to hack it for other purposes… Besides, the examples could be useful pieces of inspiration for designing tiny motions on simple objects like circles, spheres, cubes and so on…
    Thank you for sharing this interesting resource.

  • Like someone else said, it’s obviously fake. What we can measure for sure with this is whether people are becoming more and more stupid, and the answer is yes.

    • Perhaps you (Micke) are just not nearly bright enough to understand some of the work that the people at MIT have put out…? Would you have called something like the microscope “obviously fake” when it was first introduced to the world? The telescope, the telephone? We could list endless examples of new technologies that fit this same pattern.

    • It’s not fake, but it is hard to get to work. You need to specify a number of parameters and it isn’t entirely clear what the optimal parameters are unless you are familiar with the software. It also only works if you are completely still. Light sources are also a serious problem – if you film yourself sitting in front of a computer screen, you will get artefacts. You can upload your own videos and implement the magnification at: http://videoscope.qrclab.com/vidhome.html

      I suggest a lot of playing around and trial and error.

  • I think some people need to explain more. And review their own concept of ‘obvious’.

  • Mind-blowing! Thanks for sharing.

  • Yeah! It’s fake because math is hard!…

    Err… wait… no…
    Some people are actually smart and can do math.
    SCIENCE!!!!!

    http://videoscope.qrclab.com/

    • No way man! It’s fake because I don’t understand it! LALALALA (covers ears and shuts eyes)

      • No, it’s fake because I _do_ understand… that there is no single “it” that can verify this.

        I get the theory behind it, but if you have worked with photo/lighting/video and so on your whole life, or if you just sit down and think about it for 15 minutes, you realize that this can never be done. Several factors come into play.

        What about video compression errors? What about changes in lighting? Does it work regardless of compression and quality?
        The idea that this would work through thick, heavy studio make-up, for example, is ridiculous. To name but a few “problems” you would run into.

  • Dunno if it’s real or fake, but if it is real, surely it could help with lie detectors? As far as VFX goes, it could be cool for music videos or dream sequences, etc.
    I hope it’s real.

  • GreyscaleGorilla is the best VFX blog out there. Greyscale, could you please look at some After Effects work I did and comment on it? I’ve been working with AE for quite some time, but suck at design (like you’ve mentioned). Here’s a clip from the channel branding I’m doing for my job: https://vimeo.com/59025016

  • Looks interesting. I tried it, and the images I got back were very grainy. I could see this being really cool for creatives in a few years. I wonder if it could eventually be used to interpolate video footage and add ‘artificial’ frames more accurately than our current technology.

    For anybody calling this ‘fake’. It’s a research project by MIT and the research is available online for free at http://videoscope.qrclab.com. You can actually go there / use it / test it yourself.

    • I can go there to test the _effect_, but I can never test nor verify that what they say is true. I don’t doubt the “effect”. It would be like seeing an unidentified flying object and saying I don’t believe in UFOs.

      I believe in UFOs, I just don’t “buy” that they’re from outer space, driven by E.T., even if NASA or MIT says so…

  • I wonder if it could be applied to the higher end of VFX technology, like motion capture. Or to create a plugin that creates some weird effect. Like “Magic Bullet Magic Mini Morph” 😀

  • Seems like it’d be the end of X-ray? Or live analysis.

    I’m curious what it would look like if you backlit the crap out of your hand or finger (E.T.-flashlight-style).

    Kind of a cool look for X-Men-like characters. You could sync your CG stuff to the actor’s actual pulse.

    Future Lance Armstrongs are hating this.

    I could see video applications for it with boxing or MMA… heart rate comparisons, or exaggerating blood pooling when someone is being choked out, could make a cool replay.

  • Wow, that’s cool. I’ve enrolled in a Coursera course about almost exactly that. Here’s the link if you want to check it out; super interesting, by the way: https://www.coursera.org/course/images

  • – A wicked face morph would be cool!
    – It would generate great results if used on macro subjects! o.0 Different insects of different colors, and boom… rapidly changing random colors in one shot.

  • Looks a bit like sped-up slow motion.

  • I’m seeing this used to get all kinds of data while motion tracking people, and using that to help guide UVs, or maybe just mattes in compositing, for anyone aiming at photorealism in a piece.

  • I really think this kind of technology will help us with the uncanny valley. I think one of the main reasons CG things look fake is the micro-movements our eyes can detect but that we don’t consciously pick up on. Being able to add in those micro-movements might help fix that.

  • As a father of two little girls, I can say that I’d have loved to have this technology when they were infants, because they were so still when they slept that I’d start having that “OMG, are they breathing?” thought. I’m wondering how this would translate into motion graphics. It’s like the ultimate Wiggle expression.

  • Wondering if this could be done in After Effects?

  • It seems the code is adjusting the color value of each pixel. This is exactly what AE plugins do, so I think in order to see any usefulness, you would have to be expanding visually on the pixel values changing in some exaggerated way. This might need to be married to some rationale for using the particular effect in your shot in the first place. For instance, I am thinking of PREDATOR: maybe it would be cool to upgrade his inviso-transparency camouflage by showing traces of his heartbeat pulsing through the inviso effect…

  • This looks like the cool stuff RE:Vision FX has been working on, with a different focus obviously. Maybe we’ll see something like this in an upcoming AE plugin 🙂 .

  • This is actually shown in much greater depth in this fxguidetv episode: http://fxguide.com/fxguidetv/fxguidetv-168-cvmp-revealing-the-invisible/

    It’s really fascinating stuff!

  • Apply that shiz to an episode of The Wonder Years to accentuate Kevin Arnold’s face reddening every time he does something embarrassing.

  • Coming soon… an invisible-motion plug-in for After Effects by Red Giant. I can just see it now.

  • I could see this being used to screen people at airports and other points of entry. Get it running in real time and it could be used to pick out people with elevated heart rates for closer screening.

  • Just found this article about measuring pulse with phone cameras and thought it would fit right in.

    http://www.digitalartsonline.co.uk/news/interactive-design/new-tech-takes-your-pulse-with-your-phones-camera/

    “The company aims to commercialize the technology within the next 12 months. It will first be released in Japan, and the company is considering launching it internationally as well but has no solid plans.”

    cheers, tobi

  • Hmm… I don’t think it is fake… But like all good things in science, this technique can also be used negatively.

    Monitoring people’s “micro-reactions” could make us even more vulnerable in our privacy than we already are. Your iPhone, FB, etc. tell anybody where you’ve been and what you’ve done… Maybe people simply don’t want to be read like a book, for example when applying for a job…

  • Hi, I was thinking about the “uncanny valley” when I saw this. Information that the eye cannot see directly, but that the brain can perceive and use as evidence of health… Maybe we have to try to integrate this layer of information into 3D apps, along with a lot of other hidden information, to make the most believable CG humans.

  • Hi Nick. Very cool technology. I wonder if this process of using a vertex map to colorize polygons, described by Yader, could be used to simulate this effect in C4D animations – https://vimeo.com/59268918
