Getting to Know Redshift for Cinema 4D

October 6, 2017

In this video, Chad introduces you to Redshift and some of the underlying concepts behind this impressive new GPU renderer.

Learn More about HDRI Link

Check out Redshift Here

  • Thanks for the intro overview. I’ve been hesitating, but as a result of this take, I’m ready to take the plunge.
    Played with demo, but ignorant of various sampling idiosyncrasies, and was disappointed with results.
    Now I understand a little better.
    Thanks ?

  • I meant, thanks! ?

  • Arnold is cooler for me. And Arnold already supports macOS High Sierra 10.13!

  • Thanks this really helped me to understand the unified sampler.
    Starting to meet walls with octane in productions now, so getting eager to try redshift!

  • This was great! It would be awesome if you could do a breakdown of that material on the mech.

  • Thanks Chad for this awesome Redshift tutorial!!!
    Please make more Redshift tutorials in the future!

  • Amazing clarification regarding samples / render settings. I’m in love with RS. Thank you Chad.

  • Chad Ashley,
    During the Twitch cast you have a shortcut setup ( alt W C ) so any node can be viewed via the Surface Output.
How would I set this up?

  • Chad Ashley,
What do you prefer, Arnold or Redshift? If Redshift, is it compatible with Team Render? Tnx

  • Thanks Chad! You mentioned you’d share the link to the model, but I don’t see it. I was really hoping for a link to the project file and wanted to know if there’s a links section I’m overlooking?

    Thanks again for the helpful stream. I’m excited to dive into the mech material.


  • Thank you, very very cool tutorial… Can you please show more tutorials about Redshift?? Thanks!!!

  • Hey Chad,

    I’m prepping for a short film, and I’m trying to decide on a few things relating to the intended VFX workflows for a live-action-to-CG replacement. (A main character is a humanoid android.) Because we’re mastering the film above 4K, I’m planning on building my deformations and textures in ZBrush at an extremely high frequency of detail. I’m starting to learn that I will benefit from using multiple UDIM tiles to bring that detail into the renders at the highest quality. Does Redshift for C4D support this process? If not, would I technically be able to hack it by offsetting the position of multiple textures that are set to not tile?

    I’m my own (entire) VFX team, so I’m hoping to keep the entirety of my pipeline between PFTrack (matchmove & object tracking), ZBrush (model, texture), Cinema (rig, anim, render), and AE (comp), since those are the tools I am most familiar with.

    I’m trying to figure out which render engine is going to serve me best, since I’ll be learning the tools specifically for this project; I haven’t worked with third-party render engines yet.

    Most of the learning available around the net is focused on gaming, and there is very little in the way of motion-picture-centric VFX workflows, especially not centered around an entire project’s pipeline, leaving me, as a self-educated artist, with many unanswered questions about the process.

    Thankfully, this isn’t going into principal photography until early next year, giving me a solid amount of dev time to work things out.

    I would absolutely love to get your thoughts on this.

    • Wow, that’s a helluva delivery size for a short film. Why do they need to deliver at above 4K? This will most certainly affect cost on both VFX and general post. As for Redshift supporting UDIM textures, a quick search in their documentation resulted in this

      You may want to look at Mari for texture creation. It handles large textures with ease. I’ve not used it myself, nor have I ever worked in a pipeline with UDIM textures. Sounds like you have an exciting road ahead of you!

      • Yeah, even though it’s short form, it is also “branded content” and is being shot at 8K. Aside from the ever-popular “future proofing” reason and blowing up to projection prints along with the DI, the client publishes a yearly large-format book and wants to pull stills from the film rather than having “accompanying stills” shot separately, since we’re at the convergence of resolution between stills and motion pictures (thanks, RED!). All of those things (especially the ability to QC the down-sampling from a 4K+ master to the DCP) brought along the desire to approach finishing in this way.

        So perhaps I’ll need to use Mari as an intermediate between ZBrush and C4D. As you mentioned in your recent podcast… time to go learn something just for that one thing I need it for. Definitely an exciting road ahead!

        Thanks for helping me along this one small step.
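For readers curious about the “offset non-tiling textures” workaround discussed above: it works because the UDIM naming convention encodes a UV offset directly in the tile number, counting up from 1001 in rows of ten. A minimal Python sketch of that mapping (the helper name is mine for illustration, not a Redshift or C4D API):

```python
def udim_uv_offset(tile):
    """Map a UDIM tile number (1001, 1002, ...) to its UV-space offset.

    By convention, tiles run left to right in rows of ten:
    1001 covers U 0-1 / V 0-1, 1002 sits one tile to the right,
    and 1011 starts the second row.
    """
    index = tile - 1001
    if index < 0:
        raise ValueError("UDIM tiles start at 1001")
    return index % 10, index // 10

# Tile 1012 sits one tile right and one row up from the origin:
print(udim_uv_offset(1012))  # -> (1, 1)
```

So a non-tiling texture meant for tile 1012 would be manually offset by (1, 1) in UV space, which is exactly the hack described in the question.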

  • Do you guys know… will Redshift not work with an AMD Radeon R9 on a 2015 MBP? I have installed it, but it is not showing up in R16…

    • Hey Jamie, Redshift currently only supports NVIDIA GPUs with CUDA rendering capability. But it looks like they are planning on supporting AMD GPUs in the future.

      From the website –
      “Redshift requires an NVIDIA GPU with CUDA compute capability 2.0 or higher and 2GB VRAM or more.”
      “Support for other GPU platforms (such as AMD GPUs) is planned for the future.”

  • Hey Chad, I don’t see all the IPR options in my RS IPR window that you have in the demo. Am I missing something?

    – RS 2.5.41
    – C4D R19
    – MacBook Pro
    – Titan XP eGPU x 2

    • Hmm… It’s a bit confusing, but there is IPR and Render View. The one I use in my video is the Render View, or “RV”. Have you tried that?

  • Why do you use a dome light to load the HDRI? That takes so long to render #IMHO
    Why can’t you just put the HDR in the Environment?
    That renders so fast with the same quality 🙂

    • I must say that I was skeptical about your speed claim but ultimately found it to be mostly true…BUT only in some cases. Because the Env Map IBL method requires GI, you end up having to use quite a few BF GI Rays to get clean results. This can be faster depending on the complexity of your scene (more areas for light to bounce or get trapped would require even more Rays). Whereas a dome light does not require GI but requires light samples to get a clean image. So if your dome light had enough samples to get a clean result in a scene with no GI, it would require fewer GI Rays to clean up if/when you turned GI on. So there is a bit of a trade-off. I still prefer to use dome lights specifically because of the control they offer and the realistic results they give that do not always require GI to be turned on. Thanks for the comment and giving me something to ponder. Have a great day.
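The trade-off Chad describes comes down to basic Monte Carlo behavior: noise falls off roughly as 1/sqrt(N), so quadrupling samples (whether dome-light samples or GI rays) only halves the visible noise. A toy Python sketch of that behavior (illustrative only, not Redshift’s actual sampler):

```python
import random
import statistics

def render_estimate(n_samples, seed):
    """One 'render': a Monte Carlo estimate of a light's average
    brightness (true mean 0.5, samples drawn uniformly from [0, 1])."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n_samples)) / n_samples

def noise_level(n_samples, trials=200):
    """Spread of the estimate across independent renders -- a stand-in
    for the visible noise at a given sample count."""
    return statistics.stdev(render_estimate(n_samples, s) for s in range(trials))

# Quadrupling the samples roughly halves the noise (error ~ 1/sqrt(N)):
for n in (16, 64, 256):
    print(n, round(noise_level(n), 4))
```

This is why “more samples” always has diminishing returns, and why picking where to spend them (light samples vs. GI rays) matters more than raw counts.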
