There are many render engines out there nowadays and it’s not easy to choose the right one. CPU or GPU, biased or unbiased. In this video we discuss the pros and cons of different render engines from our humble point of view. As a bonus we talk about LUTs and view transforms.
Nerd Rant: Render Engines
Part of: Nerd Rant
Nice discussion about the pros and cons of GPU rendering and Mantra. It’s also nice to hear someone speaking about the need for proper diffusion of images to get away from the crystal-clear sharpness of modern renderings.

What I find most underrated as a topic is that nobody is really talking about OCIO – especially the Academy Color Encoding System (ACES). It helps so much to reduce the amount of work when you mix footage from different real cameras, and it helps a thousand times more when you mix CGI and real photography via VFX compositing. I believe it’s absolutely essential for perfect delivery of modern CGI/VFX images today, but only a small number of people are talking about it – about deep blacks and deep whites.

The difference between an ACES image delivered in 10 bit and one delivered in 8 bit is big. I believe what I’ve learned in the color grading communities: 8-bit flat looks are a thing of the past. Not only because modern TVs and smartphones already display 10-bit (HDR) content, but also because the most influential artists and people around the world are producing selfies and “high-end” filmic content recorded on modern smartphones or HDR cameras. This has a deep impact on how we look at photos and films today and in the future. A lot of people still render at 8 bit or deliver 8-bit content in a flat log-color look, which is not the future anymore: Instagram, YouTube and Vimeo have introduced 10-bit video, the difference is really big, and Netflix is distributing 10-bit codecs too.

So maybe you guys can talk about that someday, because I find it very important for artists and small studios, and it would introduce ACES to the majority of Houdini users. It will play a much, much bigger role in the months and years to come. Cheers
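The 8-bit vs 10-bit point is easy to demonstrate with a toy calculation (a hypothetical sketch in plain Python, not tied to ACES or any particular codec): count how many distinct code values each bit depth has available in the bottom of the tonal range, where banding in deep blacks shows up first.

```python
def quantize(value, bits):
    """Quantize a normalized [0, 1] value to the nearest code value at the given bit depth."""
    levels = (1 << bits) - 1          # 255 for 8-bit, 1023 for 10-bit
    return round(value * levels) / levels

# Sample a smooth gradient over the darkest 10% of the range (0.0 to 0.1)
# and see how many distinct code values survive quantization.
gradient = [i / 10000 for i in range(1001)]
codes_8 = {quantize(v, 8) for v in gradient}
codes_10 = {quantize(v, 10) for v in gradient}

print(len(codes_8), len(codes_10))    # prints: 27 103
```

Roughly 4x the code values at 10 bit to describe the same shadow region, which is why subtle gradients that band visibly at 8 bit stay smooth at 10 bit.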
Which director were you talking about at 19:49, the one who used maximum diffusion, “70s soft porn”? Fascinating part of your discussion. Thanks!
Now that you’ve admitted in a public video that Mantra is slow, SideFX won’t invite you to their conferences anymore, out of vengeance hahaha :p
Rendering is a hard topic with a lot of pros and cons. The main question is always: can you produce the desired quality on time?
It would be interesting if H17 has some updates that reduce render times significantly. We will see in two weeks…
BTW, has anybody tried this feature in the Mantra ROP? (https://www.sidefx.com/docs/houdini/render/batch.html#simple-network-rendering)
Really nice to hear all this information about different render engines. I’d love to know more about Arnold: where does Arnold stand compared to Mantra, since they are (at the moment) both CPU engines? I understood Arnold is faster than Mantra for volumes, for example. Do you agree?
Yes, Arnold is very fast with volumes and massive amounts of geometry. It’s the fastest CPU renderer I’ve used, and the visual results are insanely good. We use it with Houdini in our architecture studio, and the higher-ups are shocked by the quality. The previews of Arnold GPU that I saw at SIGGRAPH looked solid. There are just some changes NVIDIA needs to make to the OptiX API to facilitate the very complex shader graphs that artists tend to build in Arnold.
Overall, I highly recommend Arnold over all other renderers.
Tried Arnold. It wasn’t for me. All I can say about it 🙂 Cheers, Mo
Great talk! When it comes to diffusion, I think there is a change going on right now. Peter Jackson’s Hobbit was far ahead of its time with its 48 fps high frame rate, and people complained it looked like a soap opera. But it actually does look more real and less like film. We will not hold on to the classic analogue filmic look for much longer when we can have images that look more like reality: less blurry, sharper, with higher dynamic range (the human eye can do what, like 20 f-stops?). Film always wanted to look more like reality, but that push has stalled, and with our new digital technology we try to look like old film instead. It’s just like when we built synthesizers to emulate the sound of bells or church organs before we made crazier, unparalleled sounds. We are getting accustomed to a high-frame-rate look through video games, YouTube and flat-screen TVs (every flat-screen TV’s default setting nowadays kills all motion blur with that pixel-perfect-motion thingy).
Now the question for us artists is: are we taking part in keeping this old stuff alive, or are we at the forefront, pushing a new look?
I’m all for more vibrant colors and better contrast, but when it comes to “killing that motion blur”, like you mentioned Michael, I think that’s a move in the wrong direction. The human eye doesn’t see every detail of moving objects, so why show them on TV? Motion blur is natural to us, and when I see every bit of detail in a fast action scene I get a bad feeling about it. It seems very artificial to me. Even though I can see more, I feel that something is lacking.
Concerning renderers, I have experience with V-Ray and Redshift. I would suggest using V-Ray as a CPU renderer only, especially when scenes get heavy, over 40 million polygons. I used it mainly in 3ds Max, and lately I tried it in Houdini as a beta. When it comes down to Houdini, I would suggest getting Redshift and not even bothering with other engines. Not only do you get hyper-fast results, but displacements render amazingly quickly (compared to V-Ray RT, which crashes when you only start to think about it). Translating heavy scenes to the GPU with V-Ray RT is a pain; it defeats the purpose of using even 4 GPUs. I found that 2 Xeons are much faster at rendering a longer animation of a complex scene than 4x 1080 Ti, just because the geometry transfer takes too long every frame. And there is always a GPU RAM limit with V-Ray RT, which is nonexistent in Redshift.
V-Ray in Maya or 3ds Max is able to produce great results, and I always liked the engine, but I was mostly using it on the CPU. Now I render with Redshift, not only because it gives me great results fast and can use main RAM, but also because I can use it with all the software I own and pay for the license once.
Moritz, Manuel, it’s about time you both check out the new 3Delight NSI!
It’s not available in Houdini (yet), but the integration in Maya and Katana is already done. Expect unbeatable quality (the very best sampling in the industry, by far), performance (simply the fastest renderer to render shots), and the most intuitive workflow (tiny parameterization and none of that artist-unfriendly bullshit). Oh, and did I mention the 3Delight NSI cloud? Gentlemen: fasten your seatbelts.
They’ve started work on the Houdini branch.
That spider though, that surely wasn’t rendered hahaha!
You need to look at 3Delight’s new NSI renderer. The fastest thing out there by far, way ahead of the others tech-wise. Maybe the SPI branch of Arnold is the closest, but it’s still a ways off.
Did you buy your sofa from IKEA?
I too would like more info on ACES.
Interesting to see that MacPro under the desk. I have one…loved and used the hell out of it for 8 years. But now it sits in a box. Switched to Windows.