  • joshknatt

Refining colour and 2D Tracking: Deeper skills development

Updated: Apr 20, 2022



This week we began to look at more specific tools in Nuke: correcting colour on individual layers of an image with a Shuffle node, and "flattening" an image through the Lens Distortion node and the creation of ST maps.


Deeper Colour Correction – Shuffle Nodes and Nuke's Wipe

Nuke offers a large array of nodes that can be used for specific colour correction across a range of channels. Before this can be done, however, it is important to ensure that the image being worked with has an alpha channel – this can be created using a Shuffle node, which has an array of functions for manipulating specific layers of an image.


If an image file doesn’t have a working alpha, the Shuffle node can be used to create one; applying a Premult node after the Shuffle then engages the alpha properly. This lets you do more than just reduce the image’s opacity – it is especially useful when you need to remove a background so the image can be placed in the scene and look realistic. It is also possible to use a Shuffle node to layer a separate alpha into an image directly.
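As a rough sketch of what the Premult step does under the hood (plain Python, not the Nuke API; the pixel values are made up for illustration), premultiplication simply scales each RGB channel by the alpha, so fully transparent pixels go to black:

```python
# Premultiplication: after shuffling an alpha into place, the premult step
# multiplies each RGB channel by the alpha so the alpha takes effect.
def premult(rgb, alpha):
    """Return the premultiplied RGB triple for one pixel."""
    r, g, b = rgb
    return (r * alpha, g * alpha, b * alpha)

# A half-transparent red pixel: the colour is scaled down by the alpha.
print(premult((1.0, 0.0, 0.0), 0.5))  # -> (0.5, 0.0, 0.0)
# A fully transparent pixel goes to black.
print(premult((0.2, 0.4, 0.6), 0.0))  # -> (0.0, 0.0, 0.0)
```

This is why an image without a working alpha can’t be composited cleanly over a background: without the premultiply, transparent regions still carry colour.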



When exploring the same scene from different angles, it is also important to ensure that both shots have the same colouring, otherwise a series of shots will look as though they do not flow. To that end, use patch references of the white and black points: a Grade node can correct the colours in a scene by sampling the original’s white and black points, then pulling the gain and lift from the reference.



Retiming and Time Offsets


When dealing with a sequence (video file) in Nuke, it is easier to render the footage as a series of image files rather than a single video file. This matters for a range of reasons; in some additional research I stumbled across a Reddit thread (not a hugely reliable source most of the time, but this thread had lots of good points from people who seemed to understand Nuke workflow processes – https://www.reddit.com/r/NukeVFX/comments/f4wg1x/a_lot_of_nuke_artists_advice_to_work_with_image/) that highlighted two of the main reasons:

1. File size – video files are often far larger, which increases the risk of the comp crashing when rendering a completed composition. Image sequences also make it easier for anyone working on the comp to access the files without the loading and rendering process being too heavy on the system’s RAM.

2. Errors and crashes in the final render – file size aside, another major reason is that when a file is rendered out for post-production and there are issues with specific frames, an image sequence lets you re-render just those frames, saving time in the final render should something go awry.
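One practical upside of point 2: because each frame is its own file, a quick script can report which frames failed so only those need re-rendering. A minimal sketch (the `shot.###.jpg` naming pattern here is a made-up example):

```python
import re

# Given the filenames present on disk, report which frame numbers are
# missing from the expected range, so only those need re-rendering.
def missing_frames(filenames, first, last, pattern=r"shot\.(\d+)\.jpg"):
    rendered = {int(m.group(1)) for f in filenames
                if (m := re.fullmatch(pattern, f))}
    return sorted(set(range(first, last + 1)) - rendered)

# Frame 3 crashed during the render and never got written out.
files = ["shot.001.jpg", "shot.002.jpg", "shot.004.jpg"]
print(missing_frames(files, 1, 4))  # -> [3]
```

With a single video file, a corrupt frame would mean re-rendering the whole thing instead.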


When rendering a piece of video out as a series of high-quality image files (for this unit and my final project these will likely be HQ JPEGs), there are two nodes available for controlling the frame range:

1. The TimeOffset node – used to set a specific start frame (useful if you have some unwanted frames at the start of a sequence)



2. The Retime node – used to take a specific set of frames (the input range) and map them onto a range of the same length starting at frame 1 (the output range); the output range must match the input range’s number of frames.
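Both nodes above can be thought of as simple frame-number mappings. A sketch of the arithmetic (plain Python, not the Nuke API):

```python
# TimeOffset: shifts every frame by a fixed amount, e.g. so the sequence
# starts at frame 1 instead of some unwanted earlier frame.
def time_offset(frame, offset):
    return frame + offset

# Retime: maps an input range onto an output range starting at frame 1.
# As noted above, the output range must hold the same number of frames.
def retime(frame, in_first, in_last):
    out_last = in_last - in_first + 1      # output range is 1..out_last
    return frame - in_first + 1, out_last

print(time_offset(25, -24))   # frame 25 becomes frame 1
print(retime(120, 101, 200))  # frame 120 of input 101-200 -> (20, 100)
```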



Once you’ve used these nodes to capture the required frames, you then simply redirect the file path on the Write node to the image sequence (ensuring it is set to [sequence name].###.jpg) for the frames to be brought into Nuke. Alternatively, you can drag in the folder containing the image files, making sure to set the file path to a relative path.
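The `###` in the Write node’s file path is frame-padding syntax: each `#` stands for one digit of the zero-padded frame number. A quick sketch of how that expands (the sequence name `plate` is a made-up example):

```python
# Expand '#'-style frame padding into a concrete filename for a frame.
def expand_padding(path, frame):
    pad = path.count("#")                      # number of digits to pad to
    return path.replace("#" * pad, str(frame).zfill(pad))

print(expand_padding("plate.###.jpg", 7))    # -> plate.007.jpg
print(expand_padding("plate.###.jpg", 125))  # -> plate.125.jpg
```

Consistent zero-padding is what lets Nuke recognise the files as one sequence rather than unrelated images.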

2D Match Moving – Lens Distortion and ST Maps


When working with live-action footage, there will always be lens distortion (LD) applied due to the focal length of the camera being used. This LD adds a “rounding” to the footage based on that focal length.

To “flatten” out the footage, it is important to compensate for this LD by using the Lens Distortion node to create an ST map that can be applied to the footage.


To do this effectively, you need to know certain information about the camera (namely the make, the model and, chiefly, the focal length). This ensures that you can apply the correct lens distortion grid to appropriately “fix” the distortion applied to the footage.
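For intuition, the “rounding” is commonly modelled as radial distortion: each point is pushed towards or away from the optical centre as a polynomial function of its distance from it. A toy sketch of that model (the coefficient value is made up; real lens grids calibrate these numbers for the specific camera and focal length):

```python
# Radial lens distortion: a point at radius r from the optical centre is
# moved to r * (1 + k1*r^2 + k2*r^4). Negative k1 gives barrel distortion,
# which is the "rounding" typical of wide focal lengths.
def distort(x, y, k1, k2=0.0):
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# The optical centre is unmoved; points further out move more.
print(distort(0.0, 0.0, -0.1))  # -> (0.0, 0.0)
print(distort(0.5, 0.0, -0.1))  # pulled slightly inward (barrel)
```

Undistorting is the inverse of this mapping, which is exactly what the ST map stores per pixel.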



When setting up the ST map using the Lens Distortion node, there will most likely be a need to complete the track lines on the grid using the add-lines and features functions – this is important, as the points need to be plotted as accurately as possible to ensure the image is as “flat” as possible.


Once this has been done, the ST map’s settings need to be configured and it will need to be exported as an .exr file so that it works correctly with the footage.
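An ST map is just an image whose red and green channels store, for each output pixel, the normalised (s, t) position to sample from in the source. A toy sketch of applying one, using nearest-neighbour lookup on a tiny made-up image (real applications interpolate, which is one reason the map is exported as float .exr rather than 8-bit):

```python
# Apply an ST map: for every output pixel, read its (s, t) coordinates
# from the map and fetch the source pixel at that normalised position.
def apply_st_map(src, st_map):
    h, w = len(src), len(src[0])
    out = []
    for row in st_map:
        out_row = []
        for s, t in row:
            # s, t are normalised 0-1 coordinates into the source image.
            x = min(int(s * w), w - 1)
            y = min(int(t * h), h - 1)
            out_row.append(src[y][x])
        out.append(out_row)
    return out

# An identity-style map on a 2x2 image: each pixel looks itself up.
src = [[1, 2],
       [3, 4]]
st = [[(0.0, 0.0), (0.5, 0.0)],
      [(0.0, 0.5), (0.5, 0.5)]]
print(apply_st_map(src, st))  # -> [[1, 2], [3, 4]]
```

A real undistort map would store coordinates that pull each pixel back from its distorted position, so applying the map flattens the footage.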



Following the addition of an ST map into the workflow, adding an Undistort node ensures that the footage you are using no longer has LD applied to it. This will make the 3D camera tracking and Maya work more accurate and ensure that all assets are properly in position – which I will be looking at next week!



Further reading and research

To further my understanding of some of this week’s themes, I’ve already mentioned the less-than-reliable Reddit post I looked at to deepen my understanding of why we write video footage out as a series of images (available here – https://www.reddit.com/r/NukeVFX/comments/f4wg1x/a_lot_of_nuke_artists_advice_to_work_with_image/).


However, I wanted to do some more research into correcting lens distortion and ways to make sure you get the best possible outcomes, so I watched the following two YouTube tutorials, which gave a bit more information on the lens distortion process (https://www.youtube.com/watch?v=5QN356F0BsI) and on improvements to the workflow (https://www.youtube.com/watch?v=-bBPRPmL7ZA).


The Foundry video showed me how I can use both the grid and line drawing in a scene to improve the overall undistort quality: https://www.youtube.com/watch?v=0jKA9porVoA


Following these, I then viewed a video that went into further depth on how to change lens distortion in a sequence, including how to add points to alter the bounding box and overscan rendering for Maya.
