Whether it will look better in HD depends on how good the original is. Film is not the same as video, so you can’t say Super 8 is the same size as SD. Video is made up of square dots, so we can easily put numbers on it. Film is particles formed from light hitting chemicals, so it is pretty random – hence the “grain”. Obviously a bigger film size will have more info than a smaller one – 16mm has more info than 8mm, etc (pretty obvious).
The idea of scanning in higher quality is really a “just in case” thing. If there is more info on the film than video can store in 720×576 pixels, then if you scan it at higher quality, you will get the extra info. I would personally always scan at the best quality and then downsize it afterwards as the scanning is the tedious bit which also risks the film. Once you have a 2K master you can decide whether to make an HD, SD or 2K file out of it.
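To put rough numbers on the “extra info” idea, here is a quick sketch. The 2K frame size used is one common film-scan size and is an assumption – your scanner’s exact sizes may differ:

```python
# Rough pixel counts for common frame sizes, to show how much more
# room a 2K master has than an SD one.
sizes = {
    "SD (PAL)": (720, 576),
    "HD": (1920, 1080),
    "2K scan": (2048, 1556),  # a common 2K film-scan size (assumption)
}

for name, (w, h) in sizes.items():
    print(f"{name}: {w} x {h} = {w * h:,} pixels")

sd_pixels = 720 * 576
scan_pixels = 2048 * 1556
ratio = scan_pixels / sd_pixels
print(f"A 2K scan holds about {ratio:.1f}x the pixels of SD")
```

So if the film really does hold more detail than SD can store, a 2K scan has the room to keep it.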
Presumably you just send customers MP4 files these days? If you say you can send them an HD file it “sounds better” than an SD one to me, even if it looks the same. They won’t know either as they will only see the final thing. If making a DVD for them it may be different, but I would still just do the higher quality on principle and then decide what to do with it afterwards.
The other question is the format in which you save it. Again, on principle I would use the best you can as long as it doesn’t cause you problems.
The best would be two things:
- Not compressed, or compressed in such a way that nothing has been lost (like a zip file)
- Have lots of colour info – 24 bit rather than 16 bit or 8 bit.
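For what those bit depths mean in colour terms, a minimal sketch – here “24 bit” is taken as 8 bits per channel of R, G and B, which is the usual convention:

```python
# Number of distinct colours per bit depth. "24 bit" normally means
# 8 bits for each of the R, G and B channels; "10 bit" video means
# 10 bits per channel.
def colours(bits_per_channel):
    levels = 2 ** bits_per_channel  # shades available per channel
    return levels ** 3              # all R/G/B combinations

for bpc in (8, 10):
    print(f"{bpc} bits per channel: {colours(bpc):,} colours")
```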
An AVI or MOV can be just as good as a sequence of images – it depends on how the AVI or MOV is saved. If it is 24 bit uncompressed it is the same as images which are 24 bit uncompressed – just easier to use since it’s one video clip, not thousands of images.
You can load an image sequence with 1000s of images into EDIUS, but it takes ages and makes the project longer to open every time you do so. That’s why I convert the images to an AVI after loading. If my images are 24 bit uncompressed and I convert them to Grass Valley HQX – which is 10 bit and compressed – then I do lose something. However, I do it because the amount I lose is not much, and the gain is a file that is easier to work with.
Grass Valley have a format called “lossless” which loses nothing, as its name suggests, although the colour does go from 24 bit to 8 bit or 10 bit. It still loses less than EDIUS HQX. I do not use lossless much because only EDIUS understands it, so it is no good if I am going to other editing or compositing programs.
A lot of the time you cannot see the difference between 24 bit, 10 bit and 8 bit – we all watch things on 8 bit screens anyway! When you do notice it is when you try to change it – brighten it up or adjust the colour – or do things like chromakeying. A higher bit depth and less compression will give cleaner edges and more detail in shadows. Or if you make the shadows lighter you are a lot less likely to suddenly see crap in the shadows caused by compression.
So will a series of PNGs or bitmaps be better than an AVI or MOV? It depends on the format of the AVI or MOV and the bit depth. Will you notice the difference? Very hard to tell, as it depends on how much you do with it. In your case, maybe not. In the case of a feature film where they change it, grade it, add effects and fiddle it to death, and it is finally shown on a 30ft screen, probably.
So in principle work with the best you can “just in case”. It is really annoying to brighten up some shadows only to get noise which you then need to fix, or noticeable bands of colour (which happen when there is not as much colour info in the first place) which would not be there if it was uncompressed 24 bit. How do you know that what you see is caused by the format and not something else? You probably don’t, unless you can do the same thing on an uncompressed version and see that it works better. However, if you work on an uncompressed version you know you have not made it worse by compressing it.
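Colour banding is easy to demonstrate: quantise a smooth ramp of brightness values down to fewer levels and neighbouring values collapse into flat steps. A toy sketch with single pixel values, not a real image:

```python
# A smooth brightness ramp quantised to 3 bits (8 levels): runs of
# different values collapse into identical ones, which on screen
# shows up as visible bands instead of a smooth gradient.
def quantise(value, bits):
    step = 256 // (2 ** bits)   # size of each quantisation step
    return (value // step) * step

gradient = list(range(0, 64, 4))             # part of a smooth 8-bit ramp
banded = [quantise(v, 3) for v in gradient]  # the same ramp at 3 bits
print(gradient)
print(banded)  # long flat runs where the gradient used to climb
```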
The reason we don’t work on uncompressed 24bit all the time is that it can be a pain in the neck. It takes up lots of space, you probably cannot run 24bit 2K off a normal drive, and thousands of still images is a pain to manage.
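To see why a normal drive struggles, work out the data rate – using one common 2K scan size and 24fps, both assumptions:

```python
# Data rate of uncompressed 24 bit 2K: width x height x 3 bytes per
# pixel, times frames per second.
w, h = 2048, 1556          # one common 2K film-scan size (assumption)
bytes_per_pixel = 3        # 24 bit = 3 bytes of R, G and B
fps = 24

frame_bytes = w * h * bytes_per_pixel
rate_mb_per_s = frame_bytes * fps / 1_000_000
print(f"One frame: {frame_bytes / 1_000_000:.1f} MB")
print(f"Data rate: {rate_mb_per_s:.0f} MB/s")
```

That is around 230 MB/s sustained – more than an ordinary spinning hard drive will reliably deliver.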
BMP vs PNG
These probably both give the same quality. I use PNGs because I do a lot of animation and they can save an alpha channel (transparency) as well as the picture, which makes it easier to merge with real video. I have started using a format called EXR for my 3D stuff now as it saves more things – like the colour, shadows, lights and movement info as well as just the picture. EDIUS won’t load these at all, and even if it did it wouldn’t do anything with the extra info. It does make much bigger files, which is a pain.
As your stuff won’t have transparency either BMP or PNG will do.
Slight difference in saving a still from an EDIUS timeline
As a light aside, I have noticed that if I take a still in EDIUS (or Resolve) and then bring it back and put it on the video, the brightness levels are different to the original video – when you would think it should be exactly the same. If I do a TIFF the still is the same as the video. This is to do with how EDIUS and Resolve choose to interpret levels with different types of images. It happens if I use PNG or BMP, so I have now started saving stills from EDIUS as TIFF. In your case it won’t matter which you use. This is only something that pops up when saving a still from EDIUS and I might mention it in a future tutorial on the web.
Most image formats will be the same quality – some do higher bit depths than others, but most are uncompressed or use lossless forms of compression. JPEG is the one that can actually make stuff worse. At its highest settings there is barely any difference, but at lower ones you get wiggly lines and other crap caused by the compression. However, the file is a lot smaller, which is why it is used for web images and phones. Using JPEG as a format can lose a lot more than going from 10 bit to 8 bit.
What about sound?
If the cine film has sound, what happens to that if you make BMPs or PNGs? If you do an AVI it is all in one file and in sync.
What about the frame rate?
What frame rate video clips does your scanner make? Cine would be 18fps or 24fps. Bring a bunch of stills into EDIUS and it will automatically make the clip the same rate as the project – so an 18fps clip will run at 25fps – i.e. too fast, and it will not match the sound. You can change this in the properties but it is still something to look out for.
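The speed-up is simple to work out (the 10 minute reel length is just an example):

```python
# An 18fps clip played back at a 25fps project rate, with no frame
# rate conversion: every frame is shown, just faster.
shot_fps = 18.0
project_fps = 25.0

speedup = project_fps / shot_fps
reel_minutes = 10.0                     # example reel length (assumption)
played_minutes = reel_minutes / speedup
print(f"Runs {speedup:.2f}x too fast")
print(f"A {reel_minutes:.0f} minute reel plays in {played_minutes:.1f} minutes")
```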
How do you currently change the frame rate?
EDIUS will use frame interpolation to change from 18 or 24fps to the project frame rate. Nearest neighbour will be the sharpest but jerky, frame blending will be more blurry but smoother, and most of the time optical flow will do the best job but may create odd stuff occasionally. I do have other tools to do stuff like this – such as Topaz Video AI – which I would be interested to try on my own footage (if my footage is still usable), hence my other email saying if you send me just the raw stuff I will fiddle.
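As a rough illustration of the first two methods – toy single-pixel brightness values, since real tools work on whole frames, and optical flow is far too involved to sketch here:

```python
# Two simple ways to make an in-between frame at time t (measured in
# source frame units). Nearest neighbour repeats a frame; blending
# mixes the two neighbours.
def nearest_neighbour(frames, t):
    # snap to the closest source frame: sharp but can look jerky
    return frames[min(round(t), len(frames) - 1)]

def frame_blend(frames, t):
    # weighted mix of the two neighbouring frames: smoother but softer
    a = int(t)
    b = min(a + 1, len(frames) - 1)
    mix = t - a
    return frames[a] * (1 - mix) + frames[b] * mix

frames = [0, 100, 200]                 # brightness of three source frames
print(nearest_neighbour(frames, 0.4))  # snaps back to frame 0
print(frame_blend(frames, 0.4))        # part way between frames 0 and 1
```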
Compression of video
There are lots of ways of compressing video and most are like JPEG – i.e. they make the image worse. This is because video takes up a lot of space. So they invented hundreds of different formats, all with good things and bad things. Some are pretty similar – ProRes is pretty much the same as Grass Valley HQ, and similar to Avid DNX. GV had white papers on why GV HQ was better and cleverer, so it made smaller files with the same info. Apart from looking at them it is pretty hard to tell which is better.
One way to compare clips is to take a clip made in two different ways – say uncompressed and then an MP4 made from the uncompressed clip. Put the MP4 on top of the uncompressed clip in EDIUS and then put a “difference” blend mode on the MP4. If the result is black they are the same. If the result has lots of random dots or lines (I normally see glowing edges) then the MP4 has lost something compared to the uncompressed.
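In miniature, the difference test is just per-pixel subtraction – made-up pixel values here, not real frames:

```python
# The "difference" blend in one line: subtract the two versions pixel
# by pixel. All zeros means identical; anything else is what the
# compressed copy changed.
uncompressed = [10, 50, 200, 128]
compressed   = [10, 52, 198, 128]  # a couple of values nudged by compression

difference = [abs(a - b) for a, b in zip(uncompressed, compressed)]
print(difference)
print("identical" if not any(difference) else "compression changed something")
```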
This won’t work if you do one film scan as pictures and another scan as a MOV, as I assume there will always be slight differences between scans because no equipment is perfect. But if you scanned as images and then made an HQ or an MP4 from them, you could see the differences.
You might say “if I have to stick an effect on it to see the difference, does it matter?”. The answer comes back to what you are going to do next. If you are colour correcting, keying or brightening up, those differences may cause you a problem. They may even cause some extra nonsense when making an MP4 (although probably not). This is why you stick to the best quality where it does not cause a pain.
Anyway that’s a very long explanation. I may use this as a thing on the website or a video as I get asked about it a lot. Hope it explains stuff a bit.